49,301
Anomaly detection based on clustering
ELKI includes a class called KMeansOutlierDetection (and many more). But of all the methods that I have tried, this one worked worst: even on easy, artificial data it doesn't work well, except for the trivial outliers that literally any method will detect. The problem with cluster-based outlier detection is that you need a really good clustering result for it to work. On this data set, k-means does not work too well (the colors shown are not k-means clusters). Because k-means did not work well here, you get false outliers along the bad cuts that k-means made.

Even worse, k-means is itself sensitive to outliers, so when you have lots of outliers it tends to produce really bad results. You would want to first remove the outliers and then run k-means, not the other way round! You will also end up with lots of "outliers" at the borders between clusters; and if the clusters are not good, those borders may well lie in the very middle of the data!
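For intuition, here is a minimal R sketch of the cluster-based scoring idea (my own illustration, not ELKI's code): cluster with k-means and score each point by its distance to its assigned centroid, then flag the highest-scoring points. The data, k, and the cutoff are made up.

set.seed(1)
X <- rbind(matrix(rnorm(200), ncol = 2),
           matrix(rnorm(20, mean = 4), ncol = 2))      # two toy clusters
km <- kmeans(X, centers = 2, nstart = 10)
d <- sqrt(rowSums((X - km$centers[km$cluster, ])^2))   # distance to own centroid
head(order(d, decreasing = TRUE), 5)                   # indices of the top-scored "outliers"

The sketch inherits exactly the weaknesses described above: if the clustering is poor, the largest distances need not correspond to real anomalies.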
49,302
How to calculate probability of observing a value given a permutation distribution?
This is quite straightforward; there is no need to infer a distribution under the null hypothesis. Your p-value is just the number of times $x_{permuted}$ is greater than or equal to $0.5$, divided by the number of permutations made. This fits the definition of the p-value: "If H0 is true and a new sample is drawn, what is the probability of getting results at least this extreme?" Maybe you mixed things up with the bootstrap procedure, which is generally used to estimate the "real" distribution of your statistic of interest. I'm not saying your approach is completely wrong: if your distribution looks normal, you could possibly do a z-test, and it should give a fairly reliable p-value. But the spirit of the permutation test is simply to count how often you get equal or more extreme results, because you have direct access to that count, whatever the real distribution of $x_{permuted}$ is. For the sake of comparison, it would be interesting if you reported how often $x_{permuted}$ is greater than or equal to $0.5$ in your data set; we could compare it with what a one-tailed z-test would give.
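A minimal R sketch of that counting rule (my own illustration: the permutation distribution here is simulated as a stand-in for your $x_{permuted}$ values, and $0.5$ plays the role of the observed statistic):

set.seed(1)
x_perm <- rnorm(9999, mean = 0.4, sd = 0.08)     # stand-in for the permuted statistics
obs <- 0.5                                       # observed value of the statistic
mean(x_perm >= obs)                              # one-sided permutation p-value
(sum(x_perm >= obs) + 1) / (length(x_perm) + 1)  # add-one variant that avoids p = 0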
49,303
Comparison of Bernstein and Chebyshev inequalities applied to Bernoulli distribution - simulation in R gives unexpected results
I ran the code and didn't see any cases in which the theoretical bounds were breached. But even so, it could happen, because the bound is on the probability that the empirical mean deviates from the true mean. Run the experiment $N$ times and, through bad luck, many of the empirical means could deviate from $\mu$; in fact, the proportion of such trials could be more than $$2e^{ - \frac{m\varepsilon^2}{2(\sigma^2 + \frac{1}{3}M\varepsilon)}}, $$ "violating" Bernstein's inequality. For a simple example, $P(\text{flipping 5 heads in a row}) \leq \frac{1}{10}$ (it is in fact $\frac{1}{32}$), yet performing 1000 trials of 5 flips could result in any proportion of 5-head trials.
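A small R sketch of the comparison being described (my own illustration; the sample size, success probability, and $\varepsilon$ are arbitrary). It contrasts the observed proportion of Monte-Carlo runs whose empirical mean deviates from $\mu$ by more than $\varepsilon$ with the Bernstein bound; on any finite set of runs the observed proportion fluctuates around its expectation.

set.seed(1)
m <- 100; p <- 0.3; eps <- 0.1
mu <- p; sigma2 <- p * (1 - p); M <- 1              # |X - mu| <= 1 for Bernoulli draws
N <- 10000
viol <- replicate(N, abs(mean(rbinom(m, 1, p)) - mu) > eps)
mean(viol)                                          # observed proportion of "violations"
2 * exp(-m * eps^2 / (2 * (sigma2 + M * eps / 3)))  # Bernstein bound on that probability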
49,304
How to identify variables with significant loadings in PCA?
This is not (yet) an answer, only a comment, but too long for the comment box.

I do not really know how to determine such significance, but out of curiosity I did a bootstrap procedure: from a replication of the original data to a pseudo-population of $N=19200$ I drew $t=1000$ random samples of $n=150$ (each row of the dataset could occur at most $128$ times). From each of these $t=1000$ experiments I computed the PCA solution and stored the first PC only in a list. From these 1000 instances of first PCs I got the following statistics for their loadings:

PrC[1]:   Mean     Min     Max  Stddev  SE_mean  lb(95%)    mean  ub(95%)
------------------------------------------------------------------------------
S.L      0.362   0.314   0.412   0.015    0.000    0.361   0.362    0.362
S.W     -0.085  -0.131  -0.023   0.017    0.001   -0.086  -0.085   -0.083
P.L      0.856   0.841   0.869   0.004    0.000    0.856   0.856    0.857
P.W      0.358   0.334   0.382   0.008    0.000    0.358   0.358    0.359

The 95% confidence interval for the item Sepal.Width was -0.085 .. -0.083, which shows that this value seems to differ from zero by more than the pure random effect of the sampling. (All the other 95% confidence intervals for the loadings are similarly narrow.)

After that, it's clear I need more clarification on what it means for a loading to "contribute significantly" - significance derived from what expectation? (But that's what I do not yet understand; I'm completely illiterate so far regarding significance estimation for covariances and for loadings in a factor model, so this all might be of no help at all here.)

[Update 2] Here is a picture which shows the location of the Iris items in the coordinates of the first 2 principal components, evaluated by the Monte-Carlo experiment ("population": $N=128 \cdot 150=19200$, "sample": $n=150$, number of samples: $s=1000$).

Picture 1: (using the covariance matrix, loadings from eigenvectors as done in the OP's question)

From the picture I'd say that the small loading of Sepal.Width of -0.141 on PC1 is a reliable (different from zero, however small) estimate of the loading in the "population", because the whole cloud is separated from the y-axis.

Using the standard interpretation of PCA (based on correlations, using scaled eigenvectors) the picture looks a bit different, but still with very little disturbance of the loadings of the items. The statistics are as follows:

PrC[1]    Mean     Min     Max  Stddev  SE_mean  lb(95%)    mean  ub(95%)
------------------------------------------------------------------------------
S.L      0.891   0.840   0.937   0.015    0.000    0.890   0.891    0.892
S.W     -0.459  -0.705  -0.159   0.081    0.003   -0.465  -0.459   -0.454
P.L      0.991   0.987   0.994   0.001    0.000    0.991   0.991    0.991
P.W      0.965   0.946   0.980   0.005    0.000    0.965   0.965    0.965

Picture 2: (using the correlation matrix, principal components taken in the standard way)

[Update 1] Just for my own curiosity I made a set of plots of the empirical loadings matrices when samples are drawn from a known population. That's somehow bootstrapping, and I've not yet seen similar images. I took as population a set of 1000 normally distributed cases with a certain factorial structure. Then I drew 256 random samples from the population with n=40 and did the same components analysis/rotation for each of those 256 samples. To compare, and to see how the accuracy of the estimation improves, I took the same number of samples, but now each sample with n=160. See the comparison at http://go.helms-net.de/stat/sse/StabilityofPC
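A compact R sketch of the same bootstrap idea (my own reconstruction: it resamples the 150 iris rows directly with prcomp() rather than building the pseudo-population used above, and it fixes the arbitrary sign of each replicate's first eigenvector before summarising):

set.seed(1)
X <- as.matrix(iris[, 1:4])
boot_load <- replicate(1000, {
  v <- prcomp(X[sample(nrow(X), replace = TRUE), ])$rotation[, 1]  # first-PC loadings (covariance PCA)
  if (v["Petal.Length"] < 0) v <- -v                               # fix the arbitrary sign
  v
})
t(apply(boot_load, 1, quantile, probs = c(0.025, 0.975)))          # 95% percentile intervals per loading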
49,305
Bonferroni Adjustment and Assumptions?
When you analyse the proof that Bonferroni controls the type I error "family-wise", you see that no assumptions are needed; it basically uses only Boole's inequality. So Bonferroni does not need e.g. an independence assumption. However, the analysis of the proof shows that the probability of a type I error is at most $\alpha$, i.e. the Bonferroni method can have a type-I error probability that is strictly smaller than $\alpha$ (and this will result in a loss of power). The cases where the type I error probability is strictly smaller than $\alpha$ (one says that in these cases Bonferroni is conservative) occur when the tests are dependent or when the p-values of the individual tests are themselves conservative. The latter can be the case for discrete random variables (in a univariate test for a binomial variable, e.g., the "observed" type I error probability may be strictly smaller than $\alpha$). Note that Holm's method also controls the type I error probability, and its power is at least as good as that of the Bonferroni method. For discrete random variables, like the binomial, other multiple-testing correction methods have been shown to be more powerful (e.g. minP). So to summarise: the Bonferroni method does not need any additional assumptions to show that the type-I error probability is controlled; however, it can be conservative.
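A tiny R illustration of the Bonferroni/Holm comparison (my own example with made-up p-values): the Holm-adjusted p-values are never larger than the Bonferroni-adjusted ones, which is one way to see that Holm is at least as powerful while still controlling the familywise error rate.

p <- c(0.001, 0.012, 0.034, 0.210)
p.adjust(p, method = "bonferroni")
p.adjust(p, method = "holm")       # elementwise <= the Bonferroni-adjusted values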
49,306
Bonferroni Adjustment and Assumptions?
The Bonferroni correction (and similar corrections like Bonferroni-Holm etc.) assumes that the p-value you feed it follows a uniform distribution under the null hypothesis, i.e. that under the null hypothesis it would fall below 0.05 only 5% of the time if you repeated your experiment again and again. Pre-tests, such as tests for normality of the residuals, violate this assumption and thus inflate the familywise type I error rate even if you use Bonferroni. I am not aware of any way of avoiding that (e.g. by reducing the significance level of the pre-test), and any procedure with such a pre-test tends to inflate the type I error (if you are lucky the inflation is small, though). It seems strange to control the familywise error so strictly with e.g. Bonferroni (or, probably better, at least Bonferroni-Holm) and then to obtain your p-values in such a way that a supposed level-$\alpha$ test is not actually a level-$\alpha$ test, so that even with a Bonferroni correction you cannot have type I error control. If you truly want strict familywise type I error rate control, then either being pretty sure that the residuals are sufficiently close to normal to use AN(C)OVA, or using non-parametric (permutation) tests from the start, seems more logical.
49,307
Bonferroni Adjustment and Assumptions?
Adjusting for multiple comparisons generally applies to your actual hypotheses, not to tests of assumptions.
49,308
Interaction effect in a multiple regression vs split sample
What you have to realize is that a split sample is different from an interaction effect. An interaction with $x$ concerns only a change in the slope of that particular independent variable $x$, leaving all other slopes constant. Splitting the sample is equivalent to having an interaction dummy for every independent variable; in other words, you allow the intercept and the slope of every independent variable to change. So essentially $$y=\alpha + \gamma D + \beta_1 x_1 + \delta_1 D x_1 + \cdots + \beta_n x_n + \delta_n D x_n + \varepsilon,$$ which for $D=1$ reduces to $$y=(\alpha+\gamma) + (\beta_1+\delta_1) x_1 + \cdots + (\beta_n+\delta_n) x_n + \varepsilon.$$ If you use just one interaction dummy on one regressor, you impose different assumptions, which results in a quite different model.
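A short R sketch of that equivalence (my own illustration with simulated data): interacting the dummy with every regressor reproduces the split-sample fits (the $D=1$ coefficients are the base terms plus the $D$ terms), while a single interaction term only lets one slope change.

set.seed(1)
n  <- 200
D  <- rbinom(n, 1, 0.5)
x1 <- rnorm(n); x2 <- rnorm(n)
y  <- 1 + 2 * x1 - x2 + D * (0.5 + 1.5 * x1 + 0.8 * x2) + rnorm(n)
coef(lm(y ~ D * (x1 + x2)))                # intercept and every slope may shift with D
coef(lm(y ~ x1 + x2, subset = D == 0))     # matches the base coefficients above
coef(lm(y ~ x1 + x2, subset = D == 1))     # matches base + D-interaction coefficients
coef(lm(y ~ x1 * D + x2))                  # only the slope of x1 is allowed to change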
49,309
Confidence interval for exponential distribution
The asymptotic confidence interval may be based on the (asymptotic) distribution of the MLE. The Fisher information for this problem (per observation) is given by $\frac{1}{\theta^2}$. Hence an asymptotic CI for $\theta$ is given by $$\bar{X} \pm 1.96 \sqrt{\frac{\bar{X}^2}{n}},$$ where we have replaced $\theta^2$ by its MLE, since we do not know the population parameter. And here is a very simple R simulation of the coverage for a sample of size fifty from an exponential distribution with rate parameter $2$ (so the true mean is $1/2$):

r <- rep(0, 1000)
for (i in 1:1000) {
  x <- rexp(50, 2)
  mle <- mean(x)
  if (1/2 <= mle + qnorm(0.975) * sqrt(mle^2 / 50) &
      1/2 >= mle + qnorm(0.025) * sqrt(mle^2 / 50)) {
    r[i] <- 1
  }
}
sum(r == 1)
[1] 948
49,310
Confidence interval for exponential distribution
For (ii) you have several options, but two of them are particularly appealing. One option is to go for a Wald-type confidence interval; the other is to go with an interval based on the log-likelihood-ratio statistic. For the Wald type you first get the limiting distribution of the MLE of $\theta$, thus something along the lines of $\hat \theta \sim N(\theta, \text{se}(\hat\theta)^2)$, where $\text{se}(\hat\theta)^2$ is the estimated variance of the distribution of $\hat\theta$, the MLE of $\theta$. The Wald-type interval of approximate confidence level $1-\alpha$ is (the usual) $$ \hat\theta \pm z_{\alpha/2}\,\text{se}(\hat\theta), $$ where $z_{\alpha/2}$ is such that $P(Z > z_{\alpha/2}) = \alpha/2$ with $Z\sim N(0,1)$. This is what JohnK is suggesting in his answer. For the confidence interval based on the log-likelihood-ratio statistic, you find the full explanation here with the associated R code. You'll have to adapt it to your situation, i.e. just replace the log-likelihood function used there with yours. Note that with this method you'll have to use a computer to compute the interval, since the method entails the inversion of a non-linear function. These two confidence intervals are approximate in that the coverage probability for a given fixed sample size will not typically be exactly $1-\alpha$; nor will it always be $\geq 1-\alpha$. But the guarantee is that, as the sample size increases, the coverage probability converges to $1-\alpha$.
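A self-contained R sketch of both intervals for the exponential mean $\theta$ (my own illustration, not the code from the linked answer): the Wald interval uses $\text{se}(\hat\theta)=\hat\theta/\sqrt{n}$, and the likelihood-ratio interval inverts $2\{\ell(\hat\theta)-\ell(\theta)\}\le\chi^2_{1,0.95}$ numerically with uniroot().

set.seed(1)
x <- rexp(50, rate = 2)                        # true mean theta = 0.5
n <- length(x); theta_hat <- mean(x)
wald <- theta_hat + c(-1, 1) * qnorm(0.975) * theta_hat / sqrt(n)
loglik <- function(theta) -n * log(theta) - sum(x) / theta
lr <- function(theta) 2 * (loglik(theta_hat) - loglik(theta)) - qchisq(0.95, 1)
lr_ci <- c(uniroot(lr, c(theta_hat / 10, theta_hat))$root,
           uniroot(lr, c(theta_hat, theta_hat * 10))$root)
rbind(wald = wald, lr = lr_ci)                 # the two approximate 95% intervals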
49,311
Confidence interval for exponential distribution
I will explain in a bit more depth the process @JohnK went through. Writing $\lambda$ for the rate parameter, $\hat{\lambda}_{MLE}=\frac{1}{\bar{X}}$, and the second derivative of the log-likelihood is $$\ell''(\lambda) = -\frac{n}{\lambda^2},$$ so the Fisher information per observation is $$-\tfrac{1}{n}E[\ell''(\lambda)] = \frac{1}{\lambda^2},$$ which, evaluated at the MLE, gives $I(\hat{\lambda}_{MLE}) = \bar{X}^2$. The asymptotic variance of $\hat\lambda$ is therefore $$\frac{1}{n\,I(\hat{\lambda}_{MLE})} = \frac{1}{n\bar{X}^2}.$$ JohnK's interval, however, is for the mean $\theta = 1/\lambda$, estimated by $\hat\theta_{MLE} = \bar X$; by the same argument (or by the delta method applied to $g(\lambda)=1/\lambda$) its asymptotic variance is $$\frac{\hat\theta_{MLE}^2}{n} = \frac{\bar{X}^2}{n},$$ which is the quantity under the square root in his interval.
49,312
Fit exponential distribution with noise
In the absence of a response to my questions relating to the variation about the signal, I'll explain a little about nonlinear least squares.

You can fit a model of the following form: $y_i = c + \alpha \exp(-\alpha x_i)+\varepsilon_i$, where $E(\varepsilon_i)=0$. If the $\varepsilon$ values are independent and of constant variance (or close to it), this should be quite a good approach (and would be my idea of a good starting point). If they're also normal, it will also be maximum likelihood, and makes for simpler confidence intervals and tests (should you want those).

There's no closed-form formula for the parameter estimates. They must be obtained iteratively, generally by taking a linear approximation at a current estimate to get the next estimate. Software to do this is in most stats packages.

Here's an example. I made a tiny set of (x, y) data (here printed to 4 significant figures):

    x     y
1.186 2.695
2.805 2.677
3.095 2.657
1.399 2.661
2.150 2.713
7.989 2.547
1.847 2.673
3.867 2.588
7.133 2.580
6.136 2.581
1.230 2.711
7.272 2.581

I fitted your model in R (free statistical software) as follows:

expfnfit = nls(y ~ c + a*exp(-a*x), start = list(c = 2, a = .5))  # fits the model
summary(expfnfit)                                                 # shows information about the fit

Formula: y ~ c + a * exp(-a * x)

Parameters:
  Estimate Std. Error t value Pr(>|t|)
c 2.529316   0.008608 293.848  < 2e-16 ***
a 0.229818   0.027285   8.423 7.48e-06 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 0.02448 on 10 degrees of freedom

Number of iterations to convergence: 6
Achieved convergence tolerance: 1.39e-06
49,313
Fit exponential distribution with noise
First of all, are you sure that the noise is additive? If your noise were multiplicative, for instance, then linearization (taking logs) would have worked. With additive noise you can't do much of that kind; use nonlinear regression.
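A brief R sketch of the distinction (my own illustration with made-up parameters): under multiplicative noise the log of the response is linear in $x$, so lm() on log(y) suffices, whereas under additive noise the log transform distorts the error structure and nls() is the safer route.

set.seed(1)
x <- runif(100, 0, 5)
y_mult <- 2 * exp(-0.7 * x) * exp(rnorm(100, sd = 0.1))   # multiplicative noise
coef(lm(log(y_mult) ~ x))                                 # intercept ~ log(2), slope ~ -0.7
y_add <- 2 * exp(-0.7 * x) + rnorm(100, sd = 0.05)        # additive noise
nls(y_add ~ a * exp(-b * x), start = list(a = 1, b = 0.5))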
49,314
Does summary.aov() of MANOVA object adjust for multiple comparisons?
In any situation where you already have a set of unadjusted P values, you can use p.adjust to obtain adjusted ones for any of the methods in p.adjust.methods:

> p.adjust(c(.002432, .02292, .7524), "fdr")
[1] 0.007296 0.034380 0.752400

I had a much more elaborate answer using the lsmeans package, but it does not seem necessary. Let me know if you want to see it.
49,315
Differences in correlation for individual and aggregated data
The issue is with the binning. When you order the variable $A$ by size, divide it into 100 equal-sized bins and then sum the data in the bins, you introduce an ordering: the bins at the beginning will have lower sums and the bins at the end higher sums. This is perfectly normal, because that is the way the bins were constructed. Here is a simple simulation for illustration.

Generate 1 million random values from an exponential distribution:

library(dplyr)
a <- rexp(1e6)

Divide into 100 equal-sized bins using quantiles:

q <- quantile(a, seq(0, 1, length.out = 101))
q[1] <- 0
q[101] <- Inf
bin <- cut(a, q)

Sum the values in the bins and plot them:

dd <- data_frame(a = a, bin = bin)
ee <- dd %>% group_by(bin) %>% summarise(a = sum(a))
plot(q[-101], ee$a)

Compare the two graphs. The first is totally random, and in the second we have an almost perfect relationship, because of the way we constructed the bins. Now if we have another variable which is correlated with the original one, this introduced order does not disappear:

b <- rexp(1e6) + a/3

Here we observe a linear relationship with a lot of noise, which is no surprise, because this is the way we constructed the second variable. If we perform the binning, we find that the relationship looks much stronger:

dd <- data_frame(a = a, b = b, bin = bin)
ee <- dd %>% group_by(bin) %>% summarise_each(funs(sum), a:b)
plot(ee$a, ee$b)

So the binning you performed accentuated the existing relationship, but this does not mean that the relationship is actually that strong. Given that your data are article views and thumbs up, it is natural to expect that articles with a high number of views tend to have more thumbs up. But this relationship is very noisy, as evidenced by your initial scatter plot of the data. You should probably fit a regression to figure out the relationship and how strong it is.
49,316
BIC in Item Response Theory Models: Using log(N) vs log(N*I) as a weight
I think it's neither. The "textbook" information criterion formulations that you cite are derived for i.i.d. data, while you have a two-way array with weird cross-dependencies: you have the same questions, and you have the same students. The issue is always there with mixed models. I am not going to try to reproduce their expressions, but the Delattre et al. (2014) paper (DOI:10.1214/14-EJS890) derives the relevant contributions from the observation-level (student-by-item) and cluster-level (student) data. They, however, seem to be oblivious to the prior work by the SAMSI group on Bayesian latent variable modeling, although frankly that group did not do such a great job of documenting its results; the most important one is fully documented only in somebody's presentation. It was quoted by Jim Berger in the 2007 Wald lecture at the Joint Statistical Meetings. Finally, a rather lean book on random-effect and latent-variable model selection edited by David Dunson appears to have closely related results, but doesn't derive them in a form applicable to the construction of a mixed-model BIC.
49,317
Fitting an ARIMA model with conflicting indicators
The null of a unit root is not rejected by the ADF test, and the null of stationarity is not rejected by the KPSS test either. This is an inconvenient situation, since neither hypothesis can be rejected. In principle, as mentioned here, in this situation it may be more cautious to assume the presence of a unit root and take first differences to detrend the series. However, the remaining information that you give suggests that the data can be considered stationary. (The OP mentions that the estimate of the AR coefficient is lower than 0.9 and that the ACF of the residuals looks like the ACF of white noise.) This is probably a limiting case (close to a unit root). In that case, taking first differences of the data will not do much harm or make much difference. If anything, the forecasts of the ARIMA(0,1,0) will be flat (equal to the last observation), while the AR(1) will exhibit a smooth trend pattern. It would also be interesting to check whether the results of the tests are affected by the presence of some outlying observations or patterns (e.g., an extreme value at some point or a shift in the level). In R, you can use the package tsoutliers to check for this. If outliers are found, you could run the ADF test again on the series adjusted for the effect of those outliers.
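A rough sketch of that workflow (my own illustration; it assumes the tso() interface of the tsoutliers package and its outlier-adjusted series component, plus adf.test()/kpss.test() from the tseries package, so check the exact names against your package versions):

library(tsoutliers)
library(tseries)
set.seed(1)
y <- ts(10 + arima.sim(list(ar = 0.85), n = 120))   # toy AR(1) series standing in for yours
y[60] <- y[60] + 8                                  # inject an additive outlier
fit <- tso(y)                                       # detects AO/LS/TC-type outliers
fit$outliers                                        # which outliers were found, if any
y_adj <- fit$yadj                                   # series adjusted for the outlier effects
adf.test(y_adj)                                     # re-run the unit-root test
kpss.test(y_adj)                                    # and the stationarity test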
49,318
Recovering original regression coefficients from standardized
The models are not the same. Therefore the coefficients should differ. When you recenter, you are taking linear combinations of the columns of $X$ with the vector $\mathbf{1}=(1,1,\ldots, 1)^\prime$. This is fine, provided that $\mathbf{1}$ lies in the column space. In your example it does not. What is worse, when you do include $\mathbf{1}$ as a column, the entire calculation falls apart due to singularities. In detail, the original model (including a constant term) should be $$\mathbb{E}(Y) = \beta_0 + \beta_1 X_1 + \cdots + \beta_p X_p = \beta_0 + \sum_{i=1}^p \beta_i X_i.$$ Standardizing to $X_i = \sigma_i Z_i + \mu_i$ yields $$\mathbb{E}(Y) = \beta_0 + \sum_{i=1}^p \beta_i (\sigma_i Z_i + \mu_i) = \left(\beta_0 + \sum_{i=1}^p \beta_i \mu_i\right) + \sum_{i=1}^p (\beta_i \sigma_i) Z_i = \beta_0^{*} + \sum_{i=1}^p \beta_i^{*} Z_i,$$ with $\beta_i^{*} = \sigma_i \beta_i$ giving the correct relationships between the "standardized" and unstandardized coefficients. (The usual definition of standardized coefficient also standardizes the response variable, so these $\beta_i^{*}$ might better be called "semi-standardized.")
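A small R check of these relationships (my own illustration with simulated data): fit the regression on standardized predictors, then recover the original-scale coefficients as $\beta_i=\beta_i^{*}/\sigma_i$ and $\beta_0=\beta_0^{*}-\sum_i\beta_i\mu_i$.

set.seed(1)
n <- 100
X <- cbind(x1 = rnorm(n, 5, 2), x2 = rnorm(n, -3, 4))
y <- 1 + 2 * X[, 1] - 0.5 * X[, 2] + rnorm(n)
Z <- scale(X)                                   # standardized predictors
b_std <- coef(lm(y ~ Z))
mu <- attr(Z, "scaled:center"); s <- attr(Z, "scaled:scale")
b  <- b_std[-1] / s                             # slopes on the original scale
b0 <- b_std[1] - sum(b * mu)                    # intercept on the original scale
rbind(recovered = c(b0, b), direct = coef(lm(y ~ X)))   # the two rows should agree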
49,319
Deep learning: representation learning or classification?
In my opinion: it's both. This point comes up many times in the highly cited article on convolutional neural networks, Gradient-Based Learning Applied to Document Recognition by Yann LeCun, Yoshua Bengio, Leon Bottou and Patrick Haffner. The idea is that it is quite hard to hand-design a rich and complex feature hierarchy. For low-level features, we see that conv-nets learn edges or color blobs. This makes intuitive sense, and from early computer vision methods we have some good-quality hand-crafted edge detectors. But how to compose these features into richer and more complex features is not a simple task to do by hand. And now imagine trying to design a 10-level feature hierarchy. Instead, what you can do is tie the representation-learning and classification tasks together, as is done in deep networks. Now we allow the data to drive the feature-learning mechanism. Deep architectures are designed to learn a hierarchy of features from the data, as opposed to ad hoc hand-crafted features designed by humans. Most importantly, the features are learned with the explicit objective of obtaining a hierarchical feature representation that achieves low error on a given loss function measuring the performance of our deep net. A priori, given some hand-crafted features, one does not know how good those features are for the task at hand. In this manner, the desired high performance on the task drives the quality of the learned features, and the two become inextricably linked. This end-to-end training/classification pipeline has been a big idea in the design of computer vision architectures.
49,320
Deep learning: representation learning or classification?
I would say it's basically representation learning followed by classification at the end. Consider an image classification problem.

-> We have to find some way (some characteristics/attributes) to tell if the image is a dog or a cat, which in our terms is referred to as features.

-> Extracting the useful features is the most important part of machine learning, and we call this extraction of features representation learning (i.e., extracting the useful features from the raw data).

-> For the given example (dog vs cat), it's difficult for the model to get hold of the most important features at first glance (i.e., we can't use the raw pixel information and extract the target features directly).

-> Which leads us to the deep learning concept, where we extract the features in stages/layers (each layer using the previous layers' information). In our problem, it would look something like:

Layer 1 - learns about edges (which gives an outline of the object in the image)
Layer 2 - learns about corners & contours (which gives information about the shapes in the image)
Layer 3 - learns about object parts (legs, nose, etc.)
Final layer - classifies the data/features that it learned from the previous layers
49,321
How to assess if a model is good in multinomial logistic regression?
Let's take apart your modeling approach to see if we can figure out why a certain model is going to "fit" better. Multinomial vs ordinal: I would bet the multinomial model is almost always going to fit better than an ordinal one because it gives you coefficients for every level. It is the most flexible here and has the least restrictive assumptions; namely, the ordinal logit assumes parallel lines / proportional odds between each level. In an absolute sense, I think an ordinal logit would only fit better if you truly have proportional odds, or are really close to it, so that the smaller number of estimated parameters saves you under an information criterion like AIC or BIC. More vs fewer predictors: more predictors means you have more explanatory power in your model, so naturally it will fit better. Only if the increase in fit is not sufficient to make up for the penalty of added parameters will the simpler model fit better based on AIC/BIC. I would recommend using more comprehensive fit statistics - while the AIC is really good, it is not the only one out there (have you used the BIC as well?). Use Wald tests or LR tests to compare the interaction effects, look into cross-validation for overall predictive ability, and test the parallel lines assumption. Also think about it theoretically - do you think your measures are ordinal? Does an interaction term make sense? Depending on the field, many journals (particularly in the social sciences) look down on purely data-driven modeling approaches if you don't have strong theory to support your decisions.
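As a rough sketch of that comparison in R (using a hypothetical data frame dat with an ordered outcome and predictors x1 and x2, not the asker's actual variables), you could fit both models and compare penalized fit and the interaction directly:

library(nnet)   # multinom() for the multinomial logit
library(MASS)   # polr() for the proportional-odds (ordinal) logit

fit_multi <- multinom(outcome ~ x1 * x2, data = dat, trace = FALSE)
fit_ord   <- polr(outcome ~ x1 * x2, data = dat, Hess = TRUE)

# lower is better; BIC penalizes the extra multinomial parameters harder
AIC(fit_multi, fit_ord)
n   <- nrow(dat)
bic <- function(fit) -2 * as.numeric(logLik(fit)) + log(n) * attr(logLik(fit), "df")
c(multinomial = bic(fit_multi), ordinal = bic(fit_ord))

# likelihood-ratio test of the interaction within the ordinal model
fit_ord_main <- polr(outcome ~ x1 + x2, data = dat, Hess = TRUE)
lr <- 2 * (as.numeric(logLik(fit_ord)) - as.numeric(logLik(fit_ord_main)))
df <- attr(logLik(fit_ord), "df") - attr(logLik(fit_ord_main), "df")
pchisq(lr, df = df, lower.tail = FALSE)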
How to assess if a model is good in multinomial logistic regression?
Let's take apart your modeling approach to see if we can figure out why a certain model is going to "fit" better. Multinomial vs ordinal: Multinomial I would bet is almost always going to fit better
How to assess if a model is good in multinomial logistic regression? Let's take apart your modeling approach to see if we can figure out why a certain model is going to "fit" better. Multinomial vs ordinal: Multinomial I would bet is almost always going to fit better than an ordinal because it gives you coefficients for every level. It is the most flexible here and has the least restrictive assumptions, namely, ordinal logit assumes parallel lines /proportional odds between each level. In an absolute sense, I think an ordinal logit would only fit better if you certainly have proportional odds or really close to it so that the fewer estimated parameters saves you in an information criteria like AIC or BIC. More vs less predictors: More predictors means you have more explanatory power in your model, so naturally it will fit better. Only if the increase in fit is not sufficient to make up for the penalty of added parameters will the simpler model fit better based on AIC/BIC. I would recommend using more comprehensive fit statistics - while the AIC is really good, it is not the only one out there (have you used the BIC as well?). Use Wald tests or LR tests to compare the interaction effects, looking into cross-validation for overall predictive ability, test the parallel lines assumptions. Also think about it theoretically - do you think your measures are ordinal? Does an interaction term make sense? Depending on the field, many journals (particularly in the social sciences) look down on purely data driven modeling approaches if you don't have strong theory to support your decisions.
How to assess if a model is good in multinomial logistic regression? Let's take apart your modeling approach to see if we can figure out why a certain model is going to "fit" better. Multinomial vs ordinal: Multinomial I would bet is almost always going to fit better
49,322
Strange results in parallel analysis -- weird output by rstudio but not R-Fiddle
Pay attention to the plot's y-axis label. It says: "Eigen values of original and simulated factors and components" (emphasis mine). Parallel analysis (PA) produces separate sets of eigen values for both factors (factor analysis, FA) and components (principal component analysis, PCA). While FA and PCA seem to be similar, those methods are conceptually different. Here's how Professor William Revelle (well-known expert in psychometric theory, maintainer of the Personality Project and the author of psych R package) explains the difference (Revelle, 2015, p. 158): Although on the surface, the component model and factor model appear to very similar (...), they are logically very different. In the components model, components are linear sums of the variables. In the factor model, on the other hand, factors are latent variables whose weighted sum accounts for the common part of the observed variables. In path analytic terms, for the component model, arrows go from the variables to the components, while in the factor model they go from the factors to the variables. My statement above about the contents of the plot is intentionally not very accurate for pedagogical reasons (but, I hope still conceptually valid), as, in fact, fa.parallel() and fa.parallel.poly() functions plot "the eigenvalues for a principal components solution as well as the eigen values when the communalities are estimated by a one factor minres solution for a given data set as well as that of n (default value = 20) randomly generated parallel data sets of the same number of variables and subjects" (Revelle, 2015, p. 175-176) [bold emphasis mine]. UPDATE (based on the OP's clarification of the question): I think that the double line output on number of factors/components is due to a second call to fa.parallel() or fa.parallel.poly() somewhere, because the standard output from fa.parallel.poly() contains only a single such line. See the MRE below. Code: library(psych) data(bock) fa.info <- fa.parallel.poly(lsat6) summary(fa.info) Output: See the graphic output for a description of the results Parallel analysis suggests that the number of factors = 3 and the number of components = 1 References Revelle, W. (2015). An introduction to psychometric theory with applications in R. [Website] Retrieved from http://www.personality-project.org/r/book NOTE: Both my citations can be found in Chapter 6 of the referenced book, directly downloadable from http://www.personality-project.org/r/book/Chapter6.pdf.
Strange results in parallel analysis -- weird output by rstudio but not R-Fiddle
Pay attention to the plot's y-axis label. It says: "Eigen values of original and simulated factors and components" (emphasis mine). Parallel analysis (PA) produces separate sets of eigen values for bo
Strange results in parallel analysis -- weird output by rstudio but not R-Fiddle Pay attention to the plot's y-axis label. It says: "Eigen values of original and simulated factors and components" (emphasis mine). Parallel analysis (PA) produces separate sets of eigen values for both factors (factor analysis, FA) and components (principal component analysis, PCA). While FA and PCA seem to be similar, those methods are conceptually different. Here's how Professor William Revelle (well-known expert in psychometric theory, maintainer of the Personality Project and the author of psych R package) explains the difference (Revelle, 2015, p. 158): Although on the surface, the component model and factor model appear to very similar (...), they are logically very different. In the components model, components are linear sums of the variables. In the factor model, on the other hand, factors are latent variables whose weighted sum accounts for the common part of the observed variables. In path analytic terms, for the component model, arrows go from the variables to the components, while in the factor model they go from the factors to the variables. My statement above about the contents of the plot is intentionally not very accurate for pedagogical reasons (but, I hope still conceptually valid), as, in fact, fa.parallel() and fa.parallel.poly() functions plot "the eigenvalues for a principal components solution as well as the eigen values when the communalities are estimated by a one factor minres solution for a given data set as well as that of n (default value = 20) randomly generated parallel data sets of the same number of variables and subjects" (Revelle, 2015, p. 175-176) [bold emphasis mine]. UPDATE (based on the OP's clarification of the question): I think that the double line output on number of factors/components is due to a second call to fa.parallel() or fa.parallel.poly() somewhere, because the standard output from fa.parallel.poly() contains only a single such line. See the MRE below. Code: library(psych) data(bock) fa.info <- fa.parallel.poly(lsat6) summary(fa.info) Output: See the graphic output for a description of the results Parallel analysis suggests that the number of factors = 3 and the number of components = 1 References Revelle, W. (2015). An introduction to psychometric theory with applications in R. [Website] Retrieved from http://www.personality-project.org/r/book NOTE: Both my citations can be found in Chapter 6 of the referenced book, directly downloadable from http://www.personality-project.org/r/book/Chapter6.pdf.
Strange results in parallel analysis -- weird output by rstudio but not R-Fiddle Pay attention to the plot's y-axis label. It says: "Eigen values of original and simulated factors and components" (emphasis mine). Parallel analysis (PA) produces separate sets of eigen values for bo
49,323
Interpreting standard deviation for PCA
From your input, you should use the "Cumulative Proportion" field as a guide for how many principal components to keep. You define the percentage of variance you want to retain and then select the first column (which is also the number of that principal component) whose cumulative proportion accounts for at least that much variance. To keep 85% or more of the variance in your example, you would need to keep 7 principal components. Concerning the added plot, it is a bit trickier to read. To proceed as described in the previous paragraph, given some percentage to keep, you would first cumulatively sum the per-component variances and then read off the number of components needed. Actually you have this information already: it is the very same "Cumulative Proportion" field. Just plot it and you will see. Finally, about the (non)inclusion of the class variable in the dataset to be analyzed with PCA: your intent is to analyze the dataset given some measurements, not the class label. The class label is additional information (typically used afterwards). You don't want it to be analyzed together with the dataset; it will be hard to interpret the maximum-variance directions if the dataset also includes the class variable.
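For illustration, here is how that reads off in R with prcomp; the object and data names are placeholders rather than the asker's. summary(pca) prints exactly the "Proportion of Variance" and "Cumulative Proportion" rows being discussed.

# PCA on the feature columns only -- the class label is left out, as argued above
pca <- prcomp(features, center = TRUE, scale. = TRUE)
summary(pca)

var_prop <- pca$sdev^2 / sum(pca$sdev^2)   # proportion of variance per component
cum_prop <- cumsum(var_prop)               # the "Cumulative Proportion" row

# smallest number of components whose cumulative proportion reaches 85%
k <- which(cum_prop >= 0.85)[1]
k

plot(cum_prop, type = "b", xlab = "Component", ylab = "Cumulative proportion")
abline(h = 0.85, lty = 2)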
Interpreting standard deviation for PCA
From your input, you should use the "Cumulative Proportion" field as a guide how many principal components to keep. You define the percentage of variance and then you select the column (which is also
Interpreting standard deviation for PCA From your input, you should use the "Cumulative Proportion" field as a guide how many principal components to keep. You define the percentage of variance and then you select the column (which is also the number of that principal component) which cumulatively accounts the variance you would like to keep. For 85% and more variance on your example, you would need to keep 7 principal components. Concerning the added plot, it might be more tricky to read it. In order to proceed as described in the previous paragraph, when you are given some percentage to keep, you would first integrate and then read off the value of needed components. Actually you have this information already, this is the very same "Cumulative Proportion" field. Just plot it and you will see. Finally about the (non)inclusion of class variable into the dataset to be analyzed with PCA. Your intent is to analyze the dataset given some measurements and not the class label. The class label is some additional information (typically posterior). You don't want it to be analyzed together with the dataset. It will be hard to interpret the maximum variance directions if the dataset included also the class variable.
Interpreting standard deviation for PCA From your input, you should use the "Cumulative Proportion" field as a guide how many principal components to keep. You define the percentage of variance and then you select the column (which is also
49,324
Interpreting standard deviation for PCA
I think we should set up a threshold by multiplying the standard deviation of the top component by 0.50; in this case, 1.7440 multiplied by 0.50 equals 0.872, which means we can keep all components (linear combinations of features) with a standard deviation greater than or equal to 0.872. For your problem, consider the top five such components.
Interpreting standard deviation for PCA
Interpreting standard deviation for PCA I think we should set up a threshold by multiplying the standard deviation of the top component by 0.50; in this case, 1.7440 multiplied by 0.50 equals 0.872, which means we can keep all components (linear combinations of features) with a standard deviation greater than or equal to 0.872. For your problem, consider the top five such components.
Interpreting standard deviation for PCA I think we should set up a threshold by multiplying the standard deviation of the top component by 0.50; in this case, 1.7440 multiplied by 0.50 equals 0.872, which means
49,325
Maximum number of alternatives in a discrete choice model
The main issue with asmprobit is the flexibility it provides: it relaxes the independence of irrelevant alternatives (IIA) assumption, but this comes at the cost of increased computing power. In this sense you allow the odds of choosing one alternative over another to depend on the remaining alternatives, but this involves evaluating probabilities from the multivariate normal distribution. Since there is no closed-form solution for those, you have to rely on simulation techniques. That's the bottleneck. The simulation method used by asmprobit to maximize the simulated likelihood is the Geweke-Hajivassiliou-Keane (GHK) multivariate normal simulator (GHK documentation), which allows only for dimension $m\leq 20$. That's where the restriction in asmprobit comes from, because for more alternatives the simulation time becomes unmanageable. For a detailed description of this you can also see the "Simulated Likelihood" part in the Methods and Formulas section of the asmprobit documentation. Given that the reason for the limit is a computational one rather than one that is concerned with implementation, I would not be too hopeful for a better estimation routine in R. If there was one, then probably Stata would have implemented it by now. By the way, this restriction is also a problem for other probit models of discrete choice (e.g. mprobit allows at most 30 distinct choices) for the same reason as outlined above. A useful reference for you should be Train, Kenneth E. Discrete Choice Methods with Simulation. New York: Cambridge University Press. which is probably the main reference on this topic. If I remember correctly he also discusses cases where the number of choices is very large. I'm not sure, however, whether you can have the best of both worlds, i.e. relaxing IIA and allowing for many alternatives. Certainly probit models will not get you far because of the multivariate normal, but perhaps other models that also relax the IIA assumption may be useful. For instance, the mixed logit model also relaxes this assumption, so it might be worth having a look at the Stata options for estimating these kinds of models (see for instance this presentation for an overview).
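If you end up in R after all, a mixed logit can be sketched with the mlogit package. The following is only a rough illustration using the package's bundled Fishing data and its documented rpar/R/halton arguments; treat the exact calls as an assumption to check against the current mlogit documentation rather than a definitive recipe.

library(mlogit)
data("Fishing", package = "mlogit")

# reshape to the long/index format that mlogit expects
Fish <- mlogit.data(Fishing, shape = "wide", varying = 2:9, choice = "mode")

# mixed logit: a normally distributed random coefficient on price relaxes IIA
fit <- mlogit(mode ~ price + catch, data = Fish,
              rpar = c(price = "n"), R = 100, halton = NA)
summary(fit)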
Maximum number of alternatives in a discrete choice model
The main issue with asmprobit is the flexibility it provides which relaxes the independence of irrelevant alternatives (IIA) assumption but it comes at the cost of increased computing power. In this s
Maximum number of alternatives in a discrete choice model The main issue with asmprobit is the flexibility it provides which relaxes the independence of irrelevant alternatives (IIA) assumption but it comes at the cost of increased computing power. In this sense you allow the odds of choosing one alternative over some other alternative to depend on the remaining alternative, though this involves evaluation of probabilities from the multivariate normal distribution. Since there is no closed form solution to those you have to rely on simulation techniques. That's the bottle neck. The simulation method used by asmprobit in order to solve the simulated maximum likelihood is the Geweke-Hajivassiliou–Kean multivariate normal simulator (GHK documentation) which allows only for dimension $m\leq 20$. That's where the restriction in asmprobit comes from because for more alternatives the simulation time becomes unmanageable. For a detailed description of this you can also see the "Simulated Likelihood" part in the Methods and Formulas section of the asmprobit documentation. Given that the reason for the limit is a computational one rather than one that is concerned with implementation I would not be too hopeful for a better estimation routine in R. If there was one then probably also Stata would have implemented it by now. By the way, this restriction is also a problem for other probit models of discrete choice (e.g. mprobit allows max 30 distinct choices) for the same reason as outlined above. A useful reference for you should be Train, Kenneth E. 2007. Discrete Choice Models with Simulation. New York: Cambridge University Press. which is probably the main reference on this topic. If I remember correctly he also discusses cases where the number of choices is very large. I'm not sure, however, if you can have the best of both worlds, i.e. relaxing IIA and allowing for many alternatives. Certainly probit models will not get you far because of the multivariate normal but perhaps other models that also relax the IIA assumption may be useful. For instance, the mixed logit model also relaxes this assumption so it might be worth to have a look at the Stata options for estimating these kinds of models (see for instance this presentation for an overview).
Maximum number of alternatives in a discrete choice model The main issue with asmprobit is the flexibility it provides which relaxes the independence of irrelevant alternatives (IIA) assumption but it comes at the cost of increased computing power. In this s
49,326
How to create a QQ plot of azimuths to test rotational symmetry of a spherical point dataset?
You just want to study the azimuths of a set of spherical points $P_i$ relative to their spherical mean $\bar P$. The most straightforward solution solves the spherical triangles $(N,\bar P, P_i)$ where $N$ is the North Pole. Let the co-latitudes of the points $P_i$ and $\bar P$ (angles from North) be $a$ and $b$ respectively. Let $\gamma$ be the angle between them: it's just the difference between the longitudes of the same two points. The azimuth, with due East being zero and orienting angles counterclockwise, is determined by $$\arctan_2(\sin(b)\cos(a) - \cos(b)\sin(a)\cos(\gamma),\ \sin(a)\sin(\gamma))$$ where $\arctan_2(y,x)$ is the angle of a point $(x,y)$ in the plane. (This is supposed to be a numerically stable version of the formula, but I haven't tested it extensively.) In this example, $100$ points were generated according to a (symmetric) Fisher-von Mises distribution distributed throughout the southern and western hemispheres, along with another $50$ points focused in the south and east. The resulting distribution is not symmetric. The mean point is shown as a red triangle. Relative to the mean point, there is a cluster of points to its right (East) and upward (North), creating a swath of azimuths in the QQ plot between $0$ and $1$ (expressed in radians). The diffuse cluster to its west creates a broader swath of azimuths between $3$ and $5$. The QQ plot is clearly not uniform (for otherwise it would lie close to the dashed diagonal line), reflecting the bimodality of the spherical point distribution. The R code that produced this example can be used to generate azimuthal QQ plots for any data. It assumes the spherical coordinates are provided as rows in an array; the relevant rows are indexed by "phi" and "theta". # # Spherical triangle, two sides and included angle given. # Returns the angle `alpha` opposite `a`, in radians between 0 and 2*pi. # SAS <- function(a, gamma, b) { atan2(sin(b)*cos(a) - cos(b)*sin(a)*cos(gamma), sin(a)*sin(gamma)) %% (2*pi) } # # Cartesian coordinate conversion (for generating points). # xyz.to.spherical <- function(xyz) { xyz <- matrix(xyz, nrow=3) x <- xyz[1,]; y <- xyz[2,]; z <- xyz[3,] r2 <- x^2 + y^2 rho <- sqrt(r2 + z^2) theta <- pi/2 - atan2(z, sqrt(r2)) phi <- atan2(y, x) theta[x==0 && y==0] <- sign(z) * pi/2 return (rbind(rho, theta, phi)) } # # Generate random points on the sphere. # library(MASS) set.seed(17) n.1 <- 100 n.2 <- 50 mu.1 <- c(0,-1,-1/4) * 2 # Center of first distribution mu.2 <- c(1,1,-1/2) * 5 # Center of the second distribution Sigma <- outer(1:3, 1:3, "==") # Identity covariance matrix xyz.1 <- t(mvrnorm(n.1, mu.1, Sigma)) # Each column is a point xyz.2 <- t(mvrnorm(n.2, mu.2, Sigma)) xyz <- cbind(xyz.1, xyz.2) # The Cartesian coordinates rtf <- xyz.to.spherical(xyz) # The spherical coordinates (also in columns) # # Compute the spherical mean and the azimuths relative to that mean. # mean.rtf <- xyz.to.spherical(rowMeans(xyz)) a <- SAS(rtf["theta",], rtf["phi",]-mean.rtf["phi",], mean.rtf["theta",]) # # Plot the data and a QQ plot of the azimuths. 
# par(mfrow=c(1,2)) plot(c(-pi, pi), c(-1,1), type="n", xlab="Phi", ylab="Cos(theta)", main="Sample points and their mean") abline(h=0, col="Gray") # The Equator abline(v=0, col="Gray") # The Prime Meridian points(rtf["phi",], cos(rtf["theta", ]), col="#00000080") points(mean.rtf["phi",], cos(mean.rtf["theta",]), bg="Red", pch=24, cex=1.25) plot(c(0,1), c(0,2*pi), type="n", xlab="Quantile", ylab="Azimuth", main="Azimuthal QQ Plot") abline(c(0, 2*pi), lty=3, lwd=2, col="Gray") points(seq(0, 1, along.with=a), sort(a))
How to create a QQ plot of azimuths to test rotational symmetry of a spherical point dataset?
You just want to study the azimuths of a set of spherical points $P_i$ relative to their spherical mean $\bar P$. The most straightforward solution solves the spherical triangles $(N,\bar P, P_i)$ wh
How to create a QQ plot of azimuths to test rotational symmetry of a spherical point dataset? You just want to study the azimuths of a set of spherical points $P_i$ relative to their spherical mean $\bar P$. The most straightforward solution solves the spherical triangles $(N,\bar P, P_i)$ where $N$ is the North Pole. Let the co-latitudes of the points $P_i$ and $\bar P$ (angles from North) be $a$ and $b$ respectively. Let $\gamma$ be the angle between them: it's just the difference between the longitudes of the same two points. The azimuth, with due East being zero and orienting angles counterclockwise, is determined by $$\arctan_2(\sin(b)\cos(a) - \cos(b)\sin(a)\cos(\gamma),\ \sin(a)\sin(\gamma))$$ where $\arctan_2(y,x)$ is the angle of a point $(x,y)$ in the plane. (This is supposed to be a numerically stable version of the formula, but I haven't tested it extensively.) In this example, $100$ points were generated according to a (symmetric) Fisher-von Mises distribution distributed throughout the southern and western hemispheres, along with another $50$ points focused in the south and east. The resulting distribution is not symmetric. The mean point is shown as a red triangle. Relative to the mean point, there is a cluster of points to its right (East) and upward (North), creating a swath of azimuths in the QQ plot between $0$ and $1$ (expressed in radians). The diffuse cluster to its west creates a broader swath of azimuths between $3$ and $5$. The QQ plot is clearly not uniform (for otherwise it would lie close to the dashed diagonal line), reflecting the bimodality of the spherical point distribution. The R code that produced this example can be used to generate azimuthal QQ plots for any data. It assumes the spherical coordinates are provided as rows in an array; the relevant rows are indexed by "phi" and "theta". # # Spherical triangle, two sides and included angle given. # Returns the angle `alpha` opposite `a`, in radians between 0 and 2*pi. # SAS <- function(a, gamma, b) { atan2(sin(b)*cos(a) - cos(b)*sin(a)*cos(gamma), sin(a)*sin(gamma)) %% (2*pi) } # # Cartesian coordinate conversion (for generating points). # xyz.to.spherical <- function(xyz) { xyz <- matrix(xyz, nrow=3) x <- xyz[1,]; y <- xyz[2,]; z <- xyz[3,] r2 <- x^2 + y^2 rho <- sqrt(r2 + z^2) theta <- pi/2 - atan2(z, sqrt(r2)) phi <- atan2(y, x) theta[x==0 && y==0] <- sign(z) * pi/2 return (rbind(rho, theta, phi)) } # # Generate random points on the sphere. # library(MASS) set.seed(17) n.1 <- 100 n.2 <- 50 mu.1 <- c(0,-1,-1/4) * 2 # Center of first distribution mu.2 <- c(1,1,-1/2) * 5 # Center of the second distribution Sigma <- outer(1:3, 1:3, "==") # Identity covariance matrix xyz.1 <- t(mvrnorm(n.1, mu.1, Sigma)) # Each column is a point xyz.2 <- t(mvrnorm(n.2, mu.2, Sigma)) xyz <- cbind(xyz.1, xyz.2) # The Cartesian coordinates rtf <- xyz.to.spherical(xyz) # The spherical coordinates (also in columns) # # Compute the spherical mean and the azimuths relative to that mean. # mean.rtf <- xyz.to.spherical(rowMeans(xyz)) a <- SAS(rtf["theta",], rtf["phi",]-mean.rtf["phi",], mean.rtf["theta",]) # # Plot the data and a QQ plot of the azimuths. 
# par(mfrow=c(1,2)) plot(c(-pi, pi), c(-1,1), type="n", xlab="Phi", ylab="Cos(theta)", main="Sample points and their mean") abline(h=0, col="Gray") # The Equator abline(v=0, col="Gray") # The Prime Meridian points(rtf["phi",], cos(rtf["theta", ]), col="#00000080") points(mean.rtf["phi",], cos(mean.rtf["theta",]), bg="Red", pch=24, cex=1.25) plot(c(0,1), c(0,2*pi), type="n", xlab="Quantile", ylab="Azimuth", main="Azimuthal QQ Plot") abline(c(0, 2*pi), lty=3, lwd=2, col="Gray") points(seq(0, 1, along.with=a), sort(a))
How to create a QQ plot of azimuths to test rotational symmetry of a spherical point dataset? You just want to study the azimuths of a set of spherical points $P_i$ relative to their spherical mean $\bar P$. The most straightforward solution solves the spherical triangles $(N,\bar P, P_i)$ wh
49,327
How to choose a regression tree (base learner) at each iteration of Gradient Tree Boosting?
The regions are not split based only on the data features. In each iteration of gradient boosting, you fit a regression tree to the pseudo-residuals, i.e. the negative gradient of the loss at the current prediction, $-\frac{\partial L(y_i, F(x_i))}{\partial F(x_i)},$ where $F$ is the function you have learned so far. Since these pseudo-residuals change at every iteration (because $F$ is different at every iteration), each base learner will learn to split up the data differently.
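A bare-bones illustration of this in R, using squared-error loss (so the negative gradient is simply y - F(x)); the data are simulated and rpart stands in for the base learner:

library(rpart)

set.seed(1)
n <- 500
x <- runif(n, 0, 10)
y <- sin(x) + rnorm(n, sd = 0.3)
dat <- data.frame(x = x, y = y)

M    <- 100              # boosting iterations
nu   <- 0.1              # learning rate
Fhat <- rep(mean(y), n)  # initial constant fit

for (m in 1:M) {
  dat$r <- y - Fhat                                # pseudo-residuals: new targets each round
  tree  <- rpart(r ~ x, data = dat,
                 control = rpart.control(maxdepth = 2, cp = 0))
  Fhat <- Fhat + nu * predict(tree, dat)           # each tree splits the data differently
}

plot(x, y, col = "grey")
points(x, Fhat, col = "red", pch = 16, cex = 0.5)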
How to choose a regression tree (base learner) at each iteration of Gradient Tree Boosting?
The regions are not split based only on the data features. In each iteration of gradient boosting, you fit a regression tree to the residuals of the loss function at the current prediction, $\frac{\pa
How to choose a regression tree (base learner) at each iteration of Gradient Tree Boosting? The regions are not split based only on the data features. In each iteration of gradient boosting, you fit a regression tree to the residuals of the loss function at the current prediction, $\frac{\partial L(y_i, F(x_i))}{\partial F(x_i)},$ where $F$ is the function you have learned so far. Since these residuals change at every iteration (because $F$ is different at every iteration), each base learner will learn to split up the data differently.
How to choose a regression tree (base learner) at each iteration of Gradient Tree Boosting? The regions are not split based only on the data features. In each iteration of gradient boosting, you fit a regression tree to the residuals of the loss function at the current prediction, $\frac{\pa
49,328
using caret and glmnet for variable selection
If you check the lambdas and your best lambda obtained from caret, you will see that it is not present in the model: lassoFit1$bestTune$lambda [1] 0.01545996 lassoFit1$bestTune$lambda %in% lassoFit1$finalModel$lambda [1] FALSE If you do: coef(lassoFit1$finalModel,lassoFit1$bestTune$lambda) 8 x 1 sparse Matrix of class "dgCMatrix" 1 (Intercept) -4.532659e-15 Population 1.493984e-01 Income . Illiteracy . Murder -7.929823e-01 HS.Grad 2.669362e-01 Frost -1.979238e-01 Area . It will give you the values for the tested lambda that is closest to your best-tune lambda. You can of course re-fit the model again with your specified lambda and alpha: fit = glmnet(x=statedata[,c(1:3,5,6,7,8)],y=statedata[,4], lambda=lassoFit1$bestTune$lambda,alpha=lassoFit1$bestTune$alpha) > fit$beta 7 x 1 sparse Matrix of class "dgCMatrix" s0 Population 0.1493747 Income . Illiteracy . Murder -0.7929223 HS.Grad 0.2669745 Frost -0.1979134 Area . Which you can see is close enough to the first approximation. To answer your other questions: I get the coefficients. Is this the best model? You did coef(cvfit, s="lambda.min"), which is the lambda with the least cross-validated error. If you read the glmnet paper, they go with Breiman's 1-SE rule (see this for a complete view), as it yields a less complicated model. You might want to consider using coef(cvfit, s="lambda.1se"). Does it test more lambdas in the cross-validation, is that true? Does caret or glmnet lead to a better model? It looks like cv.glmnet by default tests a defined number of lambdas, in this example it is 67, but you can specify more by passing lambda=<your set of lambdas to test>. You should get similar values using caret or cv.glmnet, but note that you cannot vary alpha with cv.glmnet(). How do I manage to extract the best final model from caret and glmnet and plug it into a Cox hazard model, for example? I guess you want to take the non-zero coefficients, and you can do this by #exclude intercept res = coef(cvfit, s="lambda.1se")[-1,] names(res)[which(res!=0)] [1] "Murder" "HS.Grad"
using caret and glmnet for variable selection
If you check the lambdas and your best lambda obtained from caret, you will see that it is not present in the model: lassoFit1$bestTune$lambda [1] 0.01545996 lassoFit1$bestTune$lambda %in% lassoFit1$f
using caret and glmnet for variable selection If you check the lambdas and your best lambda obtained from caret, you will see that it is not present in the model: lassoFit1$bestTune$lambda [1] 0.01545996 lassoFit1$bestTune$lambda %in% lassoFit1$finalModel$lambda [1] FALSE If you do: coef(lassoFit1$finalModel,lassoFit1$bestTune$lambda) 8 x 1 sparse Matrix of class "dgCMatrix" 1 (Intercept) -4.532659e-15 Population 1.493984e-01 Income . Illiteracy . Murder -7.929823e-01 HS.Grad 2.669362e-01 Frost -1.979238e-01 Area . It will give you the values from the lambda it tested, that is closest to your best tune lambda. You can of course re-fit the model again with your specified lambda and alpha: fit = glmnet(x=statedata[,c(1:3,5,6,7,8)],y=statedata[,4], lambda=lassoFit1$bestTune$lambda,alpah=lassoFit1$bestTune$alpha) > fit$beta 7 x 1 sparse Matrix of class "dgCMatrix" s0 Population 0.1493747 Income . Illiteracy . Murder -0.7929223 HS.Grad 0.2669745 Frost -0.1979134 Area . Which you can see is close enough to the first approximation. To answer your other questions: I get the coefficients. Is this the best model? You did coef(cvfit, s="lambda.min") which is the lambda with the least error. If you read the glmnet paper, they go with Breimen's 1SE rule (see this for a complete view), as it calls uses a less complicated model. You might want to consider using coef(cvfit, s="lambda.1se"). does test more lambdas in the cross validation, is that true? Does caret or glmnet lead to a better model?It looks like glmnet by default cv.glmnet test a defined number of lambdas, in this example it is 67 but you can specify more by passing lambda=<your set of lambda to test>. You should get similar values using caret or cv.glmnet, but note that you cannot vary alpha with cv.glmnet() How do I manage to extrage the best final model from caret and glmnet and plug it in a cox hazard model for example? I guess you want to take the non-zero coefficients. and you can do this by #exclude intercept res = coef(cvfit, s="lambda.1se")[-1,] names(res)[which(res!=0)] [1] "Murder" "HS.Grad"
using caret and glmnet for variable selection If you check the lambdas and your best lambda obtained from caret, you will see that it is not present in the model: lassoFit1$bestTune$lambda [1] 0.01545996 lassoFit1$bestTune$lambda %in% lassoFit1$f
49,329
Correcting naΓ―ve Sensitivity and Specificity for classifier tested against imperfect gold standard
Hugues, This should be relatively straightforward given one very crucial assumption, which we will get to. Let's establish some notation. Define $X$ to be the random variable obtained by randomly selecting a data point from your set and classifying it using your classifier, $Y$ as the random variable obtained by randomly selecting a data point from your set and getting its gold-standard class label, and $Z$ as the random variable obtained by randomly selecting a data point from your set and getting its true label. Now let's summarize the information we have so far. The things we know, or believe we know, are $$ P(X|Y=1), P(X|Y=0), P(Y|Z=1), P(Y|Z=0). $$ These are given by the sensitivity and specificity values that you have measured or assumed. So more succinctly we know: $$ P(X|Y), P(Y|Z) $$ What we want to know is $P(X|Z)$, the true sensitivity and specificity of your classifier. We can obtain this from $P(X,Y|Z)$ by summing over all (both) possible values of $Y$, if we can get $P(X,Y|Z)$. It is a simple consequence of the definition of conditional probability that $$ P(X,Y|Z) = P(X|Y,Z) \cdot P(Y|Z), $$ [if this is new to you, remove the $Z$ and it will be quite familiar]. But we don't know $P(X|Y,Z)$. Therefore the pivotal assumption, without which I don't think we can do anything (unless you know the true label of individuals, in which case you should train on that), is that $X,Z$ are conditionally independent given $Y$, in which case $P(X|Y,Z) = P(X|Y)$; that is, the only way the true label affects your prediction is through the (gold) label you used to train your predictor. So if you're comfortable with that assumption, then we can proceed to calculate: $$ P(X=1|Z=1) = P(X=1|Y=1)P(Y=1|Z=1) + P(X=1|Y=0)P(Y=0|Z=1) = 0.84\cdot0.95 + 0.05\cdot0.05 $$ I will leave the other calculation to you. Hope this was helpful!
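A small R sketch of the whole correction; the numbers are just the illustrative/hypothetical ones echoed above, and the result rests entirely on the conditional-independence assumption.

# classifier vs the imperfect gold standard (naive estimates)
sens_xy <- 0.84   # P(X=1 | Y=1)
spec_xy <- 0.95   # P(X=0 | Y=0)

# gold standard vs the truth (assumed or known from elsewhere)
sens_yz <- 0.95   # P(Y=1 | Z=1)
spec_yz <- 0.95   # P(Y=0 | Z=0)

# assuming X independent of Z given Y:
sens_xz <- sens_xy * sens_yz + (1 - spec_xy) * (1 - sens_yz)   # P(X=1 | Z=1)
spec_xz <- spec_xy * spec_yz + (1 - sens_xy) * (1 - spec_yz)   # P(X=0 | Z=0)
c(sensitivity = sens_xz, specificity = spec_xz)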
Correcting naΓ―ve Sensitivity and Specificity for classifier tested against imperfect gold standard
Hugues, This should be relatively straightforward given one very crucial assumption, that we will get to. Let's establish some notation. Let's define $X$ to be the random variable obtained by random
Correcting naΓ―ve Sensitivity and Specificity for classifier tested against imperfect gold standard Hugues, This should be relatively straightforward given one very crucial assumption, that we will get to. Let's establish some notation. Let's define $X$ to be the random variable obtained by randomly selecting a data point from your set and classifying it using your classifier. $Y$ as the random variable obtained by randomly selecting a data point from your set and getting it's gold standard class label. And $Z$ as the random variable obtained by randomly selecting a data point from your set and getting its true label. Now let's summarize the information we have so far. The things we know or believe we know are $$ P(X|Y=1), P(X|Y=0), P(Y|Z=1), P(Y|Z=0). $$ These are given by the sensitivity and specificity values that you have measured or assumed. So more succinctly we know: $$ P(X|Y), P(Y|Z) $$ What we want to know is $P(X|Z)$, the true sensitivity and specificity of your classifier. We can obtain this from $P(X,Y|Z)$ by summing over all (both) possible values of $Y$, if we can get $P(X,Y|Z)$. It is a simple consequence of the definition of conditional probability that $$ P(X,Y|Z) = P(X|Y,Z) \cdot P(Y|Z), $$ [if this is new to you remove the Z and it will be quite familiar]. But we don't know $P(X|Y,Z)$. Therefore the pivotal assumption, without which I don't think we can do anything (unless you know the true label of individuals in which case you should train on that), is that $X,Z$ are conditionally independent, given $Y$, in which case $P(X|Y,Z) = P(X|Y)$, that is the only way the true label effects your prediction is by effecting the (gold) label you used to train your predictor. So if your comfortable with that assumption, then we can proceed to calculate: $$ P(X=1|Z=1) = P(X=1|Y=1)P(Y=1|Z=1) + P(X=1|Y=0)P(Y=0|Z=1) = 0.84\cdot0.95 + 0.05\cdot0.05 $$ I will leave the other calculation to you. Hope this was helpful!
Correcting naΓ―ve Sensitivity and Specificity for classifier tested against imperfect gold standard Hugues, This should be relatively straightforward given one very crucial assumption, that we will get to. Let's establish some notation. Let's define $X$ to be the random variable obtained by random
49,330
Correct glmer distribution family and link for a continuous zero-inflated data set
Assuming that you are describing conditional and not marginal distributions (i.e., if your response variable is y then hist(mydata$y) will not typically give you what you want; you should be concerned with the distribution around the expected values): Changing the link function won't help you; it determines the dependence of the location on predictors, not the conditional distribution. I would recommend a two-stage approach: use a binomial model to fit zero vs. non-zero, then use either a Gamma model (probably with a log link; it's much more stable than the canonical inverse link) or (more flexibly) transform your non-zero values to make them approximately Normal. There are very few distributional models for positive data that admit zeros (Gamma, Weibull and log-Normal all give likelihood = zero for data exactly equal to zero, at least for some parameter regimes [log-Normal always; Gamma and Weibull for shape > 1, with an infinite density at zero for shape < 1]); in any case they don't account for a point mass (spike) at zero. Similarly, some data transformations (Box-Cox) will break with non-positive data; others (Yeo-Johnson) won't break, but won't handle a pile of zeros gracefully. The only real downside of the two-stage model is that the zero-vs-nonzero and conditional-if-nonzero models are completely independent. If you want to stick with the Gaussian assumption, you could do something nonparametric (bootstrapping or permutation tests) to try to make your results robust to violations of distributional assumptions. You could also try a model based on a Tweedie distribution; check out the cpglmm function from the cplm package.
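A minimal sketch of the two-stage approach with lme4, assuming a hypothetical data frame d with a response y >= 0, a fixed effect x and a grouping factor g; adapt the formulas to your own model.

library(lme4)

d$nonzero <- as.numeric(d$y > 0)

# stage 1: zero vs non-zero
m_zero <- glmer(nonzero ~ x + (1 | g), family = binomial, data = d)

# stage 2: the positive part only, Gamma with log link
m_pos <- glmer(y ~ x + (1 | g), family = Gamma(link = "log"),
               data = subset(d, y > 0))

summary(m_zero)
summary(m_pos)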
Correct glmer distribution family and link for a continuous zero-inflated data set
Assuming that you are describing conditional and not marginal distributions (i.e., if your response variable is y then hist(mydata$y) will not typically give you what you want; you should be concerned
Correct glmer distribution family and link for a continuous zero-inflated data set Assuming that you are describing conditional and not marginal distributions (i.e., if your response variable is y then hist(mydata$y) will not typically give you what you want; you should be concerned with the distribution around the expected values): Changing the link function won't help you; it determines the dependence of location on predictors, not the conditional distribution I would recommend a two-stage approach; use a binomial model to fit zero vs. non-zero, then use either a Gamma model (probably with a log link, it's much more stable than the canonical inverse link) or (more flexibly) transform your non-zero values to make them approximately Normal. There are very few distributional models for positive data that admit zeros (Gamma, Weibull, log-Normal all give likelihood=zero for data exactly equal to zero, at least for some parameter regimes [LN always, Gamma and Weibull for shape<1]; in any case they don't account for a point mass (spike) at zero. Similarly, some data transformations (Box-Cox) will break with non-positive data, others (Yeo-Johnson) won't break, but won't handle a pile of zeros gracefully. The only real downside of the two-stage model is that the zero-vs-nonzero and conditional-if-nonzero models are completely independent. If you want to stick with the Gaussian assumption, you could do something nonparametric (bootstrapping or permutation tests) to try to make your results robust to violations of distributional assumptions. You could try a model based on a Tweedie distribution; check out the cpglmm function from the cplm package.
Correct glmer distribution family and link for a continuous zero-inflated data set Assuming that you are describing conditional and not marginal distributions (i.e., if your response variable is y then hist(mydata$y) will not typically give you what you want; you should be concerned
49,331
Detect periodic events within data
I would not recommend turning the data into 0/1. I have had a lot of experience with daily bank payment data, ATM access, deposits etc. If the payments are systematic/regular on a particular day of the month then it is fairly straightforward to identify these patterns. If however the data is non-systematic then I would suggest a two-stage approach. Stage 1 would identify pulses and record the date and the magnitude of each pulse. The second stage would require a rule-based approach where you take the pulses and the dates that were associated with them and pool/analyze them according to your specification, i.e. things 1 day or 2 days apart from the regular day should be considered/classified as part of the same family. Another way is to use the size of the pulse/unusual value as the qualifier for grouping. Detecting unusual activity requires a model that incorporates the routine, e.g. day-of-the-week effects, holiday effects, weekly effects, monthly effects and any auto-correlative structure evidenced in your data. Commercially available software may have to be customized/improved to deal with this potentially thorny issue, but that's typical of how software develops as a result of its documented current inadequacy.
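As a crude illustration of the two stages in R, using a hypothetical daily data frame pay with columns date and amount; a real application would first account for day-of-week, holiday and ARIMA structure as discussed above.

# stage 1: flag pulses and record their date and magnitude
med  <- median(pay$amount)
madv <- mad(pay$amount)
pay$pulse <- (pay$amount - med) / madv > 4      # robust z-score threshold (arbitrary cutoff)
pulses <- pay[pay$pulse, c("date", "amount")]

# stage 2: rule-based pooling, e.g. by day of month; dates 1-2 days away from a
# regular day could then be folded into the same family
pulses$dom <- as.integer(format(pulses$date, "%d"))
table(pulses$dom)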
Detect periodic events within data
I would not recommend turning the data into 0/1 . I have had a lot of experience with daily bank payment data , ATM access , deposits etc.. If the payments are systematic/regular to a particular day
Detect periodic events within data I would not recommend turning the data into 0/1 . I have had a lot of experience with daily bank payment data , ATM access , deposits etc.. If the payments are systematic/regular to a particular day of the month then its is fairly straightforward to identify these patterns. If however the data is non-systematic then I would suggest a two stage approach. Stage 1 would identify pulses and record the date and the magnitude of the pulse. The second stage would require a rule-based approach where you took the pulses and the data es that were associated with them and pooled/analyzed them according to your specification i.e. things 1 day apart or 2 days apart from regular should be considered/classified as part of the same family. Another way is to use the size of the pulse/unusual value as the qualified for grouping. Detecting unusual activity requires a model that incorporates the routine e.g. day-of-the-week effects, holiday effects, weekly effects, monthly effects and any auto-correlative structure evidenced in your data. Commercially available software may have to customized/improved to deal with this potentially thorny issue but that's typical of how software develops as a result of it's documented current inadequacy.
Detect periodic events within data I would not recommend turning the data into 0/1 . I have had a lot of experience with daily bank payment data , ATM access , deposits etc.. If the payments are systematic/regular to a particular day
49,332
Measuring length of intervention effect
Given that you seem to have a panel of individuals who you follow over time, of which some are treated and others are not, you could run a difference-in-differences analysis. You could run a regression like $$y_{it} = \beta_1 (\text{treat}_{i}) + \beta_2 (\text{intervention}_t) + \beta_3 (\text{treat}_{i} \cdot \text{intervention}_t) + \epsilon_{it}$$ where $\text{treat}_{i}$ is a dummy for whether individual $i$ is in the treatment group, $\text{intervention}_t$ is a dummy for the post-treatment period, and the interaction between the two captures the treatment effect in $\beta_3$. If you now want to estimate the fading-out time, estimate instead $$y_{it} = \sum^m_{\gamma = 0} \beta_{\gamma}\,(\text{treatment}_{i,t-\gamma}) + \eta_{it}$$ where $\text{treatment}_{it}$ is a dummy variable which equals one if individual $i$ is in the treatment group AND time $t$ is at or after the treatment date. This estimates the first equation but with $m$ lags of the treatment, where you can choose $m$ up to the number of periods you have from the start of the treatment to the end of the sample period. Then $\beta_0$ is the treatment effect at the intervention date, $\beta_1$ is the effect of the intervention one period after the intervention date, and so on. The nice thing about this approach is that: it is easily implemented in any statistical software (you just need to create the dummies and run a regression); the $\beta_0, ..., \beta_m$ coefficients will have standard errors and confidence intervals which you can use to see after how many periods (lags of the treatment) the intervention stops having an effect; and the $\beta_0, ..., \beta_m$ coefficients will give you an estimate of the magnitude of the intervention's effect in subsequent periods. If you also have additional control variables, like characteristics of the study participants $X_{it}$, you can easily include them in the regression, $$y_{it} = \sum^m_{\gamma = 0} \beta_{\gamma}\,(\text{treatment}_{i,t-\gamma}) + X'_{it}\rho + \eta_{it}$$ This will not affect the estimate of the intervention effect (because identification comes from the differences between the treatment and control groups) but it helps to reduce residual variance and therefore increases precision.
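In R this is just dummy creation plus lm. A rough sketch with a hypothetical panel pd (columns id, time, y, treat) and an assumed intervention period tdate; a real specification would usually also keep the group and period main effects and controls.

tdate <- 10                                   # intervention period (hypothetical)
pd$post <- as.numeric(pd$time >= tdate)

# basic difference-in-differences: beta_3 is the coefficient on treat:post
did <- lm(y ~ treat * post, data = pd)
summary(did)

# one dummy per period since the intervention to trace out the fading effect
pd$since <- ifelse(pd$treat == 1 & pd$post == 1,
                   as.character(pd$time - tdate), "control")
pd$since <- relevel(factor(pd$since), ref = "control")

did_lags <- lm(y ~ since, data = pd)
summary(did_lags)    # coefficients since0, since1, ... correspond to beta_0, beta_1, ...
confint(did_lags)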
Measuring length of intervention effect
Given that you seem to have a panel of individuals who you follow over time of which some are treated and others are not you could run a difference in difference analysis. You could run a regression l
Measuring length of intervention effect Given that you seem to have a panel of individuals who you follow over time of which some are treated and others are not you could run a difference in difference analysis. You could run a regression like $$y_{it} = \beta_1 (\text{treat}_{i}) + \beta_2 (\text{intervention}_t) + \beta_3 (\text{treat}_{i} \cdot \text{intervention}_t) + \epsilon_{it}$$ where $\text{treat}_{i}$ is a dummy for whether individual $i$ is in the treatment group, $\text{intervention}_t$ is a dummy for the post-treatment period, and the interaction between the two captures the treatment effect in $\beta_3$. If you now want to estimate the fading out time, estimate instead $$y_{it} = \sum^m_{\gamma = 0} \beta_{-\gamma}(\text{treatment}_{it}) + \eta_{it}$$ where $\text{treatment}_{it}$ is a dummy variable which equals one if individual $i$ is in the treatment group AND time $t$ is at or after the treatment date. This estimates the first equation but with $m$ lags of the treatment for which you can choose the number of periods you have from the start of the treatment to the end of the sample period. Then $\beta_0$ is the treatment effect at the intervention date, $\beta_1$ is the effect of the intervention at the first period after the intervention date, and so on. The nice thing about this approach is that it is easily implemented in any statistical software (you just need to create the dummies and run a regression) the $\beta_0, ..., \beta_m$ coefficients will have standard errors and confidence intervals which you can use to see the time (lag of the treatment) from when the intervention stops to have an effect the $\beta_0, ..., \beta_m$ coefficients will give you an estimate of the magnitude the intervention had in subsequent periods If you also have additional control variables like characteristics of the study participants $X_{it}$ you can easily include them in the regression, $$y_{it} = \sum^m_{\gamma = 0} \beta_{-\gamma}(\text{treatment}_{it}) + X'_{it}\rho + \eta_{it}$$ this will not affect the estimate of the intervention effect (because identification comes from the group differences between treatment and control groups) but it helps to reduce residual variance and therefore increases precision.
Measuring length of intervention effect Given that you seem to have a panel of individuals who you follow over time of which some are treated and others are not you could run a difference in difference analysis. You could run a regression l
49,333
Measuring length of intervention effect
Intervention Detection http://www.unc.edu/~jbhill/tsay.pdf and elsewhere can be employed with or without a user-suggested intervention variable. In either case one needs to treat any auto-projective process that might be present i.e. the ARIMA structure. Identifying both the ARIMA structure and the response to any user-suggested variable and the "new intervention series" requires some trial and error as the search process/method is to identify if and when this waiting-to-be-discovered intervention arises. This search process can be solved/aided with software but one needs to confirm that the software incorporates any needed impact i.e. user-specified possible predictor series/variables including their lag structures and that the ARIMA process identification phase was not damaged/impacted by the intervention that is waiting-to-be-discovered which of course would have a deleterious effect. The length would be the difference between the point of the user-specified intervention variable ...if it existed OR the difference between the two identified Level Shift/Step Shift variables that were found/discovered/unmasked .
Measuring length of intervention effect
Intervention Detection http://www.unc.edu/~jbhill/tsay.pdf and elsewhere can be employed with or without a user-suggested intervention variable. In either case one needs to treat any auto-projective p
Measuring length of intervention effect Intervention Detection http://www.unc.edu/~jbhill/tsay.pdf and elsewhere can be employed with or without a user-suggested intervention variable. In either case one needs to treat any auto-projective process that might be present i.e. the ARIMA structure. Identifying both the ARIMA structure and the response to any user-suggested variable and the "new intervention series" requires some trial and error as the search process/method is to identify if and when this waiting-to-be-discovered intervention arises. This search process can be solved/aided with software but one needs to confirm that the software incorporates any needed impact i.e. user-specified possible predictor series/variables including their lag structures and that the ARIMA process identification phase was not damaged/impacted by the intervention that is waiting-to-be-discovered which of course would have a deleterious effect. The length would be the difference between the point of the user-specified intervention variable ...if it existed OR the difference between the two identified Level Shift/Step Shift variables that were found/discovered/unmasked .
Measuring length of intervention effect Intervention Detection http://www.unc.edu/~jbhill/tsay.pdf and elsewhere can be employed with or without a user-suggested intervention variable. In either case one needs to treat any auto-projective p
49,334
Comparing many performance curves: request for data visualization tips
As was the recommendation in Color and line thickness recommendations for line plots, small multiples are a common solution for plots that have problems with overplotting. Here is an example with the 5 curves you provided. It is a lot of information, but it is pretty easy to see that A, C & E are all decreasing. C & E have increasing variance for high parameter values, while B and D are fairly constant, and A hits the bottom (I would guess the metric has to be a positive value given the graphs.) Small multiples can extend to 15 curves, but when making the panels smaller IMO you should make the panels a bit more minimalist in the smaller space, e.g. ditch the gridlines, make the tick marks smaller and more sparse, etc. Error bars make the overplotting problem even more problematic, so it is harder to stuff multiple error bars into one graph if the trajectories overlap. One alternative way I like though is to use semi-transparent areas as opposed to the points and bars. My labeling could use some work, but here is an example with these curves (plus and minus two standard deviations to make the areas a bit wider). I had to put E in a separate panel, as it occluded A and C. Seeing the overlap of two areas is not that difficult given the right colors for the areas, three though is very difficult.
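If you build such a display in R, the small multiples and semi-transparent bands take only a few lines of ggplot2; perf is a hypothetical long-format data frame with columns curve, param, mean and sd, not the asker's data.

library(ggplot2)

ggplot(perf, aes(x = param, y = mean)) +
  geom_ribbon(aes(ymin = mean - 2 * sd, ymax = mean + 2 * sd),
              alpha = 0.3, fill = "steelblue") +      # semi-transparent error band
  geom_line() +
  facet_wrap(~ curve) +                               # one small panel per curve
  theme_minimal() +
  theme(panel.grid.minor = element_blank()) +         # keep the small panels sparse
  labs(x = "Parameter", y = "Metric")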
Comparing many performance curves: request for data visualization tips
As was the recommendation in Color and line thickness recommendations for line plots, small multiples are a common solution for plots that have problems with overplotting. Here is an example with the
Comparing many performance curves: request for data visualization tips As was the recommendation in Color and line thickness recommendations for line plots, small multiples are a common solution for plots that have problems with overplotting. Here is an example with the 5 curves you provided. It is a lot of information, but it is pretty easy to see that A, C & E are all decreasing. C & E have increasing variance for high parameter values, while B and D are fairly constant, and A hits the bottom (I would guess the metric has to be a positive value given the graphs.) Small multiples can extend to 15 curves, but when making the panels smaller IMO you should make the panels a bit more minimalist in the smaller space, e.g. ditch the gridlines, make the tick marks smaller and more sparse, etc. Error bars make the overplotting problem even more problematic, so it is harder to stuff multiple error bars into one graph if the trajectories overlap. One alternative way I like though is to use semi-transparent areas as opposed to the points and bars. My labeling could use some work, but here is an example with these curves (plus and minus two standard deviations to make the areas a bit wider). I had to put E in a separate panel, as it occluded A and C. Seeing the overlap of two areas is not that difficult given the right colors for the areas, three though is very difficult.
Comparing many performance curves: request for data visualization tips As was the recommendation in Color and line thickness recommendations for line plots, small multiples are a common solution for plots that have problems with overplotting. Here is an example with the
49,335
Clustering in Instrumental Variables Regression?
The relevant reference would be Shore-Sheppard (1996) "The Precision of Instrumental Variables Estimates With Grouped Data". You can directly calculate by how much the standard errors in 2SLS are over-estimated by using the Moulton factor $$\frac{Var(\widehat{\beta}^c)}{Var(\widehat{\beta}^{ols})} = 1 + \left(\frac{Var(n_g)}{\overline{n}} + \overline{n} -1 \right)\rho_z\rho $$ where $g$ are the groups, $\overline{n}$ is the average group size $$\rho_z = \frac{\sum_g \sum_{i\neq k}(z_{ig}-\overline{z})(z_{kg}-\overline{z})}{Var(z_{ig})\sum_g n_g (n_g - 1)} $$ is the intra-class correlation coefficient of the instrument $z$ and $\rho$ is the intra-class correlation coefficient of the second stage error - clustering in the first stage error does not matter for this. From this you see that your 2SLS standard error depends on the number of groups and their average sizes, and the two intra-class correlation coefficients. If you need more information on this have a look at these lecture notes by Steve Pischke.
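A direct transcription of those two formulas into R, with z the instrument, e the second-stage residuals and g the cluster id; sample variances are used as plug-ins, which is an approximation.

icc_pairwise <- function(x, g) {
  d <- x - mean(x)                       # deviations from the overall mean
  num <- 0; den <- 0
  for (dg in split(d, g)) {
    num <- num + sum(dg)^2 - sum(dg^2)   # sum over i != k within the group
    den <- den + length(dg) * (length(dg) - 1)
  }
  num / (var(x) * den)
}

moulton_factor <- function(z, e, g) {
  n_g   <- as.numeric(table(g))          # group sizes
  n_bar <- mean(n_g)
  1 + (var(n_g) / n_bar + n_bar - 1) * icc_pairwise(z, g) * icc_pairwise(e, g)
}

# sqrt(moulton_factor(z, e, g)) is the factor by which conventional standard
# errors are understated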
Clustering in Instrumental Variables Regression?
The relevant reference would be Shore-Sheppard (1996) "The Precision of Instrumental Variables Estimates With Grouped Data". You can directly calculate by how much the standard errors in 2SLS are over
Clustering in Instrumental Variables Regression? The relevant reference would be Shore-Sheppard (1996) "The Precision of Instrumental Variables Estimates With Grouped Data". You can directly calculate by how much the standard errors in 2SLS are over-estimated by using the Moulton factor $$\frac{Var(\widehat{\beta}^c)}{Var(\widehat{\beta}^{ols})} = 1 + \left(\frac{Var(n_g)}{\overline{n}} + \overline{n} -1 \right)\rho_z\rho $$ where $g$ are the groups, $\overline{n}$ is the average group size $$\rho_z = \frac{\sum_g \sum_{i\neq k}(z_{ig}-\overline{z})(z_{kg}-\overline{z})}{Var(z_{ig})\sum_g n_g (n_g - 1)} $$ is the intra-class correlation coefficient of the instrument $z$ and $\rho$ is the intra-class correlation coefficient of the second stage error - clustering in the first stage error does not matter for this. From this you see that your 2SLS standard error depends on the number of groups and their average sizes, and the two intra-class correlation coefficients. If you need more information on this have a look at these lecture notes by Steve Pischke.
Clustering in Instrumental Variables Regression? The relevant reference would be Shore-Sheppard (1996) "The Precision of Instrumental Variables Estimates With Grouped Data". You can directly calculate by how much the standard errors in 2SLS are over
49,336
Clustering in Instrumental Variables Regression?
I did some background research and found this here, which characterizes the clustering issue in IV regression. Naturally, the clustering of errors will only appear in the covariance matrix of the structural errors. Therefore it is nonsensical to write down clustered first-stage errors. Hence \begin{eqnarray} Y_{i,g} = X'_{i,g} \beta + \eta_{g} + \epsilon_{i,g} \end{eqnarray} would be the second-stage equation, while the first-stage equation remains unchanged.
Clustering in Instrumental Variables Regression?
I did some background research and found this here which characterizes the clustering issue in IV regression. Naturally, the clustering of errors will only appear in the covariance matrix of the struc
Clustering in Instrumental Variables Regression? I did some background research and found this here which characterizes the clustering issue in IV regression. Naturally, the clustering of errors will only appear in the covariance matrix of the structural errors. Therefore it is non-sensical to write down clustered first-stage errors. Hence \begin{eqnarray} Y_{i,g} = X'_{i,g} \beta + \eta_{g} + \epsilon_{i,g} \end{eqnarray} would be one line of the second stage regression while the other remains unchanged.
Clustering in Instrumental Variables Regression? I did some background research and found this here which characterizes the clustering issue in IV regression. Naturally, the clustering of errors will only appear in the covariance matrix of the struc
49,337
Clustering in Instrumental Variables Regression?
In the standard instrumental variable case with 2-SLS, you indeed do not need to take the errors in the first stage into account, as you say. However, if you were confronted with weak instruments, or want some more fancy endogeneity tests etc., then the usual weak-instrument asymptotics need to be adjusted for the presence of cluster heteroskedasticity. A good overview of this can be found in: Colin Cameron and Douglas L. Miller, "A Practitioner's Guide to Cluster-Robust Inference", Journal of Human Resources, forthcoming, Spring 2015, pages 33-34.
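If it is useful, here is a hedged sketch of cluster-robust inference for 2SLS in R; the data frame dat and its variables (y, x, w, z, cluster_id) are hypothetical placeholders, with w exogenous and z instrumenting x:
library(AER)       # ivreg()
library(sandwich)  # vcovCL()
library(lmtest)    # coeftest()
fit <- ivreg(y ~ x + w | z + w, data = dat)
coeftest(fit)                                                 # conventional SEs
coeftest(fit, vcov. = vcovCL(fit, cluster = dat$cluster_id))  # cluster-robust SEs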
Clustering in Instrumental Variables Regression?
In the standard instrumental variable case with 2-SLS, you indeed not do need to take into account the errors in the first stage as you say. However, if you were confronted with weak instruments, or
Clustering in Instrumental Variables Regression? In the standard instrumental variable case with 2-SLS, you indeed not do need to take into account the errors in the first stage as you say. However, if you were confronted with weak instruments, or want some more fancy endogeneity tests etc, then the usual weak instruments asymptotic need to be adjusted for the presence of cluster heteroskedasticity. A good overview of this can be found in: . Colin Cameron and Douglas L. Miller, "A Practitioner's Guide to Cluster-Robust Inference", Journal of Human Resources, forthcoming, Spring 2015, page 33-34.
Clustering in Instrumental Variables Regression? In the standard instrumental variable case with 2-SLS, you indeed not do need to take into account the errors in the first stage as you say. However, if you were confronted with weak instruments, or
49,338
How to interpret this PCA biplot?
Your interpretation is mostly correct. The first PC accounts for most of the variance, and the first eigenvector (principal axis) has all positive coordinates. It probably means that all variables are positively correlated with each other, and the first PC represents this "common factor". The second PC (which looks like it has much smaller variance) contrasts b5 and b7 with everything else. Is there a connection between the blue vectors direction and the position of the scores? Meaning, do variable vectors which end close to some scores have something to do with those same scores? Here is one way to look at it. Imagine that you had a data point with original coordinates $(1,0,0,\ldots)$, i.e. only one variable is equal to $1$ and others to zero. Then this imaginary data point would have PC scores at the end-point of your b1 vector. The same goes for other vectors as well. Having said that, as the original 6D space is projected onto 2D, many different points can be projected to the same 2D point, so if one blue vector, e.g. b1, has an end-point near one particular red point, it does not necessarily mean that this data point had coordinates $(1,0,0,\ldots)$. I should add that the above is true only for this particular normalization of a biplot, when blue lines correspond to eigenvectors, and the PC scores are not standardized. If data are well clustered, how do I interpret the results of the PCA in terms of what variable has a major (or minor) influence on the system? I don't really understand this question. I would not call these data "well clustered"; it rather looks like a unimodal distribution. And all variables seem to be pretty similar in your case, positively correlated with each other and contributing similarly to PC1/PC2.
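A small numerical check of that unit-vector intuition, on illustrative toy data (assumed here to have 6 variables named b1-b6; this is not your data): for a centred, unstandardized PCA, a point lying one unit away from the centre along b1 has PC scores equal to the b1 row of the rotation (loading) matrix.
set.seed(1)
X <- matrix(rnorm(200 * 6), ncol = 6, dimnames = list(NULL, paste0("b", 1:6)))
pca <- prcomp(X)                          # centred by default, not scaled
new <- pca$center + c(1, 0, 0, 0, 0, 0)   # one unit along b1 from the centre
predict(pca, newdata = t(new))[, 1:2]     # its scores on PC1 and PC2
pca$rotation["b1", 1:2]                   # the end-point of the b1 arrow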
How to interpret this PCA biplot?
Your interpretation is mostly correct. The first PC accounts for most of the variance, and the first eigenvector (principal axis) has all positive coordinates. It probably means that all variables are
How to interpret this PCA biplot? Your interpretation is mostly correct. The first PC accounts for most of the variance, and the first eigenvector (principal axis) has all positive coordinates. It probably means that all variables are positively correlated between each other, and the first PC represents this "common factor". The second PC (looks like it has much smaller variance) contrasts b5 and b7 from everything else. Is there a connection between the blue vectors direction and the position of the scores? Meaning, do variable vectors which end close to some scores have something to do with those same scores? Here is one way to look at it. Imagine that you had a data point with original coordinates $(1,0,0,\ldots)$, i.e. only one variable is equal to $1$ and others to zero. Then this imaginary data point would have PC scores as the end-point of your b1 vector. The same goes for other vectors as well. Having said that, as the original 6D space is projected on 2D, many different points can be projected to the same 2D point, so if one blue vector, e.g. b1 has an end-point near one particular red point, it does not necessarily mean that this data point had coordinates $(1,0,0,\ldots)$. I should add that the above is true only for this particular normalization of a biplot, when blue lines correspond to eigenvectors, and the PC scores are not standardized. If data are well clustered, how do I interpret the results of the PCA in terms of what variable has a major (or minor) influence on the system? I don't really understand this question. I would not call these data "well clustered", it rather looks like a unimodal distribution. And all variables seem to be pretty similar in your case, positively correlated between each other and contributing similarly to PC1/PC2.
How to interpret this PCA biplot? Your interpretation is mostly correct. The first PC accounts for most of the variance, and the first eigenvector (principal axis) has all positive coordinates. It probably means that all variables are
49,339
Confidence intervals of fitted Weibull survival function?
I just worked this out as I had the same question myself. My answer is largely thanks to Mara Tableman (p65 of Survival Analysis Using S/R). Let's say you have the survival function $S(t)$ using the Weibull distribution; the lower and upper bounds of the 95% confidence interval can be calculated for every $t$ of the follow-up: $$CI_{lo} = exp\left\{ log\left(S(t)\right)\times e^{z^*/\sqrt{n_u}}\right\}$$ $$CI_{hi} = exp\left\{ log\left(S(t)\right)\times e^{-z^*/\sqrt{n_u}}\right\}$$ where $z^*$ is the relevant quantile of the normal distribution ($z^*\approx1.96$ for a 95% CI) and $n_u$ is the number of events observed for the follow-up time (i.e., not the number of patients/subjects). Below is R code to calculate the confidence bounds for a vector, named S_t, of survival proportions (a vector of $S(t)$ at specified follow-up times, $t$). nu <- NA # number of events observed - replace NA with your value z_st <- qnorm(0.975) ci_lo <- exp(log(S_t)*exp(z_st/sqrt(nu))) ci_hi <- exp(log(S_t)*exp(-z_st/sqrt(nu)))
Confidence intervals of fitted Weibull survival function?
I just worked this out as I had the same question myself. My answer is largely thanks to Mara Tableman (p65 of Survival Analysis Using S/R). Let's say you have the survival function $S(t)$ using the
Confidence intervals of fitted Weibull survival function? I just worked this out as I had the same question myself. My answer is largely thanks to Mara Tableman (p65 of Survival Analysis Using S/R). Let's say you have the survival function $S(t)$ using the Weibull distribution, the lower and upper bounds of the 95% confidence interval can be calculated for every $t$ of the follow-up: $$CI_{lo} = exp\left\{ log\left(S(t)\right)\times e^{z^*/\sqrt{n_u}}\right\}$$ $$CI_{hi} = exp\left\{ log\left(S(t)\right)\times e^{-z^*/\sqrt{n_u}}\right\}$$ where $z^*$ is the relevant quantile of the normal distribution ($z^*\approx1.96$ for a 95% CI) and $n_u$ is the number of events observed for the follow-up time (i.e., not the number of patients/subjects). The below is R code to calculate the confidence bounds for a vector, named S_t, of survival proportions (a vector of $S(t)$ at specified follow-up times, $t$). nu # number of events, needs to be set z_st<-qnorm(0.975) ci_lo <- exp(log(S_t)*exp(z_st/sqrt(nu))) ci_hi <- exp(log(S_t)*exp(-z_st/sqrt(nu)))
Confidence intervals of fitted Weibull survival function? I just worked this out as I had the same question myself. My answer is largely thanks to Mara Tableman (p65 of Survival Analysis Using S/R). Let's say you have the survival function $S(t)$ using the
49,340
Confidence intervals of fitted Weibull survival function?
I found that @tystanza's answer (Tableman's equations) gives too conservative a confidence interval -- that is, the intervals are too wide. When I did a Monte Carlo simulation to estimate the coverage of the Tableman confidence interval, I found that nominal 95% CI's gave 100% coverage. I found another option from the package flexsurv. I don't think they directly expose the confidence intervals, but they can be found from "summary". Here is the hack workaround I found to get CI's. I did verify through MC sampling that the coverage of these CI's is just about right (I found the CI's contained the true value in 930/1000 cases for 95% confidence intervals -- pretty close). require(flexsurv) require(survival) sWei = flexsurvreg(Surv(samples, rep(1,N)) ~ 1, dist='weibull', cl = 1-alpha) s = unlist(summary(sWei)) lowerCI = s[(2*N+1):(3*N)] upperCI = s[(3*N+1):(4*N)]
Confidence intervals of fitted Weibull survival function?
I found that @tystanza's answer (Tableman's equations) give a too conservative confidence interval -- that is, they are too wide. When I did a Monte Carlo simulation to estimate the coverage of the Ta
Confidence intervals of fitted Weibull survival function? I found that @tystanza's answer (Tableman's equations) give a too conservative confidence interval -- that is, they are too wide. When I did a Monte Carlo simulation to estimate the coverage of the Tableman confidence interval, I found that nominal 95% CI's gave 100% coverage. I found another option from the package flexsurv. I don't think they directly expose the confidence intervals, but they can be found from "summary". Here is the hack workaround I found to get CI's. I did verify through MC sampling that the coverage of these CI's is just about right (I found the CI's contained the true value in 930/1000 cases for 95% confidence intervals -- pretty close). require(flexsurv) require(survival) sWei = flexsurvreg(Surv(samples, rep(1,N)) ~ 1, dist='weibull', cl = 1-alpha) s = unlist(summary(sWei)) lowerCI = s[(2*N+1):(3*N)] upperCI = s[(3*N+1):(4*N)]
Confidence intervals of fitted Weibull survival function? I found that @tystanza's answer (Tableman's equations) give a too conservative confidence interval -- that is, they are too wide. When I did a Monte Carlo simulation to estimate the coverage of the Ta
49,341
Tsallis and Rényi Normalized Entropy
Tsallis and Rényi entropies are the same thing, up to some rescaling. All of them are functions of $\sum_i p_i^\alpha$, with the special case of $\alpha\to1$ giving Shannon entropy. Look at Tom Leinster's "Entropy, Diversity and Cardinality (Part 2)", especially at the table comparing these properties. In short: Rényi entropies are in $[0, \log(N)]$, Tsallis entropies (called there $\alpha$-diversities) are in $[0, (1-N^{1-\alpha})/(1-\alpha)]$, and $\alpha$-cardinalities are in $[1, N]$. Also, two more options are: 1/cardinality, in $[\tfrac{1}{N}, 1]$, or just $\sum_i p_i^\alpha$, in $[\tfrac{1}{N^{\alpha-1}}, 1]$. The latter two have the advantage that no matter what $N$ is, they always end up in $[0, 1]$.
Tsallis and RΓ©nyi Normalized Entropy
Tsallis and RΓ©nyi entropy is the same thing, up to some rescaling. All of them are functions of $\sum_i p_i^\alpha$, with the special case of $\alpha\to1$ giving Shannon entropy. Look at Tom Leinster'
Tsallis and RΓ©nyi Normalized Entropy Tsallis and RΓ©nyi entropy is the same thing, up to some rescaling. All of them are functions of $\sum_i p_i^\alpha$, with the special case of $\alpha\to1$ giving Shannon entropy. Look at Tom Leinster's "Entropy, Diversity and Cardinality (Part 2)", especially at the table comparing these properties. In short: RΓ©nyi entropies are in $[0, \log(N)]$, Tsallis entropies (called there $\alpha$-diversities) are in $[0, (1-N^{1-\alpha})/(1-\alpha)]$, $\alpha$-cardinalities are in $[1, N]$. Also, one more way to go is to use: 1/cardinality, in $[\tfrac{1}{N}, 1]$, just $\sum_i p_i^\alpha$, in $[\tfrac{1}{N^{\alpha-1}}, 1]$. The later two have the advantage that no matter what is the $N$, they always end up in $[0, 1]$.
Tsallis and RΓ©nyi Normalized Entropy Tsallis and RΓ©nyi entropy is the same thing, up to some rescaling. All of them are functions of $\sum_i p_i^\alpha$, with the special case of $\alpha\to1$ giving Shannon entropy. Look at Tom Leinster'
49,342
Tsallis and Rényi Normalized Entropy
The answer provided by Piotr is mostly correct, but there is a tiny mistake. The maximum Tsallis entropy value is actually given by the expression: $(1-N^{1-\alpha})/(\alpha-1)$. The denominator order is reversed. This is because this expression is obtained when dealing with a uniform probability distribution $ P = \{\frac{1}{N}, \frac{1}{N}, ..., \frac{1}{N}\}$. Considering the Tsallis entropy defined by the expression: \begin{align} T &= \frac{1}{\alpha-1} \sum_{j=1}^{N}(P_{j} - (P_{j})^\alpha)\\ \end{align} In this particular scenario, all instances of $ P_{j} $ will be equal to $\frac{1}{N}$. This means the sum can be reduced to $N\times(\frac{1}{N}-(\frac{1}{N})^\alpha)$. Therefore, we can rewrite this in the following way: \begin{align} T_{\max} &= \frac{1}{\alpha-1}\times\left \{N\times\left[\frac{1}{N}-\left(\frac{1}{N}\right)^\alpha\right]\right\}\\ T_{\max} &= \frac{1}{\alpha-1}\times (1-N^{1-\alpha})\\ T_{\max} &= \frac{1-N^{1-\alpha}}{\alpha-1}\\ \end{align}
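A hedged R sketch of the normalisation that follows from these maxima (the probability vector p is an arbitrary example; $\alpha=1$ is the Shannon limit and would need separate handling):
tsallis <- function(p, alpha) (1 - sum(p^alpha)) / (alpha - 1)
renyi   <- function(p, alpha) log(sum(p^alpha)) / (1 - alpha)
tsallis_norm <- function(p, alpha) {
  N <- length(p)
  tsallis(p, alpha) / ((1 - N^(1 - alpha)) / (alpha - 1))    # divide by T_max
}
renyi_norm <- function(p, alpha) renyi(p, alpha) / log(length(p))  # divide by log(N)
p <- c(0.5, 0.2, 0.2, 0.1)
tsallis_norm(p, alpha = 2)        # in [0, 1]
renyi_norm(p, alpha = 2)          # in [0, 1]
tsallis_norm(rep(1/4, 4), 2)      # uniform distribution gives 1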
Tsallis and RΓ©nyi Normalized Entropy
The answer provided by Piotr is mostly correct, but there is a tiny mistake. The maximum Tsallis entropy value is actually given by the expression: $(1βˆ’N^{1βˆ’\alpha})/(\alpha-1)$. The denominator order
Tsallis and RΓ©nyi Normalized Entropy The answer provided by Piotr is mostly correct, but there is a tiny mistake. The maximum Tsallis entropy value is actually given by the expression: $(1βˆ’N^{1βˆ’\alpha})/(\alpha-1)$. The denominator order is reversed. This is because this expression is obtained when dealing with a uniform probability distribution $ P = \{\frac{1}{N}, \frac{1}{N}, ..., \frac{1}{N}\}$ Considering the Tsallis entropy defined by the expression: \begin{align} T &= \frac{1}{\alpha-1} \sum_{j=1}^{N}(P_{j} - (P_{j})^\alpha)\\ \end{align} In this particular scenario, all instances of $ P_{j} $ will be equal to $\frac{1}{N}$. This means the sum can be reduced to $N\times(\frac{1}{N}-(\frac{1}{N})^\alpha)$. Therefore, we can rewrite this in the following way: \begin{align} T_{\max} &= \frac{1}{Ξ±-1}\times\left \{N\times\left[\frac{1}{N}-\left(\frac{1}{N}\right)^\alpha\right]\right\}\\ T_{\max} &= \frac{1}{\alpha-1}\times (1-N^{1-\alpha})\\ T_{\max} &= \frac{1βˆ’N^{1βˆ’\alpha}}{\alpha-1}\\ \end{align}
Tsallis and RΓ©nyi Normalized Entropy The answer provided by Piotr is mostly correct, but there is a tiny mistake. The maximum Tsallis entropy value is actually given by the expression: $(1βˆ’N^{1βˆ’\alpha})/(\alpha-1)$. The denominator order
49,343
Are cross-validated prediction errors i.i.d?
I think you need to be clear what distribution you need to represent. This differs according to what the cross validation is meant for. In the case that the cross validation is meant to measure (approximate) the performance of the model obtained from this particular training set, the corresponding distribution would be the distribution of cases in the training set at hand. From that perspective, you draw almost the entire population, though without replacement. In contrast, if you are asking about the distribution of $n$ cases drawn from the population the training set was drawn from, then the cross validation resampled surrogate training sets are correlated. See e.g. Bengio, Y. and Grandvalet, Y.: No Unbiased Estimator of the Variance of K-Fold Cross-Validation, Journal of Machine Learning Research, 2004, 5, 1089-1105. This is important for comparisons of which algorithm performs better for a particular type of data.
Are cross-validated prediction errors i.i.d?
I think you need to be clear what distribution you need to represent. This differers according to what the cross validation is meant for. In the case that the cross validation is meant to measure (
Are cross-validated prediction errors i.i.d? I think you need to be clear what distribution you need to represent. This differers according to what the cross validation is meant for. In the case that the cross validation is meant to measure (approximate) the performance of the model obtained from this particular training set, the corresponding distribution would be the distribution of cases in the training set at hand. From that perspective, you draw almost the entire population, though without replacement. In contrast, if you are asking about the distribution of $n$ cases drawn from the population the training set was drawn from, then the cross validation resampled surrogate training sets are correlated. See e.g. Bengio, Y. and Grandvalet, Y.: No Unbiased Estimator of the Variance of K-Fold Cross-Validation Journal of Machine Learning Research, 2004, 5, 1089-1105. This is important for comparisons which algorithm performs better for a particular type of data.
Are cross-validated prediction errors i.i.d? I think you need to be clear what distribution you need to represent. This differers according to what the cross validation is meant for. In the case that the cross validation is meant to measure (
49,344
Are cross-validated prediction errors i.i.d?
They can't be independent. Consider adding one extreme outlier sample: many of your cross-validation folds will then be skewed in a correlated way.
Are cross-validated prediction errors i.i.d?
They can't be independent. Consider adding one extreme outlier sample, then many of your cross validation folds will be skewed in a correlated way.
Are cross-validated prediction errors i.i.d? They can't be independent. Consider adding one extreme outlier sample, then many of your cross validation folds will be skewed in a correlated way.
Are cross-validated prediction errors i.i.d? They can't be independent. Consider adding one extreme outlier sample, then many of your cross validation folds will be skewed in a correlated way.
49,345
Is the objective to beat a random classifier when the data set is skewed using PR curves?
A random classifier randomly selects a subset of the total data and labels it as positive. The size of said subset is associated with the recall of the random classifier. Since predictions are done entirely at random, the expected precision of such a labeling is equal to the fraction of positives in the total data set (at any recall). Hence, the PR curve of a random classifier is a horizontal line at precision=$\rho$, where $\rho$ is the fraction of positives in the total data set. The AUC is then immediately also equal to $\rho$. In PR space the AUC of a random model is directly related to the class balance. An AUC of 0.5 can mean a tremendously good model for high class skew. Always compare PR-AUC for the given class skew; don't compare it to the balanced setting. To answer your question: in general you do want to beat a random classifier, where random in PR space means having the curve I explained above. In practice your objective depends entirely on what you want to do. Obviously, being worse than random is usually a very serious problem, but it doesn't necessarily matter. For instance, if your application requires a model with high recall, you don't care if said model is worse than random at low recall.
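A quick base-R simulation (illustrative numbers, not your data) showing the baseline described above: a classifier with random scores has precision hovering around the positive fraction $\rho$ at every recall.
set.seed(42)
n      <- 10000
rho    <- 0.05                      # class skew: 5% positives
labels <- rbinom(n, 1, rho)
scores <- runif(n)                  # completely uninformative scores
ord       <- order(scores, decreasing = TRUE)
tp        <- cumsum(labels[ord])
precision <- tp / seq_len(n)
recall    <- tp / sum(labels)
plot(recall, precision, type = "l", ylim = c(0, 1))
abline(h = rho, col = "red", lty = 2)   # theoretical random baseline = rho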
Is the objective to beat a random classifier when the data set is skewed using PR curves?
A random classifiers randomly selects a subset of the total data and labels it as positive. The size of said subset is associated with the recall of the random classifier. Since predictions are done e
Is the objective to beat a random classifier when the data set is skewed using PR curves? A random classifiers randomly selects a subset of the total data and labels it as positive. The size of said subset is associated with the recall of the random classifier. Since predictions are done entirely at random, the expected precision of such a labeling is equal to the fraction of positives in the total data set (at any recall). Hence, the PR curve of a random classifier is a horizontal line at precision=$\rho$ where $\rho$ is the fraction of positives in the total data set. The AUC is then immediately also equal to $\rho$. In PR space the AUC of a random model is directly related to the class balance. An AUC of 0.5 can mean a tremendously good model for high class skew. Always compare PR-AUC for the given class skew, don't compare it to the balanced setting. To answer your question: in general you do want to beat a random classifier, where random in PR space means having the curve I explained above. In practice your objective depends entirely on what you want to do. Obviously, being worse than random is usually a very serious problem but it doesn't necessarily matter. For instance, if your application requires a model with high recall, you don't care if said model is worse than random at low recall.
Is the objective to beat a random classifier when the data set is skewed using PR curves? A random classifiers randomly selects a subset of the total data and labels it as positive. The size of said subset is associated with the recall of the random classifier. Since predictions are done e
49,346
Asymmetric measure of non-linear dependence/correlation?
The $R^2$ of a multivariate regression model is such an asymmetric measure. The regression model of $Y$ on $X$ leads to a different $R^2$ than the regression model of $X$ on $Y$. This is because the value is computed as the proportion of the vertical spread around the mean that is accounted for by the line of best fit, which uses the conditional mean of $Y$ given $X$ to predict average potential outcomes for $Y$. EDIT: In the discourse below, it was shown that the $R^2$ may be conserved and "symmetric". The $R^2$ has a particular application and is useful for summarizing high-dimensional predictive models. In general, dependence is a sophisticated mathematical concept, and you can rarely say much about the dependence between two variables without making strong (and often untestable) assumptions. I think for conveying aspects of the interrelationship between two variables in an applied setting, the term "association" is much better. For smaller models, simply using the coefficient and its 95% confidence interval from a linear regression model is sufficient for reporting the first-order trend in those data. These are well-established association measures. Even if the trend is possibly nonlinear, a linear regression model has a coefficient that is taken to be a "rule-of-thumb" difference in outcomes for a unit difference in some regressor. These will necessarily be different for regression models treating a $Y$ variable as an outcome or a regressor. I see models of this form presented often in the literature with as many as 20 adjustment variables in large sample sizes.
Asymmetric measure of non-linear dependence/correlation?
The $R^2$ of a multivariate regression model is such an asymmetric measure. The regression model of $Y$ on $X$ leads to a different $R^2$ than the regression model of $X$ on $Y$. This is because the v
Asymmetric measure of non-linear dependence/correlation? The $R^2$ of a multivariate regression model is such an asymmetric measure. The regression model of $Y$ on $X$ leads to a different $R^2$ than the regression model of $X$ on $Y$. This is because the value is computed using the proportion of vertical distance from the mean accounted for by the line of best fit using the conditional mean of $Y$ on $X$ to predict average potential outcomes for $Y$. EDIT: In the discourse below, it was shown that the $R^2$ may be conserved and "symmetric". The $R^2$ has a particular application and is useful for summarizing high dimensional predictive models. In general, dependence is a sophisticated mathematical concept, and you can rarely inform much about the dependence between two variables without making strong (and often untestable) assumptions. I think for conveying aspects of the interrelationship between two variables in an applied setting, the term "association" is much better. For smaller models, simply using the coefficient and its 95% confidence interval from a linear regression model is sufficient for reporting the first order trend in those data. These are well established association measures. Even if the trend is possibly nonlinear, a linear regression model has a coefficient that is taken to be a "rule-of-thumb" difference in outcomes for a unit difference in some regressor. These will necessarily be different for regression models treating a $Y$ variable as an outcome or a regressor. I see models of this form presented often in the literature with as many as 20 adjustment variables in large sample sizes.
Asymmetric measure of non-linear dependence/correlation? The $R^2$ of a multivariate regression model is such an asymmetric measure. The regression model of $Y$ on $X$ leads to a different $R^2$ than the regression model of $X$ on $Y$. This is because the v
49,347
Asymmetric measure of non-linear dependence/correlation?
It is possible to define a nonsymmetric dependence measure R(X,Y) such that R(X,Y)=0 if and only if Y is independent of X, and R(X,Y)=1 if and only if Y is a function of X. X and Y can be vectors of random variables, continuous or discrete. An example would be a circle X^2 + Y^2 = 1, where neither X nor Y is a function of the other. Traditional symmetric dependence measures such as mutual information or Hellinger distance produce maximum dependence, but the new measure gives value R(X,Y)=R(Y,X)=0.5. Another example is Y=X^2. Again traditional measures give maximum value, but the new measure gives values R(X,Y)=1, R(Y,X)=0.5. Linear correlation gives 0 in both examples. The measure satisfies a new set of conditions different from Rényi's axioms for symmetric dependence measures. In the continuous case, the measure is based on copulas, thus nonparametric. Please check out my recent work on arXiv: 1502.03850, 1511.02744, 1512.07945 on bivariate, multivariate and discrete nonsymmetric dependence measures. Here are some more details: The nonsymmetric dependence measure R(X,Y) is defined as the distance between the cumulative distribution of Y conditional on X and the unconditional cumulative distribution of Y. It equals zero when the two distributions are the same, which implies Y is independent of X. It takes its maximum value when Y is a function of X, or equivalently when the cumulative distribution of Y conditional on X has a single jump from zero to one. This can be extended to n dimensions, where R(X1,...,Xn,Y) is defined as the distance between the cumulative distribution of Y conditional on X1,...,Xn and the unconditional cumulative distribution of Y. It takes its minimum value (zero) when Y is independent of X1,...,Xn. It takes its maximum value when Y is a function of X1,...,Xn.
Asymmetric measure of non-linear dependence/correlation?
Want to improve this post? Provide detailed answers to this question, including citations and an explanation of why your answer is correct. Answers without enough detail may be edited or deleted.
Asymmetric measure of non-linear dependence/correlation? Want to improve this post? Provide detailed answers to this question, including citations and an explanation of why your answer is correct. Answers without enough detail may be edited or deleted. It is possible to define nonsymmetric dependence measure R(X,Y) such that R(X,Y)=0 if and only if Y is independent of X, R(X,Y)=1 if and only if Y is a function of X. X and Y can be vector of random variables, continuous or discrete. An example would be a circle X^2 + Y^2 = 1, where neither X nor Y is a function of the other. Traditional symmetric dependence measures such as mutual information or Hellinger distance produce maximum dependence, but the new measure gives value R(X,Y)=R(Y,X)=0.5. Another example is Y=X^2. Again traditional measures give maximum value, but the new measure gives values R(X,Y)=1, R(Y, X)=0.5. Linear correlation gives 0 in both examples. The measure satisfies a new set of conditions different from Renyi's axioms for symmeyric dependence measures. In the continuous case, the measure is based on copula, thus nonparametric. Please check out my recent work on arXiv: 1502.03850, 1511.02744, 1512.07945 on bivariate, multivariate and discrete nonsymmetric dependence measures. Here are some more details: The nonsymmetric dependence measure R(X,Y) is defined as distance between the cumulative distribution of Y conditional on X and the unconditional cumulative distribution of Y. It equals zero when the two distributions are the same, which implies Y is independent of X. It takes maximum value when Y is a function of X, or the cumulative distribution of Y conditional on X has a single jump from zero to one. This can be extended to n-dimensions, where R(X1,...,Xn,Y) is defined as distance between the cumulative distribution of Y conditional on X1,...,Xn and the unconditional cumulative distribution of Y. It takes minimum value (zero) when Y is independent of X1,...,Xn. It takes maximum value when Y is a function of X1,...,Xn.
Asymmetric measure of non-linear dependence/correlation? Want to improve this post? Provide detailed answers to this question, including citations and an explanation of why your answer is correct. Answers without enough detail may be edited or deleted.
49,348
How does one create a confidence interval for the ratio of the means of two non-normal bounded distributions?
What it seems like you're really trying to do is: (1) prove that there is a linear relationship between $X_i$ and $Y_i$, and (2) determine the constant proportion relating the two with some degree of certainty. I think the separation is important because if your assumption "$X_i/Y_i$ and $Y_i$ are not independent" was meant to mean you believe the ratio of the two is truly dependent on $Y_i$, then this is equivalent to saying that there is NOT a linear relationship between $X$ and $Y$. This also means that the target quantity $\sum{X_i} / \sum{Y_i}$ may not actually represent a quantity that exists in the system being sampled. In other words, if you can't prove #1 above then there's really no sense in trying to tackle #2 since the estimated quantity might be a poor reflection of what's going on. Hopefully this example taken from your comment will help: Let's say that $X_i$ is the number of Democrats in a US county with population $Y_i$ and that the $X$ and $Y$ values are not independent. Then the relationship could be nice and straightforward like this, where the estimate you seem to be after, $\hat{p} = \frac{1}{n}\sum_{i=1}^n{X_i/Y_i}$, would be pretty accurate for most counties and would increase slightly with the overall population: Scenario 1: slight linear relationship between $X_i/Y_i$ and $Y_i$. Above, a confidence interval for $\hat{p}$ would be useful since it could be used to predict $Y$ for a given $X$ pretty reliably, because there is not a strong linear relationship between $X_i/Y_i$ and $Y_i$. On the other hand, in a scenario like the one below, the confidence interval for $\hat{p}$ is really kind of useless, since it spans such a large range of the possible proportions: Scenario 2: strong linear relationship between $X_i/Y_i$ and $Y_i$. My point then is that if you think $X_i/Y_i$ increases or decreases with $Y_i$, then I would be very surprised if there is any theoretical bound for the confidence interval given that it's pretty straightforward to come up with ways that interval can span nearly all of [0, 1]. Overall, I think it would be smart to take a closer look at how the fraction $X_i/Y_i$ changes with $Y_i$ and if you can't convince yourself that the dependence between the two is small (possibly with a linear regression of $X$ on $Y$ and examination of the residuals), then I don't see how the estimate you're pulling now will be representative of much. OR, you could ignore all of this and jump straight to computing bootstrap confidence intervals for the statistic $\hat{p} = \frac{1}{n}\sum_{i=1}^n{X_i/Y_i}$, but again, if there is a strong dependence between this $\hat{p}$ and $Y_i$, any inference made with this estimate may not be very accurate.
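For the bootstrap route mentioned at the end, here is a hedged R sketch; the data frame dat with columns X and Y is a hypothetical stand-in for the sampled sub-populations, and the statistic shown is the mean of the per-group ratios (swap in sum(X)/sum(Y) if that is the target):
library(boot)
ratio_stat <- function(d, idx) mean(d$X[idx] / d$Y[idx])   # resample whole groups
b <- boot(dat, ratio_stat, R = 2000)
boot.ci(b, type = c("perc", "bca"))   # percentile and BCa intervals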
How does one create a confidence interval for the ratio of the means of two non-normal bounded distr
What it seems like you're really trying to do is: Prove that there is a linear relationship between $X_i$ and $Y_i$ Determine the constant proportion relating the two with some degree of certainty.
How does one create a confidence interval for the ratio of the means of two non-normal bounded distributions? What it seems like you're really trying to do is: Prove that there is a linear relationship between $X_i$ and $Y_i$ Determine the constant proportion relating the two with some degree of certainty. I think the separation is important because if your assumption "$X_i/Y_i$ and $Y_i$ are not independent" was meant to mean you believe the ratio of the two is truly dependent on $Y_i$, then this is equivalent to saying that there is NOT a linear relationship between $X$ and $Y$. This also means that the target quantity $\sum{X_i} / \sum{Y_i}$ may not actually represent a quantity that exists in the system being sampled. In other words, if you can't prove #1 above then there's really no sense in trying to tackle #2 since the estimated quantity might be a poor reflection of what's going on. Hopefully this example taken from your comment will help: Let's say that $X_i$ is the number of democrats in a US county with population $Y_i$ and that the $X$ and $Y$ values are not independent. Then the relationship could be nice and straightforward like this, where the estimate you seem to be after, $\hat{p} = \frac{1}{n}\sum_{i=1}^n{X_i/Y_i}$, would be pretty accurate for most countries and would increase slightly with the overall population: Scenario 1 Slight linear relationship between $X_i/Y_i$ and $Y_i$ Above, a confidence interval for $\hat{p}$ would be useful since it could be used to predict $Y$ for a given $X$ pretty reliably since there is not a strong linear relationship between $X_i/Y_i$ and $Y_i$. On the other hand, in a scenario like the one below, the confidence interval for $\hat{p}$ is really kind of useless and since it spans such a large number of the possible proportions: Scenario 2 Strong linear relationship between $X_i/Y_i$ and $Y_i$ My point then is that if you think $X_i/Y_i$ increases or decreases with $Y_i$, then I would be very surprised if there is any theoretical bound for the confidence interval given that it's pretty straightforward to come up with ways that interval can span nearly all of [0, 1]. Overall, I think it would be smart to take a closer look at how the fraction $X_i/Y_i$ changes with $Y_i$ and if you can't convince yourself that the dependence between the two is small (possibly with a linear regression of $X$ to $Y$ and examination of the residuals), then I don't see how the estimate you're pulling now will be representative of much. OR, you could ignore all of this and jump straight to computing bootstrap confidence intervals for the statistic $\hat{p} = \frac{1}{n}\sum_{i=1}^n{X_i/Y_i}$, but again, if there is a strong dependence between this $\hat{p}$ and $Y_i$, any inference made with this estimate may not be very accurate.
How does one create a confidence interval for the ratio of the means of two non-normal bounded distr What it seems like you're really trying to do is: Prove that there is a linear relationship between $X_i$ and $Y_i$ Determine the constant proportion relating the two with some degree of certainty.
49,349
How does one create a confidence interval for the ratio of the means of two non-normal bounded distributions?
For your application, where what you care about is ultimately the probability that an individual has property P, it seems that one of two simpler types of analysis should suffice. These analyses do not depend on distributions of the $X_i$ or $Y_i$, but simply on whether an individual has property P and, possibly, to which sub-population it belongs. If all of your sampled sub-populations are independent random samples of the population as a whole, then what you describe is simply a Bernoulli trial, like flipping a biased coin many times. Your estimate of the overall proportion is simply the ratio of all $X$ cases to the total number of cases observed. This page on Bernoulli Confidence Intervals describes how to proceed for confidence intervals. You fear, however, that the probability of having property P may differ among sub-populations depending on some characteristic of the specific sub-populations, such as their size. Then you need to know which sub-population an individual belongs to in order to estimate the individual's chance of having property P. This latter possibility seems like a good candidate for logistic regression, where you examine how the probability of an individual having property P relates to one or more covariates associated with the individual. I don't know of a reason why the size of the sub-population containing an individual can't be such a covariate. Statistical programs that perform logistic regressions will provide confidence intervals for odds ratios, which can be translated back to the probability scale. The model you use to set up the regression should ideally be based on domain knowledge. If you instead use your data set to set up the model (like deciding whether you examine a relation to the absolute size versus the log of the size of the sub-population), your confidence intervals will be too optimistic.
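A hedged R sketch of the logistic-regression route; the individual-level data frame dat and its columns hasP (0/1) and pop (the size of the individual's sub-population) are hypothetical names, and using log(pop) rather than pop is just one of the modelling choices discussed above:
fit <- glm(hasP ~ log(pop), family = binomial, data = dat)
summary(fit)
exp(cbind(OR = coef(fit), confint(fit)))   # odds ratios with 95% CIs
# predicted probability (with a CI on the probability scale) for a given size
pr <- predict(fit, newdata = data.frame(pop = 5000), type = "link", se.fit = TRUE)
plogis(pr$fit + c(-1.96, 0, 1.96) * pr$se.fit)   # lower bound, estimate, upper bound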
How does one create a confidence interval for the ratio of the means of two non-normal bounded distr
For your application, where what you care about is ultimately the probability that an individual has property P, it seems that one of two simpler types of analysis should suffice. These analyses do no
How does one create a confidence interval for the ratio of the means of two non-normal bounded distributions? For your application, where what you care about is ultimately the probability that an individual has property P, it seems that one of two simpler types of analysis should suffice. These analyses do not depend on distributions of the $X_i$ or $Y_i$, but simply whether an individual has property P and, possibly, to which sub-population it belongs. If all of your sampled sub-populations are independent random samples of the population as a whole, then what you describe is simply a Bernouilli trial, like flipping a biased coin many times. Your estimate of the overall proportion is simply the ratio of all $X$ cases to the total number of cases observed. This page on Bernouilli Confidence Intervals describes how to proceed for confidence intervals. You fear, however, that the probability of having property P may differ among sub-populations depending on some characteristic of the specific sub-populations, such as their size. Then you need to know which sub-population an individual belongs to in order to estimate the individual's chance of having property P. This latter possibility seems like a good candidate for logistic regression, where you examine how the probability of an individual having property P relates to one or more covariates associated with the individual. I don't know of a reason why the size of the sub-population containing an individual can't be such a covariate. Statistical programs that perform logistic regressions will provide confidence intervals for odds ratios, which can be translated back to the probability scale. The model you use to set up the regression should ideally be based on domain knowledge. If you instead use your data set to set up the model (like deciding whether you examine a relation to the absolute size versus the log of the size of the sub-population), your confidence intervals will be too optimistic.
How does one create a confidence interval for the ratio of the means of two non-normal bounded distr For your application, where what you care about is ultimately the probability that an individual has property P, it seems that one of two simpler types of analysis should suffice. These analyses do no
49,350
What to do about very unstable mixed-effects models
Actually, in addition to the random intercepts, you are adding 4 random effects (assuming that none of the Var1, Var2, t_before, and t_after variables are factors) that are all allowed to be correlated. That's $5$ variance components plus $(5\times4)/2 = 10$ correlations, so $15$ var-cov parameters for the random effects alone. In addition, you have $5$ fixed effects (again, assuming no factors), so we are up to $15 + 5 = 20$ parameters in total. Unless you have a large dataset, it's no surprise that convergence is an issue. Just to make sure -- all of the variables for which you are adding random effects (i.e., Var1, Var2, t_before, and t_after) should be non-constant within SiteID (otherwise it's not possible/sensible to add random effects for these variables). You could consider assuming that all of the random effects are independent. That would be: mod <- glmer(Outcome ~ Exposure + Var1 + Var2 + t_before + t_after + (1|SiteID) + (0+Var1|SiteID) + (0+Var2|SiteID) + (0+t_before|SiteID) + (0+t_after|SiteID) + offset(log(PersonDays)), family=poisson, data=data, control=glmerControl(optimizer="bobyqa", optCtrl = list(maxfun = 500000))) Then you are down to $5$ fixed effects plus $5$ variance components, so $10$ parameters in total. Whether it is reasonable to assume independence among the random effects is something you need to consider though.
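Assuming Var1, Var2, t_before, and t_after really are numeric (as above), lme4's double-bar syntax gives a more compact way to write the same independent-random-effects specification; this is just a sketch of the same model, not a different one:
mod_indep <- glmer(Outcome ~ Exposure + Var1 + Var2 + t_before + t_after +
                     (1 + Var1 + Var2 + t_before + t_after || SiteID) +
                     offset(log(PersonDays)),
                   family = poisson, data = data,
                   control = glmerControl(optimizer = "bobyqa",
                                          optCtrl = list(maxfun = 500000)))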
What to do about very unstable mixed-effects models
Actually, in addition to the random intercepts, you are adding 4 random effects (assuming that none of the Var1, Var2, t_before, and t_after variables are factors) that are all allowed to be correlate
What to do about very unstable mixed-effects models Actually, in addition to the random intercepts, you are adding 4 random effects (assuming that none of the Var1, Var2, t_before, and t_after variables are factors) that are all allowed to be correlated. That's $5$ variance components plus $(5\times4)/2 = 10$ correlations, so $15$ var-cov parameters for the random effects alone. In addition, you have $5$ fixed effects (again, assuming no factors), so we are up to $15 + 5 = 20$ parameters in total. Unless you have a large dataset, it's no surprise that convergence is an issue. Just to make sure -- all of the variables for which you are adding random effects (i.e., Var1, Var2, t_before, and t_after) should be non-constant within SiteID (otherwise it's not possible/sensible to add random effects for these variables). You could consider assuming that all of the random effects are independent. That would be: mod <- glmer(Outcome ~ Exposure + Var1 + Var2 + t_before + t_after + (1|SiteID) + (0+Var1|SiteID + (0+Var2|SiteID) + (0+t_before|SiteID) + (0+t_after|SiteID) + offset(log(PersonDays)), family=poisson, data=data, control=glmerControl(optimizer="bobyqa", optCtrl = list(maxfun = 500000))) Then you are down to $5$ fixed effects plus $5$ variance components, so $10$ parameters in total. Whether it is reasonable to assume independence among the random effects is something you need to consider though.
What to do about very unstable mixed-effects models Actually, in addition to the random intercepts, you are adding 4 random effects (assuming that none of the Var1, Var2, t_before, and t_after variables are factors) that are all allowed to be correlate
49,351
Confidence interval for polynomial linear regression
Polynomial regression is in effect multiple linear regression: consider $X_1=X$ and $X_2=X^2$ -- then $E(Y) = \beta_1 X + \beta_2 X^2$ is the same as $E(Y) = \beta_1 X_1 + \beta_2 X_2$. As such, methods for constructing confidence intervals for parameters (and for the mean in multiple regression) carry over directly to the polynomial case. Most regression packages will compute this for you. Yes, it can be done using the formula you suggest (if the assumptions needed for the t-interval to apply hold), and the right d.f. are used for the $t$ (the residual d.f. - which in R is available from the summary output). The R function confint can be used to construct confidence intervals for parameters from a regression model. See ?confint. In the case of a confidence interval for the conditional mean, let $X$ be the matrix of predictors, whether for polynomial regression or any other multiple regression model; let the estimated variance of the mean at $x_i=(x_{1i},x_{2i},...,x_{pi})$ (the corresponding row of the design matrix, including the intercept column if present) be $v_i=\hat{\sigma}^2x_i(X'X)^{-1}x_i'$ and let $s_i=\sqrt v_i$ be the corresponding standard error. Let the upper $\alpha/2$ $t$ critical value for $n-p-1$ df be $t$. Then the pointwise confidence interval for the mean at $x_i$ is $\hat{y}_i\pm t\cdot s_i$. Also, the R function predict can be used to construct CIs for E(Y|X) - see ?predict.lm. [At least when doing polynomial regression with an intercept, it makes sense to use orthogonal polynomials, but if the spread of $X$ is large compared to the mean, and the degree is low (such as quadratic), it won't be so critical (I tend to do so anyway, because it's easier to interpret the linear and quadratic).]
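A minimal R sketch of both routes on toy data (the quadratic model and numbers are made up for illustration): confint() for the coefficients and predict(..., interval = "confidence") for the conditional mean.
set.seed(1)
x <- runif(50, 0, 10)
y <- 2 + 1.5 * x - 0.2 * x^2 + rnorm(50)
fit <- lm(y ~ poly(x, 2))          # orthogonal polynomials
# fit <- lm(y ~ x + I(x^2))        # raw polynomials give the same fitted curve
confint(fit)                       # CIs for the coefficients
newx <- data.frame(x = seq(0, 10, length.out = 100))
head(predict(fit, newdata = newx, interval = "confidence"))   # pointwise CI for E(Y|X)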
Confidence interval for polynomial linear regression
Polynomial regression is in effect multiple linear regression: consider $X_1=X$ and $X_2=X^2$ -- then $E(Y) = \beta_1 X + \beta_2 X^2$ is the same as $E(Y) = \beta_1 X_1 + \beta_2 X_2$. As such, metho
Confidence interval for polynomial linear regression Polynomial regression is in effect multiple linear regression: consider $X_1=X$ and $X_2=X^2$ -- then $E(Y) = \beta_1 X + \beta_2 X^2$ is the same as $E(Y) = \beta_1 X_1 + \beta_2 X_2$. As such, methods for constructing confidence intervals for parameters (and for the mean in multiple regression) carry over directly to the polynomial case. Most regression packages will compute this for you. Yes, it can be done using the formula you suggest (if the assumptions needed for the t-interval to apply hold), and the right d.f. are used for the $t$ (the residual d.f. - which in R is available from the summary output). The R function confint can be used to construct confidence intervals for parameters from a regression model. See ?confint. In the case of a confidence interval for the conditional mean, let $X$ be the matrix of predictors, whether for polynomial regression or any other multiple regression model; let the estimated variance of the mean at $x_i=(x_{1i},x_{2i},...,x_{pi})$ be $v_i=\hat{\sigma}^2x_i(X'X)^{-1}x_i'$ and let $s_i=\sqrt v_i$ be the corresponding standard error. Let the upper $\alpha/2$ $t$ critical value for $n-p-1$ df be $t$. Then the pointwise confidence interval for the mean at $x_i$ is $\hat{y}_i\pm t\cdot s$. Also, the R function predict can be used to construct CIs for E(Y|X) - see ?predict.lm. [At least when doing polynomial regression with an intercept, it makes sense to use orthogonal polynomials but if the spread of $X$ is large compared to the mean, and the degree is low (such as quadratic), it won't be so critical (I tend to do so anyway, because it's easier to interpret the linear and quadratic).]
Confidence interval for polynomial linear regression Polynomial regression is in effect multiple linear regression: consider $X_1=X$ and $X_2=X^2$ -- then $E(Y) = \beta_1 X + \beta_2 X^2$ is the same as $E(Y) = \beta_1 X_1 + \beta_2 X_2$. As such, metho
49,352
Confidence interval for polynomial linear regression
It is somewhat of a lengthy procedure to verify that the linear model obeys a t-distribution. To do so for the quadratic would be tedious. I do not think the above suggestion that one can simply substitute for the quadratic term is sound. There are methods involving Taylor expansions to make the conversion. See the following resource; http://fmwww.bc.edu/RePEc/esAUSM04/up.11216.1077841765.pdf
Confidence interval for polynomial linear regression
It is somewhat of a lengthy procedure to verify that the linear model obeys a t-distribution. To do so for the quadratic would be tedious. I do not think the above suggestion that one can simply subst
Confidence interval for polynomial linear regression It is somewhat of a lengthy procedure to verify that the linear model obeys a t-distribution. To do so for the quadratic would be tedious. I do not think the above suggestion that one can simply substitute for the quadratic term is sound. There are methods involving Taylor expansions to make the conversion. See the following resource; http://fmwww.bc.edu/RePEc/esAUSM04/up.11216.1077841765.pdf
Confidence interval for polynomial linear regression It is somewhat of a lengthy procedure to verify that the linear model obeys a t-distribution. To do so for the quadratic would be tedious. I do not think the above suggestion that one can simply subst
49,353
Does the birth date of professional boxers matter? Prove/disprove what an astrologer might predict
If your hypothesis was formulated a priori, then the data are quite strongly significant. Your null hypothesis is that astrology does not predict anything. This would mean that the probability of a boxer being born under an "earth" sign is $0.25$, and the same is true for the "ethereal" signs. I assume that you selected these signs a priori, before looking at the actual birth dates. You want to disprove the null hypothesis, and you are interested in deviations in one particular direction: more boxers born under earth signs, and fewer under the ethereal signs (not vice versa). This means you can conduct one-sided tests. Here is how. Consider earth signs. Under the null hypothesis, the most probable number of boxers born under these signs out of a total of $67$ is $67/4$. But for any integer number $x$ between $0$ and $67$ one can compute the probability that exactly that many boxers were born under these signs. This gives a function $p(x)$, known as the binomial probability mass function. You can then ask, what is the probability that $27$ or more boxers would be born under an earth sign? The answer is given by a sum $\sum_{x=27}^{67} p(x)$. Computing it gives $0.004$. This is known as a p-value: the probability that you could have observed your result, or an even more extreme result, under the null hypothesis. A p-value of $p=0.004$ is pretty low, and most people would call it "significant", i.e. the data seem to speak against the null hypothesis. We can do the same with the ethereal signs, arriving at a p-value of $p=0.03$ for $10$ or fewer boxers being born under them. This is also quite low. Note, however, that these two p-values are not independent: certainly, if more boxers were born under the earth signs, it would automatically mean that fewer boxers could have been born under the ethereal signs. I don't know how to compute the probability of observing $27$ or more and $10$ or fewer at the same time, but it is easy to simulate. Let's generate $1\:000\:000$ parallel worlds where the null hypothesis is true. We can then count the number of worlds where the number of earth-born boxers is $27$ or more; where the number of ethereal-born boxers is $10$ or fewer; and where both are true. This is called a Monte Carlo simulation. Dividing the counts by $1\:000\:000$, I obtain: $0.004$, $0.03$, and $0.0008$. The first two numbers are identical to the ones obtained above. The last number is the most relevant one. I would argue that $p=0.0008$ is low enough to think that maybe there is something interesting here! If one has some strong a priori reasons to doubt astrology, one would want to use a much more stringent criterion than a conventional threshold of $p<0.05$: "extraordinary claims require extraordinary evidence". But $p=0.0008$ looks quite convincing (even though it can still be a fluke). Finally, let me remind you that all of the above crucially depends on the fact that you selected your Zodiac signs before looking at the data and that you selected your boxers without looking at their birth dates. If that is not true, then the p-value can easily change to about $\sim 0.05$, as nicely shown here by @whuber.
Matlab code: N = 1e+6; counts = [0 0 0]; n1 = binornd(67, 0.25, [N 1]); n2 = binornd(67-n1, 1/3); counts(1) = length(find(n1>=27)); counts(2) = length(find(n2<=10)); counts(3) = length(find(n1>=27 & n2<=10)); display(['Monte Carlo results: ' num2str(counts/N, 2)]) display(['Analytical solution: ' num2str(1-binocdf(26,67,0.25), 2)]) display(['Analytical solution: ' num2str(binocdf(10,67,0.25), 2)]) Running this takes 4.5 seconds on my laptop and results in Monte Carlo results: 0.0043 0.034 0.00082 Analytical solution: 0.0042 Analytical solution: 0.034
Does the birth date of professional boxers matter? Prove/disprove what an astrologer might predict
If your hypothesis was formulated a priori, then the data are quite strongly significant. Your null hypothesis is that astrology does not predict anything. This would mean that the probability of a bo
Does the birth date of professional boxers matter? Prove/disprove what an astrologer might predict If your hypothesis was formulated a priori, then the data are quite strongly significant. Your null hypothesis is that astrology does not predict anything. This would mean that the probability of a boxer to be born under an "earth" sign is $0.25$, and the same is true for the "ethereal" signs. I assume that you selected these signs a priori, before looking at the actual birth dates. You want to disprove the null hypothesis, and you are interested in deviations in one particular direction: more boxers born under earth signs, and less under the ethereal signs (not vice versa). This means you can conduct one-sided tests. Here is how. Consider earth signs. Under the null hypothesis, the most probable number of boxers born under these signs out of total $67$ is $67/4$. But for any integer number $x$ between $0$ and $67$ one can compute a probability that exactly so many boxers were born under these signs. This gives a function $p(x)$, known as binomial probability density function. You can then ask, what is the probability that $27$ or more boxers would be born under an earth sign? The answer is given by a sum $\sum_{x=27}^{67} p(x)$. Computing it gives $0.004$. This is known as a p-value: a probability that you could have observed your result, or an even more extreme result, under the null hypothesis. P-value of $p=0.004$ is pretty low, and most people would call it "significant", i.e. the data seem to speak against the null hypothesis. We can do the same with the ethereal signs, arriving at the p-value of $p=0.03$ that $10$ or less boxers were born under them. This is also quite low. Note, however, that these two p-values are not independent: certainly, if more boxers were born under the earth signs, it would automatically mean that less boxers could have been born under the ethereal signs. I don't know how to compute a probability of observing $67$ or more and $10$ or less at the same time, but it is easy to simulate. Let's generate $1\:000\:000$ parallel worlds where null hypothesis is true. We can then count the number of worlds where number of earth-born boxers is $27$ or more; where number of ethereal-born boxers is $10$ or less; and where both is true. This is called a Monte Carlo simulation. Dividing the counts by $1\:000\:000$, I obtain: $0.004$, $0.03$, and $0.0008$. First two numbers are identical to the ones obtained above. The last number is the most relevant one. I would argue that $p=0.0008$ is low enough to think that maybe there is something interesting here! If one has some strong a priori reasons to doubt astrology, one would want to use a much more stringent criterium than a conventional threshold of $p<0.05$: "extraordinary claims require extraordinary evidence". But $p=0.0008$ looks quite convincing (even though can still be a fluke). Finally, let me remind you that all of the above crucially depends on the fact that you selected your Zodiac signs before looking at the data and that you selected your boxers without looking at their birth dates. If that is not true, then p-value can easily change to about $\sim 0.05$, as nicely shown here by @whuber. 
Matlab code: N = 1e+6; counts = [0 0 0]; n1 = binornd(67, 0.25, [N 1]); n2 = binornd(67-n1, 1/3); counts(1) = length(find(n1>=27)); counts(2) = length(find(n2<=10)); counts(3) = length(find(n1>=27 & n2<=10)); display(['Monte Carlo results: ' num2str(counts/N, 2)]) display(['Analytical solution: ' num2str(1-binocdf(26,67,0.25), 2)]) display(['Analytical solution: ' num2str(binocdf(10,67,0.25), 2)]) Running this takes 4.5 seconds on my laptop and results in Monte Carlo results: 0.0043 0.034 0.00082 Analytical solution: 0.0042 Analytical solution: 0.034
Does the birth date of professional boxers matter? Prove/disprove what an astrologer might predict If your hypothesis was formulated a priori, then the data are quite strongly significant. Your null hypothesis is that astrology does not predict anything. This would mean that the probability of a bo
49,354
Expected survival time from log-logistic survival model in R from survreg
Seems that this was more of a coding question and might have gotten a more prompt coding response on StackOverflow, but since no close votes have been offered I put in a belated CV response. Most R regression functions have an associated predict method and survival::survreg is no exception. You need to assign the output of the model call to a named object and then run predict: sreg.model <- .Last.value predict(sreg.model) #_________ [1] 73.54492 73.54492 69.29323 69.29323 72.67421 72.67421 [7] 72.89091 72.67421 77.59404 77.59404 76.22015 75.99355 snipped So those are the expected values for each individual combinations of covariates in the original dataset. If you wanted now predictions that might be more suitable for constructing plots, you would supply a newdata argument in the form of dataframe. See ?predict.survival for a worked example. It also shows how to plot expected survival curves and if you picked out the expected 50% survival you would have the predicted median values. pct <- 1:98/100 ptime <- predict(sreg.model, newdata=data.frame(age=10:69, id=1) , type='quantile', p=pct, se=TRUE) ptime$fit[ ,50] [1] 77.59404 77.36335 77.13335 76.90403 76.67539 76.44743 [7] 76.22015 75.99355 75.76762 75.54236 75.31777 75.09384 [13] 74.87059 74.64800 74.42606 74.20479 73.98418 73.76422 #snipped
Expected survival time from log-logistic survival model in R from survreg
Seems that this was more of a coding question and might have gotten a more prompt coding response on StackOverflow, but since no close votes have been offered I put in a belated CV response. Most R re
Expected survival time from log-logistic survival model in R from survreg Seems that this was more of a coding question and might have gotten a more prompt coding response on StackOverflow, but since no close votes have been offered I put in a belated CV response. Most R regression functions have an associated predict method and survival::survreg is no exception. You need to assign the output of the model call to a named object and then run predict: sreg.model <- .Last.value predict(sreg.model) #_________ [1] 73.54492 73.54492 69.29323 69.29323 72.67421 72.67421 [7] 72.89091 72.67421 77.59404 77.59404 76.22015 75.99355 snipped So those are the expected values for each individual combinations of covariates in the original dataset. If you wanted now predictions that might be more suitable for constructing plots, you would supply a newdata argument in the form of dataframe. See ?predict.survival for a worked example. It also shows how to plot expected survival curves and if you picked out the expected 50% survival you would have the predicted median values. pct <- 1:98/100 ptime <- predict(sreg.model, newdata=data.frame(age=10:69, id=1) , type='quantile', p=pct, se=TRUE) ptime$fit[ ,50] [1] 77.59404 77.36335 77.13335 76.90403 76.67539 76.44743 [7] 76.22015 75.99355 75.76762 75.54236 75.31777 75.09384 [13] 74.87059 74.64800 74.42606 74.20479 73.98418 73.76422 #snipped
Expected survival time from log-logistic survival model in R from survreg Seems that this was more of a coding question and might have gotten a more prompt coding response on StackOverflow, but since no close votes have been offered I put in a belated CV response. Most R re
49,355
How to prove that the permutation of the points are the minimal sufficient statistics for Cauchy distribution?
Hint: Apply the ratio test (e.g. Theorem 6.2.13 in Casella and Berger's "Statistical Inference, Second Edition"), and consider the roots of the polynomial that is the denominator of the joint Cauchy distribution.
How to prove that the permutation of the points are the minimal sufficient statistics for Cauchy dis
Hint: Apply the ratio test (e.g. Theorem 6.2.13 in Casella and Berger's "Statistical Inference, Second Edition"), and consider the roots of the polynomial that is the denominator of the joint Cauchy d
How to prove that the permutation of the points are the minimal sufficient statistics for Cauchy distribution? Hint: Apply the ratio test (e.g. Theorem 6.2.13 in Casella and Berger's "Statistical Inference, Second Edition"), and consider the roots of the polynomial that is the denominator of the joint Cauchy distribution.
How to prove that the permutation of the points are the minimal sufficient statistics for Cauchy dis Hint: Apply the ratio test (e.g. Theorem 6.2.13 in Casella and Berger's "Statistical Inference, Second Edition"), and consider the roots of the polynomial that is the denominator of the joint Cauchy d
49,356
Error bars on log of big numbers
The problem isn't as profound as it may appear. Because $$\frac{1}{n} \sum_{i=1}^{n} e^{\phi(X_i)} = e^Y\frac{1}{n} \sum_{i=1}^{n} e^{\phi(X_i)-Y} $$ for $$Y = \max_i\{\phi(X_i)\},$$ an algebraically equivalent expression is $$\mu = Y -\log n + \log \sum_{i=1}^{n} e^{\phi(X_i)-Y}. $$ In this one there will be no difficulties computing the log of the sum, which necessarily lies between $1$ and $n$ (since all the exponents are non-positive). In particular, there is no chance of overflow and any underflow will be absorbed (to high precision) in the summation, where at most $\log_2 n$ bits will be lost (and almost certainly the loss in precision will be less than around $ \frac{1}{2}\log_2 n$ bits). If you have any concern about precision losses, sum the terms in ascending order of $\phi(X_i)$. The same approach applies to computing the moments needed to obtain an estimated standard deviation. Using a normal approximation may be unwise unless you are sure that all the $\exp(\phi(X_i))$ will be well within an order of magnitude of each other (which means the $\phi(X_i)$ should all lie within an interval of around $2$ or less). Even then you might need a fairly large value of $n$. If just a few of those values dominate the others, then the averaging-out that justifies this approximation will not occur, regardless of the size of $n$.
Error bars on log of big numbers
The problem isn't as profound as it may appear. Because $$\frac{1}{n} \sum_{i=1}^{n} e^{\phi(X_i)} = e^Y\frac{1}{n} \sum_{i=1}^{n} e^{\phi(X_i)-Y} $$ for $$Y = \max_i\{\phi(X_i)\},$$ an algebraically
Error bars on log of big numbers The problem isn't as profound as it may appear. Because $$\frac{1}{n} \sum_{i=1}^{n} e^{\phi(X_i)} = e^Y\frac{1}{n} \sum_{i=1}^{n} e^{\phi(X_i)-Y} $$ for $$Y = \max_i\{\phi(X_i)\},$$ an algebraically equivalent expression is $$\mu = Y -\log n + \log \sum_{i=1}^{n} e^{\phi(X_i)-Y}. $$ In this one there will be no difficulties computing the log of the sum, which necessarily lies between $1$ and $n$ (since all the exponents are non-positive). In particular, there is no chance of overflow and any underflow will be absorbed (to high precision) in the summation, where at most $\log_2 n$ bits will be lost (and almost certainly the loss in precision will be less than around $ \frac{1}{2}\log_2 n$ bits). If you have any concern about precision losses, sum the terms in ascending order of $\phi(X_i)$. The same approach applies to computing the moments needed to obtain an estimated standard deviation. Using a normal approximation may be unwise unless you are sure that all the $\exp(\phi(X_i))$ will be well within an order of magnitude of each other (which means the $\phi(X_i)$ should all lie within an interval of around $2$ or less). Even then you might need a fairly large value of $n$. If just a few of those values dominate the others, then the averaging-out that justifies this approximation will not occur, regardless of the size of $n$.
Error bars on log of big numbers The problem isn't as profound as it may appear. Because $$\frac{1}{n} \sum_{i=1}^{n} e^{\phi(X_i)} = e^Y\frac{1}{n} \sum_{i=1}^{n} e^{\phi(X_i)-Y} $$ for $$Y = \max_i\{\phi(X_i)\},$$ an algebraically
49,357
Propensity Score can be used as a covariate in regression?
This would be the standard propensity score estimator. For a binary treatment the conditional independence assumption (CIA) states that $$ \newcommand\independent{\protect\mathpalette{\protect\independenT}{\perp}} \def\independent#1#2{\mathrel{\rlap{$#1#2$}\mkern2mu{#1#2}}} T_i\perp\hspace{-0.28cm}\perp (Y_{i0}, Y_{i1})|X_i $$ i.e. the treatment is independent of the outcomes conditional on the observed covariates. If you are looking for the average treatment effect (ATE), the estimator would be $$\widehat{ATE} = \frac{1}{n}\sum^n_{i=1}\frac{Y_i(T_i-\widehat{P}(X_i))}{\widehat{P}(X_i)(1-\widehat{P}(X_i))}$$ because you need to do some weighting of treated and non-treated observations. For instance, under the CIA $E(Y_i|T_i=1, X_i = x) = E(Y_1|X_i = x)$, so observations with $T_i = 1, X_i = x$ are representative for all observations with $X_i = x$. However, for recovering $E(Y_1)$ from $E(Y_i|T_i=1,X_i=x)$ you need to weight the observations in the cell $X_i=x$ by $P(X_i=x)$ which is their weight in the total population. In that sense, the above propensity score estimator will give you the ATE by weighting the mean outcome for the treated and non-treated in order to take their difference like $$\begin{align} \newcommand\given[1][]{\:#1\vert\:} E(Y_{i1}-Y_{i0}) &= E\left(Y_i\frac{P(T_i=1)}{P(X_i)}\given[\huge]\normalsize T_i=1\right) - E\left(Y_i\frac{1-P(T_i=1)}{1-P(X_i)}\given[\huge]\normalsize T_i=0\right) \newline &= E\left( \frac{Y_iT_i}{P(X_i)} - \frac{Y_i(1-T_i)}{1-P(X_i)} \right) \newline &= E\left( \frac{Y_i(T_i-P(X_i))}{P(X_i)(1-P(X_i)}\right) \end{align}$$ which is the population equivalent to the estimator given above. So simply including the propensity score in a regression will not do the appropriate weighting. It will also be easier when you consider a log transform of your model. When you do the probit model to get $\widehat{P}(X_i)$ you should also include some polynomials of $X_i$ in that regression in order to capture also non-linear effects of the covariates on treatment choice.
Propensity Score can be used as a covariate in regression?
This would be the standard propensity score estimator. For a binary treatment the conditional independence assumption (CIA) states that $$ \newcommand\independent{\protect\mathpalette{\protect\indepen
Propensity Score can be used as a covariate in regression? This would be the standard propensity score estimator. For a binary treatment the conditional independence assumption (CIA) states that $$ \newcommand\independent{\protect\mathpalette{\protect\independenT}{\perp}} \def\independent#1#2{\mathrel{\rlap{$#1#2$}\mkern2mu{#1#2}}} T_i\perp\hspace{-0.28cm}\perp (Y_{i0}, Y_{i1})|X_i $$ i.e. the treatment is independent of the outcomes conditional on the observed covariates. If you are looking for the average treatment effect (ATE), the estimator would be $$\widehat{ATE} = \frac{1}{n}\sum^n_{i=1}\frac{Y_i(T_i-\widehat{P}(X_i))}{\widehat{P}(X_i)(1-\widehat{P}(X_i))}$$ because you need to do some weighting of treated and non-treated observations. For instance, under the CIA $E(Y_i|T_i=1, X_i = x) = E(Y_1|X_i = x)$, so observations with $T_i = 1, X_i = x$ are representative for all observations with $X_i = x$. However, for recovering $E(Y_1)$ from $E(Y_i|T_i=1,X_i=x)$ you need to weight the observations in the cell $X_i=x$ by $P(X_i=x)$ which is their weight in the total population. In that sense, the above propensity score estimator will give you the ATE by weighting the mean outcome for the treated and non-treated in order to take their difference like $$\begin{align} \newcommand\given[1][]{\:#1\vert\:} E(Y_{i1}-Y_{i0}) &= E\left(Y_i\frac{P(T_i=1)}{P(X_i)}\given[\huge]\normalsize T_i=1\right) - E\left(Y_i\frac{1-P(T_i=1)}{1-P(X_i)}\given[\huge]\normalsize T_i=0\right) \newline &= E\left( \frac{Y_iT_i}{P(X_i)} - \frac{Y_i(1-T_i)}{1-P(X_i)} \right) \newline &= E\left( \frac{Y_i(T_i-P(X_i))}{P(X_i)(1-P(X_i)}\right) \end{align}$$ which is the population equivalent to the estimator given above. So simply including the propensity score in a regression will not do the appropriate weighting. It will also be easier when you consider a log transform of your model. When you do the probit model to get $\widehat{P}(X_i)$ you should also include some polynomials of $X_i$ in that regression in order to capture also non-linear effects of the covariates on treatment choice.
Propensity Score can be used as a covariate in regression? This would be the standard propensity score estimator. For a binary treatment the conditional independence assumption (CIA) states that $$ \newcommand\independent{\protect\mathpalette{\protect\indepen
49,358
How to implement reduced-rank regression in R?
A set of S functions for least-squares reduced-rank can be found in the StatLib archive. See the file rrr.s and this paper: Splus function for reduced-rank regression and softly shrunk reduced-rank regression. Submitted by Magne Aldrin ([email protected]). [19/Apr/99][8/Mar/00] (14k)
How to implement reduced-rank regression in R?
A set of S functions for least-squares reduced-rank can be found in the StatLib archive. See the file rrr.s and this paper: Splus function for reduced-rank regression and softly shrunk reduced-rank
How to implement reduced-rank regression in R? A set of S functions for least-squares reduced-rank can be found in the StatLib archive. See the file rrr.s and this paper: Splus function for reduced-rank regression and softly shrunk reduced-rank regression. Submitted by Magne Aldrin ([email protected]). [19/Apr/99][8/Mar/00] (14k)
How to implement reduced-rank regression in R? A set of S functions for least-squares reduced-rank can be found in the StatLib archive. See the file rrr.s and this paper: Splus function for reduced-rank regression and softly shrunk reduced-rank
49,359
How to implement reduced-rank regression in R?
There are now R packages for reduced-rank regression: rrpack, rrr.
How to implement reduced-rank regression in R?
There are now R packages for reduced-rank regression: rrpack, rrr.
How to implement reduced-rank regression in R? There are now R packages for reduced-rank regression: rrpack, rrr.
How to implement reduced-rank regression in R? There are now R packages for reduced-rank regression: rrpack, rrr.
49,360
Deriving the maximum likelihood for the parameters in linear regression
$$ 0 = \sum_{n=1}^N t_n \phi(\textbf{x}_n)^T - \textbf{w}^T \bigg(\sum_{n=1}^N \phi(\textbf{x}_n)\phi(\textbf{x}_n)^T \bigg) \tag 3 $$ $$ \textbf{w}^T \bigg(\sum_{n=1}^N \phi(\textbf{x}_n)\phi(\textbf{x}_n)^T \bigg) = \sum_{n=1}^N t_n \phi(\textbf{x}_n)^T \tag a $$ Recall the Delta Matrix is defined as: $$ \Phi =\begin{pmatrix} \phi _{ 0 }(x_{ 1 }) & \phi _{ 1 }(x_{ 1 }) & \dots & \phi _{ M-1 }(x_{ 1 }) \\ \phi _{ 0 }(x_{ 2 }) & \phi _{ 1 }(x_{ 2 }) & \dots & \phi _{ M-1 }(x_{ 2 }) \\ \vdots & \vdots & \ddots & \vdots \\ \phi _{ 0 }(x_{ N }) & \phi _{ 1 }(x_{ N }) & \dots & \phi _{ M-1 }(x_{ N }) \end{pmatrix}$$ We can show that below $$ \sum_{n=1}^N \phi(\textbf{x}_n)\phi(\textbf{x}_n)^T = \Phi^T\Phi$$ $$ \begin{equation} \begin{split} \sum_{n=1}^N \phi(\textbf{x}_n)\phi(\textbf{x}_n)^T & = \begin{bmatrix} \phi _{ 0 }(x_{ 1 }) \\ \phi _{ 1 }(x_{ 1 }) \\ \vdots \\ \phi _{ M-1 }(x_{ 1 }) \end{bmatrix}\begin{bmatrix} \phi _{ 0 }(x_{ 1 }) & \phi _{ 1 }(x_{ 1 }) & \dots & \phi _{ M-1 }(x_{ 1 }) \end{bmatrix}+\dots \\ & +\begin{bmatrix} \phi _{ 0 }(x_{ N }) \\ \phi _{ 1 }(x_{ N }) \\ \vdots \\ \phi _{ M-1 }(x_{ N }) \end{bmatrix}\begin{bmatrix} \phi _{ 0 }(x_{ N }) & \phi _{ 1 }(x_{ N }) & \dots & \phi _{ M-1 }(x_{ N }) \end{bmatrix}\\ & = \begin{bmatrix} \phi _{ 0 }(x_{ 1 })\phi _{ 0 }(x_{ 1 }) & \phi _{ 0 }(x_{ 1 })\phi _{ 1 }(x_{ 1 }) & \dots & \phi _{ 0 }(x_{ 1 })\phi _{ M-1 }(x_{ 1 }) \\ \phi _{ 1 }(x_{ 1 })\phi _{ 0 }(x_{ 1 }) & \phi _{ 1 }(x_{ 1 })\phi _{ 1 }(x_{ 1 }) & \dots & \phi _{ 1 }(x_{ 1 })\phi _{ M-1 }(x_{ 1 }) \\ \vdots & \vdots & \ddots & \vdots \\ \phi _{ M-1 }(x_{ 1 })\phi _{ 0 }(x_{ 1 }) & \phi _{ M-1 }(x_{ 1 })\phi _{ 1 }(x_{ 1 }) & \dots & \phi _{ M-1 }(x_{ 1 })\phi _{ M-1 }(x_{ 1 }) \end{bmatrix}+\dots \\ & +\begin{bmatrix} \phi _{ 0 }(x_{ N })\phi _{ 0 }(x_{ N }) & \phi _{ 0 }(x_{ N })\phi _{ 1 }(x_{ N }) & \dots & \phi _{ 0 }(x_{ N })\phi _{ M-1 }(x_{ N }) \\ \phi _{ 1 }(x_{ N })\phi _{ 0 }(x_{ N }) & \phi _{ 1 }(x_{ N })\phi _{ 1 }(x_{ N }) & \dots & \phi _{ 1 }(x_{ N })\phi _{ M-1 }(x_{ N }) \\ \vdots & \vdots & \ddots & \vdots \\ \phi _{ M-1 }(x_{ N })\phi _{ 0 }(x_{ N }) & \phi _{ M-1 }(x_{ N })\phi _{ 1 }(x_{ N }) & \dots & \phi _{ M-1 }(x_{ N })\phi _{ M-1 }(x_{ N }) \end{bmatrix} \\ & = \begin{bmatrix} \sum _{ i }^{ N }{ \phi _{ 0 }(x_{ i })\phi _{ 0 }(x_{ i }) } & \sum _{ i=1 }^{ N }{ \phi _{ 0 }(x_{ i })\phi _{ 1 }(x_{ i }) } & \dots & \sum _{ i=1 }^{ N }{ \phi _{ 0 }(x_{ i })\phi _{ M-1 }(x_{ i }) } \\ \sum _{ i=1 }^{ N }{ \phi _{ 1 }(x_{ i })\phi _{ 0 }(x_{ i }) } & \sum _{ i=1 }^{ N }{ \phi _{ 1 }(x_{ i })\phi _{ 1 }(x_{ i }) } & \dots & \sum _{ i=1 }^{ N }{ \phi _{ 1 }(x_{ i })\phi _{ M-1 }(x_{ i }) } \\ \vdots & \vdots & \ddots & \vdots \\ \sum _{ i=1 }^{ N }{ \phi _{ M-1 }(x_{ i })\phi _{ 0 }(x_{ i }) } & \sum _{ i=1 }^{ N }{ \phi _{ M-1 }(x_{ i })\phi _{ 1 }(x_{ i }) } & \dots & \sum _{ i=1 }^{ N }{ \phi _{ M-1 }(x_{ i })\phi _{ M-1 }(x_{ i }) } \end{bmatrix} \\ & = \begin{bmatrix} \phi _{ 0 }(x_{ 1 }) & \phi _{ 0 }(x_{ 2 }) & \dots & \phi _{ 0 }(x_{ N }) \\ \phi _{ 1 }(x_{ 1 }) & \phi _{ 1 }(x_{ 2 }) & \dots & \phi _{ 1 }(x_{ N }) \\ \vdots & \vdots & \ddots & \vdots \\ \phi _{ M-1 }(x_{ 1 }) & \phi _{ M-1 }(x_{ 2 }) & \dots & \phi _{ M-1 }(x_{ N }) \end{bmatrix}\begin{bmatrix} \phi _{ 0 }(x_{ 1 }) & \phi _{ 1 }(x_{ 1 }) & \dots & \phi _{ M-1 }(x_{ 1 }) \\ \phi _{ 0 }(x_{ 2 }) & \phi _{ 1 }(x_{ 2 }) & \dots & \phi _{ M-1 }(x_{ 2 }) \\ \vdots & \vdots & \ddots & \vdots \\ \phi _{ 0 }(x_{ N }) & \phi _{ 1 }(x_{ N }) & \dots & \phi _{ M-1 }(x_{ N }) 
\end{bmatrix} \\ & = \Phi^T\Phi \end{split} \end{equation}$$ Therefore we get the following. $$ \textbf{w}^T \bigg(\Phi^T\Phi \bigg) = \sum_{n=1}^N t_n \phi(\textbf{x}_n)^T \tag b $$ Next we transpose both sides $$ \bigg( \textbf{w}^T \bigg(\Phi^T\Phi \bigg) \bigg)^T = \bigg(\sum_{n=1}^N t_n \phi(\textbf{x}_n)^T \bigg)^T \tag b $$ $$ \Phi^T\Phi \textbf{w} = \sum_{n=1}^N t_n \phi(\textbf{x}_n) \tag c $$ Similarly, $$\sum_{n=1}^N t_n \phi(\textbf{x}_n) = t_{ 1 }\begin{bmatrix} \phi _{ 0 }(x_{ 1 }) \\ \phi _{ 1 }(x_{ 1 }) \\ \vdots \\ \phi _{ M-1 }(x_{ 1 }) \end{bmatrix}\quad +\quad \dots \quad +\quad t_{ N }\begin{bmatrix} \phi _{ 0 }(x_{ N }) \\ \phi _{ 1 }(x_{ N }) \\ \vdots \\ \phi _{ M-1 }(x_{ N }) \end{bmatrix}\quad =\quad \Phi ^{ T }{ t }$$ Finally we get $$ \Phi^T\Phi \textbf{w} = \Phi ^{ T }{ t } \tag 5 $$
Deriving the maximum likelihood for the parameters in linear regression
$$ 0 = \sum_{n=1}^N t_n \phi(\textbf{x}_n)^T - \textbf{w}^T \bigg(\sum_{n=1}^N \phi(\textbf{x}_n)\phi(\textbf{x}_n)^T \bigg) \tag 3 $$ $$ \textbf{w}^T \bigg(\sum_{n=1}^N \phi(\textbf{x}_n)\phi(\textbf
Deriving the maximum likelihood for the parameters in linear regression $$ 0 = \sum_{n=1}^N t_n \phi(\textbf{x}_n)^T - \textbf{w}^T \bigg(\sum_{n=1}^N \phi(\textbf{x}_n)\phi(\textbf{x}_n)^T \bigg) \tag 3 $$ $$ \textbf{w}^T \bigg(\sum_{n=1}^N \phi(\textbf{x}_n)\phi(\textbf{x}_n)^T \bigg) = \sum_{n=1}^N t_n \phi(\textbf{x}_n)^T \tag a $$ Recall the Delta Matrix is defined as: $$ \Phi =\begin{pmatrix} \phi _{ 0 }(x_{ 1 }) & \phi _{ 1 }(x_{ 1 }) & \dots & \phi _{ M-1 }(x_{ 1 }) \\ \phi _{ 0 }(x_{ 2 }) & \phi _{ 1 }(x_{ 2 }) & \dots & \phi _{ M-1 }(x_{ 2 }) \\ \vdots & \vdots & \ddots & \vdots \\ \phi _{ 0 }(x_{ N }) & \phi _{ 1 }(x_{ N }) & \dots & \phi _{ M-1 }(x_{ N }) \end{pmatrix}$$ We can show that below $$ \sum_{n=1}^N \phi(\textbf{x}_n)\phi(\textbf{x}_n)^T = \Phi^T\Phi$$ $$ \begin{equation} \begin{split} \sum_{n=1}^N \phi(\textbf{x}_n)\phi(\textbf{x}_n)^T & = \begin{bmatrix} \phi _{ 0 }(x_{ 1 }) \\ \phi _{ 1 }(x_{ 1 }) \\ \vdots \\ \phi _{ M-1 }(x_{ 1 }) \end{bmatrix}\begin{bmatrix} \phi _{ 0 }(x_{ 1 }) & \phi _{ 1 }(x_{ 1 }) & \dots & \phi _{ M-1 }(x_{ 1 }) \end{bmatrix}+\dots \\ & +\begin{bmatrix} \phi _{ 0 }(x_{ N }) \\ \phi _{ 1 }(x_{ N }) \\ \vdots \\ \phi _{ M-1 }(x_{ N }) \end{bmatrix}\begin{bmatrix} \phi _{ 0 }(x_{ N }) & \phi _{ 1 }(x_{ N }) & \dots & \phi _{ M-1 }(x_{ N }) \end{bmatrix}\\ & = \begin{bmatrix} \phi _{ 0 }(x_{ 1 })\phi _{ 0 }(x_{ 1 }) & \phi _{ 0 }(x_{ 1 })\phi _{ 1 }(x_{ 1 }) & \dots & \phi _{ 0 }(x_{ 1 })\phi _{ M-1 }(x_{ 1 }) \\ \phi _{ 1 }(x_{ 1 })\phi _{ 0 }(x_{ 1 }) & \phi _{ 1 }(x_{ 1 })\phi _{ 1 }(x_{ 1 }) & \dots & \phi _{ 1 }(x_{ 1 })\phi _{ M-1 }(x_{ 1 }) \\ \vdots & \vdots & \ddots & \vdots \\ \phi _{ M-1 }(x_{ 1 })\phi _{ 0 }(x_{ 1 }) & \phi _{ M-1 }(x_{ 1 })\phi _{ 1 }(x_{ 1 }) & \dots & \phi _{ M-1 }(x_{ 1 })\phi _{ M-1 }(x_{ 1 }) \end{bmatrix}+\dots \\ & +\begin{bmatrix} \phi _{ 0 }(x_{ N })\phi _{ 0 }(x_{ N }) & \phi _{ 0 }(x_{ N })\phi _{ 1 }(x_{ N }) & \dots & \phi _{ 0 }(x_{ N })\phi _{ M-1 }(x_{ N }) \\ \phi _{ 1 }(x_{ N })\phi _{ 0 }(x_{ N }) & \phi _{ 1 }(x_{ N })\phi _{ 1 }(x_{ N }) & \dots & \phi _{ 1 }(x_{ N })\phi _{ M-1 }(x_{ N }) \\ \vdots & \vdots & \ddots & \vdots \\ \phi _{ M-1 }(x_{ N })\phi _{ 0 }(x_{ N }) & \phi _{ M-1 }(x_{ N })\phi _{ 1 }(x_{ N }) & \dots & \phi _{ M-1 }(x_{ N })\phi _{ M-1 }(x_{ N }) \end{bmatrix} \\ & = \begin{bmatrix} \sum _{ i }^{ N }{ \phi _{ 0 }(x_{ i })\phi _{ 0 }(x_{ i }) } & \sum _{ i=1 }^{ N }{ \phi _{ 0 }(x_{ i })\phi _{ 1 }(x_{ i }) } & \dots & \sum _{ i=1 }^{ N }{ \phi _{ 0 }(x_{ i })\phi _{ M-1 }(x_{ i }) } \\ \sum _{ i=1 }^{ N }{ \phi _{ 1 }(x_{ i })\phi _{ 0 }(x_{ i }) } & \sum _{ i=1 }^{ N }{ \phi _{ 1 }(x_{ i })\phi _{ 1 }(x_{ i }) } & \dots & \sum _{ i=1 }^{ N }{ \phi _{ 1 }(x_{ i })\phi _{ M-1 }(x_{ i }) } \\ \vdots & \vdots & \ddots & \vdots \\ \sum _{ i=1 }^{ N }{ \phi _{ M-1 }(x_{ i })\phi _{ 0 }(x_{ i }) } & \sum _{ i=1 }^{ N }{ \phi _{ M-1 }(x_{ i })\phi _{ 1 }(x_{ i }) } & \dots & \sum _{ i=1 }^{ N }{ \phi _{ M-1 }(x_{ i })\phi _{ M-1 }(x_{ i }) } \end{bmatrix} \\ & = \begin{bmatrix} \phi _{ 0 }(x_{ 1 }) & \phi _{ 0 }(x_{ 2 }) & \dots & \phi _{ 0 }(x_{ N }) \\ \phi _{ 1 }(x_{ 1 }) & \phi _{ 1 }(x_{ 2 }) & \dots & \phi _{ 1 }(x_{ N }) \\ \vdots & \vdots & \ddots & \vdots \\ \phi _{ M-1 }(x_{ 1 }) & \phi _{ M-1 }(x_{ 2 }) & \dots & \phi _{ M-1 }(x_{ N }) \end{bmatrix}\begin{bmatrix} \phi _{ 0 }(x_{ 1 }) & \phi _{ 1 }(x_{ 1 }) & \dots & \phi _{ M-1 }(x_{ 1 }) \\ \phi _{ 0 }(x_{ 2 }) & \phi _{ 1 }(x_{ 2 }) & \dots & \phi _{ M-1 }(x_{ 2 }) \\ \vdots & \vdots & \ddots & \vdots \\ \phi _{ 0 
}(x_{ N }) & \phi _{ 1 }(x_{ N }) & \dots & \phi _{ M-1 }(x_{ N }) \end{bmatrix} \\ & = \Phi^T\Phi \end{split} \end{equation}$$ Therefore we get the following. $$ \textbf{w}^T \bigg(\Phi^T\Phi \bigg) = \sum_{n=1}^N t_n \phi(\textbf{x}_n)^T \tag b $$ Next we transpose both sides $$ \bigg( \textbf{w}^T \bigg(\Phi^T\Phi \bigg) \bigg)^T = \bigg(\sum_{n=1}^N t_n \phi(\textbf{x}_n)^T \bigg)^T \tag b $$ $$ \Phi^T\Phi \textbf{w} = \sum_{n=1}^N t_n \phi(\textbf{x}_n) \tag c $$ Similarly, $$\sum_{n=1}^N t_n \phi(\textbf{x}_n) = t_{ 1 }\begin{bmatrix} \phi _{ 0 }(x_{ 1 }) \\ \phi _{ 1 }(x_{ 1 }) \\ \vdots \\ \phi _{ M-1 }(x_{ 1 }) \end{bmatrix}\quad +\quad \dots \quad +\quad t_{ N }\begin{bmatrix} \phi _{ 0 }(x_{ N }) \\ \phi _{ 1 }(x_{ N }) \\ \vdots \\ \phi _{ M-1 }(x_{ N }) \end{bmatrix}\quad =\quad \Phi ^{ T }{ t }$$ Finally we get $$ \Phi^T\Phi \textbf{w} = \Phi ^{ T }{ t } \tag 5 $$
Deriving the maximum likelihood for the parameters in linear regression $$ 0 = \sum_{n=1}^N t_n \phi(\textbf{x}_n)^T - \textbf{w}^T \bigg(\sum_{n=1}^N \phi(\textbf{x}_n)\phi(\textbf{x}_n)^T \bigg) \tag 3 $$ $$ \textbf{w}^T \bigg(\sum_{n=1}^N \phi(\textbf{x}_n)\phi(\textbf
49,361
Unclear area in Convolutional Neural Net
I've stumbled upon this before, and it is is generally poorly explained. It's best to think of images as three dimensional, with a width, a height and a number of channels $w \times h \times c$. An input image for instance might have three channels, one for each color. The next layer might have 50 different filters, so you can think of it again as a three dimensional structure, with 50 channels. Now how do we get from one to the other with convolutional filters? Well, as you've intuited, the filters are actually three dimensional, but they only convolve in the two dimensional pixel plane (one way to think about it is that they're as tall as the number of channels, so they can't move in that direction). Pooling is a different operation, it's a group non linearity that is meant to reduce the size of the layer. Max pooling has the property that it gives you some amount of translation invariance.
Unclear area in Convolutional Neural Net
I've stumbled upon this before, and it is is generally poorly explained. It's best to think of images as three dimensional, with a width, a height and a number of channels $w \times h \times c$. An in
Unclear area in Convolutional Neural Net I've stumbled upon this before, and it is is generally poorly explained. It's best to think of images as three dimensional, with a width, a height and a number of channels $w \times h \times c$. An input image for instance might have three channels, one for each color. The next layer might have 50 different filters, so you can think of it again as a three dimensional structure, with 50 channels. Now how do we get from one to the other with convolutional filters? Well, as you've intuited, the filters are actually three dimensional, but they only convolve in the two dimensional pixel plane (one way to think about it is that they're as tall as the number of channels, so they can't move in that direction). Pooling is a different operation, it's a group non linearity that is meant to reduce the size of the layer. Max pooling has the property that it gives you some amount of translation invariance.
Unclear area in Convolutional Neural Net I've stumbled upon this before, and it is is generally poorly explained. It's best to think of images as three dimensional, with a width, a height and a number of channels $w \times h \times c$. An in
49,362
Why is my R density plot a bell curve when all datapoints are 0?
You should explain what the intuition is that you have that the behavior runs counter to - it would make it easier to focus the explanation to address that. A kernel density estimate is the convolution of the sample probability function ($n$ point masses of size $\frac{1}{n}$) and the kernel function (itself, by default, a normal density). The result in the default case is a mixture of normal (Gaussian) densities, each with center at the data values, each with standard deviation $h$ (the bandwidth of the kernel), and weight $\frac{1}{n}$. When all the data are coincident, the resulting mixture density is a sum of $n$ weighted densities, all with the same mean and standard deviation ... which is just the kernel itself, centered at that data value. The difference in behavior you see might relate to the trim argument in ggplot2::stat_density. When the range of values is exactly zero, my guess is that it's setting trim to FALSE (or at least something other than TRUE), but when it's even a little larger than 0 it's at the default (TRUE). You'd need to look into the source to double check, but that would be my guess. If that's what's happening, you should be able to modify that behavior.
Why is my R density plot a bell curve when all datapoints are 0?
You should explain what the intuition is that you have that the behavior runs counter to - it would make it easier to focus the explanation to address that. A kernel density estimate is the convolutio
Why is my R density plot a bell curve when all datapoints are 0? You should explain what the intuition is that you have that the behavior runs counter to - it would make it easier to focus the explanation to address that. A kernel density estimate is the convolution of the sample probability function ($n$ point masses of size $\frac{1}{n}$) and the kernel function (itself, by default, a normal density). The result in the default case is a mixture of normal (Gaussian) densities, each with center at the data values, each with standard deviation $h$ (the bandwidth of the kernel), and weight $\frac{1}{n}$. When all the data are coincident, the resulting mixture density is a sum of $n$ weighted densities, all with the same mean and standard deviation ... which is just the kernel itself, centered at that data value. The difference in behavior you see might relate to the trim argument in ggplot2::stat_density. When the range of values is exactly zero, my guess is that it's setting trim to FALSE (or at least something other than TRUE), but when it's even a little larger than 0 it's at the default (TRUE). You'd need to look into the source to double check, but that would be my guess. If that's what's happening, you should be able to modify that behavior.
Why is my R density plot a bell curve when all datapoints are 0? You should explain what the intuition is that you have that the behavior runs counter to - it would make it easier to focus the explanation to address that. A kernel density estimate is the convolutio
49,363
Modeling an I(1) process with a cointegrating I(1) and an I(0) variable
It is true that in the original papers on co-integration, all variables involved were assumed to be individually $I(1)$, and this is usually the case presented and used. But this is not restrictive. For example, Lutkepohl (1993), defines co-integration as follows: A K-dimensional process $\mathbf z_t$ is integrated of order $d$ if $\Delta^d \mathbf z_t$ is stable and $\Delta^{d-1} \mathbf z_t$ is not. ("stable" as is defined in the context of time-series analysis). In p. 351-354 , he presents co-integration for systems containing variables of different order of integration, but does not pursue the issue in depth. Hayashi (2000) ch. 10 develops the case more fully. There are at least two theoretical variants here, the one called "polynomial co-integration", the other "mutli-cointegration". I don't feel proficient enough on the issue to respond to the specific example you give in your question, but I hope these will be useful leads for you to search the literature.
Modeling an I(1) process with a cointegrating I(1) and an I(0) variable
It is true that in the original papers on co-integration, all variables involved were assumed to be individually $I(1)$, and this is usually the case presented and used. But this is not restrictive. F
Modeling an I(1) process with a cointegrating I(1) and an I(0) variable It is true that in the original papers on co-integration, all variables involved were assumed to be individually $I(1)$, and this is usually the case presented and used. But this is not restrictive. For example, Lutkepohl (1993), defines co-integration as follows: A K-dimensional process $\mathbf z_t$ is integrated of order $d$ if $\Delta^d \mathbf z_t$ is stable and $\Delta^{d-1} \mathbf z_t$ is not. ("stable" as is defined in the context of time-series analysis). In p. 351-354 , he presents co-integration for systems containing variables of different order of integration, but does not pursue the issue in depth. Hayashi (2000) ch. 10 develops the case more fully. There are at least two theoretical variants here, the one called "polynomial co-integration", the other "mutli-cointegration". I don't feel proficient enough on the issue to respond to the specific example you give in your question, but I hope these will be useful leads for you to search the literature.
Modeling an I(1) process with a cointegrating I(1) and an I(0) variable It is true that in the original papers on co-integration, all variables involved were assumed to be individually $I(1)$, and this is usually the case presented and used. But this is not restrictive. F
49,364
Modeling an I(1) process with a cointegrating I(1) and an I(0) variable
+1 for the question! It can be shown that if $y_{t}$ and $x_{1t}$ are cointegrated then the OLS estimator of that equation will be super consistent with a rate of convergance of $T^{-2} $ compared to the stationary $T^{-1} $ case. Notice further, that even if you have misspecified stationary terms terms in your regression or haven't captured all the dynamics of the true DGP the estimator will still be consistent since the stochastic trends will dominate asymptotically for $T\rightarrow\infty $ so any misspecification of stationary terms will not affect the estimator. The problem with running that regression is that the estimator can be severely biased in small samples if the stationary terms are misspecified and you cannot test hypothesis on the parameters since the estimator is not normal and the distribution depends on unknown parameters. In short: you will get super consistent estimates but you cannot test parameter significance nor say anything else about them. An easy way to test if the static relationship, (1) below, is stationary or not would be to test your estimated residuals for a unit root using an ADF-test. Remember to not include a constant since we assume a mean zero process and remember that the distribution is not the regular Dickey-Fuller distribtuion but depends on the no. of parameters in the static regression, (1) below. I guess you have heard of the Engle-Granger 2-step approach and your problem seems very similar to that approach. 1) Estimate the static regression: $y_{t}=\beta_{0}+\beta_{1}x_{1t}+\beta_{2}x_{2t}+\varepsilon_{t} $ (1), where $y_{t}\sim I\left(1\right) $, $x_{1t}\sim I\left(1\right) $ and $x_{2t}\sim I\left(0\right) $ and test the estimated residuals for a unit root. If the variables cointegrate then the residuals should be stationary. Note that the null hypothesis of a unit root corresponds to the null hypothesis of no-cointegration. 2) Run the dynamic regression: ${\Delta y_{t}=\beta_{0}+\beta_{1}\Delta x_{1t}+\beta_{2}\Delta x_{1t-1}+\beta_{3}\Delta x_{2t1}+\beta_{4}\Delta x_{2t-1}+\beta_{5}\Delta x_{3t1}+\beta_{6}\Delta x_{3t-1}+ecm_{t-1}}+u_{t} $ (2), where $ecm_{t-1}=\hat{\varepsilon}_{t-1} $, i.e. the estimated residuals from the static regression. Note that I included another stationary variable $x_{3t} $ to show that in this second step you can include whatever stationary variables you will. $ecm_{t-1} $ will show if and by how much $y_{t} $ error corrects to $x_{1t} $. Notice further that the t-values in (2) follow a normal distribution and regular inference applies. Some notes to think about: A) If $y_{t} $ and $x_{1t} $ cointegrate and you want to test that there is no need to include $x_{2t} $ in the static regression, (1), since you can include it in, (2), instead. B) Notice that the biased parameters which arise in small samples in the static regression, (1) are not a problem for an ECM, which is a rewritten ADL model, since it should be dynamically complete (this can be shown by MC simulations). C) Why would you want to estimate the static regression (1) on its own? You do not know if the variables cointegrate (if they do not cointegrate you'll get a spurious regression) and inference is invalid, hence we cannot test any hypothesis on our estimated parameters. In short: Estimate your ECM or use the Engle-Grange approach unless you want to expand your analysis to CVAR's.
Modeling an I(1) process with a cointegrating I(1) and an I(0) variable
+1 for the question! It can be shown that if $y_{t}$ and $x_{1t}$ are cointegrated then the OLS estimator of that equation will be super consistent with a rate of convergance of $T^{-2} $ compare
Modeling an I(1) process with a cointegrating I(1) and an I(0) variable +1 for the question! It can be shown that if $y_{t}$ and $x_{1t}$ are cointegrated then the OLS estimator of that equation will be super consistent with a rate of convergance of $T^{-2} $ compared to the stationary $T^{-1} $ case. Notice further, that even if you have misspecified stationary terms terms in your regression or haven't captured all the dynamics of the true DGP the estimator will still be consistent since the stochastic trends will dominate asymptotically for $T\rightarrow\infty $ so any misspecification of stationary terms will not affect the estimator. The problem with running that regression is that the estimator can be severely biased in small samples if the stationary terms are misspecified and you cannot test hypothesis on the parameters since the estimator is not normal and the distribution depends on unknown parameters. In short: you will get super consistent estimates but you cannot test parameter significance nor say anything else about them. An easy way to test if the static relationship, (1) below, is stationary or not would be to test your estimated residuals for a unit root using an ADF-test. Remember to not include a constant since we assume a mean zero process and remember that the distribution is not the regular Dickey-Fuller distribtuion but depends on the no. of parameters in the static regression, (1) below. I guess you have heard of the Engle-Granger 2-step approach and your problem seems very similar to that approach. 1) Estimate the static regression: $y_{t}=\beta_{0}+\beta_{1}x_{1t}+\beta_{2}x_{2t}+\varepsilon_{t} $ (1), where $y_{t}\sim I\left(1\right) $, $x_{1t}\sim I\left(1\right) $ and $x_{2t}\sim I\left(0\right) $ and test the estimated residuals for a unit root. If the variables cointegrate then the residuals should be stationary. Note that the null hypothesis of a unit root corresponds to the null hypothesis of no-cointegration. 2) Run the dynamic regression: ${\Delta y_{t}=\beta_{0}+\beta_{1}\Delta x_{1t}+\beta_{2}\Delta x_{1t-1}+\beta_{3}\Delta x_{2t1}+\beta_{4}\Delta x_{2t-1}+\beta_{5}\Delta x_{3t1}+\beta_{6}\Delta x_{3t-1}+ecm_{t-1}}+u_{t} $ (2), where $ecm_{t-1}=\hat{\varepsilon}_{t-1} $, i.e. the estimated residuals from the static regression. Note that I included another stationary variable $x_{3t} $ to show that in this second step you can include whatever stationary variables you will. $ecm_{t-1} $ will show if and by how much $y_{t} $ error corrects to $x_{1t} $. Notice further that the t-values in (2) follow a normal distribution and regular inference applies. Some notes to think about: A) If $y_{t} $ and $x_{1t} $ cointegrate and you want to test that there is no need to include $x_{2t} $ in the static regression, (1), since you can include it in, (2), instead. B) Notice that the biased parameters which arise in small samples in the static regression, (1) are not a problem for an ECM, which is a rewritten ADL model, since it should be dynamically complete (this can be shown by MC simulations). C) Why would you want to estimate the static regression (1) on its own? You do not know if the variables cointegrate (if they do not cointegrate you'll get a spurious regression) and inference is invalid, hence we cannot test any hypothesis on our estimated parameters. In short: Estimate your ECM or use the Engle-Grange approach unless you want to expand your analysis to CVAR's.
Modeling an I(1) process with a cointegrating I(1) and an I(0) variable +1 for the question! It can be shown that if $y_{t}$ and $x_{1t}$ are cointegrated then the OLS estimator of that equation will be super consistent with a rate of convergance of $T^{-2} $ compare
49,365
Why are there two forms for the Mann-Whitney U test statistic?
There are actually more than two forms of the Mann-Whitney-Wilcoxon test. Given no ties (which I will assume throughout), the two forms you have there correspond to (i) the number of times an observation in sample 1 exceeds an observation from sample 2, and (ii) the number of times an observation in sample 2 exceeds an observation from sample 1. We'd do best to distinguish those two definitions. Let's call them $U_{1>2}$ and $U_{2>1}$. Note that $U_{1>2}+U_{2>1} = n_1 n_2$, the number of pairwise comparisons between sample 1 and sample 2. $R_1$ is the sum of ranks in sample 1, one of the two common forms most associated with Wilcoxon (mentioned in the original paper) -- sometimes called W or occasionally U or T. The other form associated with Wilcoxon (in the first tables of the statistic, published shortly after) is $W=R_1- \frac{n_1(n_1 + 1)}{2}$, the sum of ranks in sample 1 minus the smallest possible value for that sum. This form is equivalent to what I called $U_{1>2}$. (More forms still are possible.) These forms are all linearly related. As a result they yield equivalent tests (they should reject or fail to reject the null for the same samples under the same conditions).
Why are there two forms for the Mann-Whitney U test statistic?
There are actually more than two forms of the Mann-Whitney-Wilcoxon test. Given no ties (which I will assume throughout), the two forms you have there correspond to (i) the number of times an observa
Why are there two forms for the Mann-Whitney U test statistic? There are actually more than two forms of the Mann-Whitney-Wilcoxon test. Given no ties (which I will assume throughout), the two forms you have there correspond to (i) the number of times an observation in sample 1 exceeds an observation from sample 2, and (ii) the number of times an observation in sample 2 exceeds an observation from sample 1. We'd do best to distinguish those two definitions. Let's call them $U_{1>2}$ and $U_{2>1}$. Note that $U_{1>2}+U_{2>1} = n_1 n_2$, the number of pairwise comparisons between sample 1 and sample 2. $R_1$ is the sum of ranks in sample 1, one of the two common forms most associated with Wilcoxon (mentioned in the original paper) -- sometimes called W or occasionally U or T. The other form associated with Wilcoxon (in the first tables of the statistic, published shortly after) is $W=R_1- \frac{n_1(n_1 + 1)}{2}$, the sum of ranks in sample 1 minus the smallest possible value for that sum. This form is equivalent to what I called $U_{1>2}$. (More forms still are possible.) These forms are all linearly related. As a result they yield equivalent tests (they should reject or fail to reject the null for the same samples under the same conditions).
Why are there two forms for the Mann-Whitney U test statistic? There are actually more than two forms of the Mann-Whitney-Wilcoxon test. Given no ties (which I will assume throughout), the two forms you have there correspond to (i) the number of times an observa
49,366
Why AUC-PR increases when the number of positives increase?
Remember that PR curves visualize a model's performance over the entire operating range, not just where its classification threshold happens to be. Your reasoning in the final paragraph seems to be based on the model's classification of test instances, rather than their ranking. PR curves are not computed based on predicted labels (positive/negative) but rather from the model's ranking of test instances based on decision values. Decision values can generally be considered in $\mathbb{R}$, though for some models these are probabilities (for instance logistic regression). Non-random model For a real model, when more positives are added to the data set from which PR curves are computed, the observed precision of the model for a given level of recall can never go down$^*$. This follows readily from the way precision is calculated ($\frac{TP}{TP+FP}$, adding positives increases $TP$ and leaves $FP$ unchanged so the precision increases). In other words, the new PR curve (computed with a higher fraction of positives) necessarily dominates the original one for the same model. $^*$assuming the added positives are distributed in the same way as the 'original' ones, so the model's recall as a function of its decision value remains the same. Random model As Anony-Mousse stated, a random result will have expected precision equal to the fraction of positives in the data set for any recall. The recall of a random model is directly linked to the fraction of data it assigns to be positive $f_{pos}$, particularly: $$recall = \frac{n_{pos}^{(pred)}}{n_{pos}^{(truth)}} = \frac{f_{pos}\times n_{total}}{n_{pos}^{(truth)}},$$ where $n_{pos}^{(pred)}$ is some fraction of the total data ($f_{pos}$ is based on the decision threshold). The expected precision of a random model is always equal to the fraction of positives in the data set. This follows directly from the definition of precision, namely the fraction of true positives in all positive predictions. In a random model the predictions are unrelated to the true label, so the expected value of its precision is by definition equal to the fraction of positives in the data. As such, the expected PR curve of a random model is essentially a horizontal line. This line spans the entire recall range (e.g. width 1) and has height equal to the expected precision (equal to the fraction of positives) so the associated AUC is equal to the fraction of positives. Conclusion As a result, increasing the fraction of positives inflates the area under the PR curve. You cannot compare PR curves (nor their AUC) computed at different levels of class balance. ROC curves do not exhibit such behaviour.
Why AUC-PR increases when the number of positives increase?
Remember that PR curves visualize a model's performance over the entire operating range, not just where its classification threshold happens to be. Your reasoning in the final paragraph seems to be ba
Why AUC-PR increases when the number of positives increase? Remember that PR curves visualize a model's performance over the entire operating range, not just where its classification threshold happens to be. Your reasoning in the final paragraph seems to be based on the model's classification of test instances, rather than their ranking. PR curves are not computed based on predicted labels (positive/negative) but rather from the model's ranking of test instances based on decision values. Decision values can generally be considered in $\mathbb{R}$, though for some models these are probabilities (for instance logistic regression). Non-random model For a real model, when more positives are added to the data set from which PR curves are computed, the observed precision of the model for a given level of recall can never go down$^*$. This follows readily from the way precision is calculated ($\frac{TP}{TP+FP}$, adding positives increases $TP$ and leaves $FP$ unchanged so the precision increases). In other words, the new PR curve (computed with a higher fraction of positives) necessarily dominates the original one for the same model. $^*$assuming the added positives are distributed in the same way as the 'original' ones, so the model's recall as a function of its decision value remains the same. Random model As Anony-Mousse stated, a random result will have expected precision equal to the fraction of positives in the data set for any recall. The recall of a random model is directly linked to the fraction of data it assigns to be positive $f_{pos}$, particularly: $$recall = \frac{n_{pos}^{(pred)}}{n_{pos}^{(truth)}} = \frac{f_{pos}\times n_{total}}{n_{pos}^{(truth)}},$$ where $n_{pos}^{(pred)}$ is some fraction of the total data ($f_{pos}$ is based on the decision threshold). The expected precision of a random model is always equal to the fraction of positives in the data set. This follows directly from the definition of precision, namely the fraction of true positives in all positive predictions. In a random model the predictions are unrelated to the true label, so the expected value of its precision is by definition equal to the fraction of positives in the data. As such, the expected PR curve of a random model is essentially a horizontal line. This line spans the entire recall range (e.g. width 1) and has height equal to the expected precision (equal to the fraction of positives) so the associated AUC is equal to the fraction of positives. Conclusion As a result, increasing the fraction of positives inflates the area under the PR curve. You cannot compare PR curves (nor their AUC) computed at different levels of class balance. ROC curves do not exhibit such behaviour.
Why AUC-PR increases when the number of positives increase? Remember that PR curves visualize a model's performance over the entire operating range, not just where its classification threshold happens to be. Your reasoning in the final paragraph seems to be ba
49,367
Why AUC-PR increases when the number of positives increase?
precision $$ prec = \frac{TP}{TP+FP} $$ and in case when we add positive values prec is not deceasing (so we find at least previous TP number of ones. When we find them more, the prec is increasing, like: $$ \frac{TP+1}{TP+1+FP} \geqslant \frac{TP}{TP+FP} $$ (is strictly greater if FP > 0) FP is not changing because we don't add zeros. recall $$ recall = \frac{TP}{TP+FN} $$ when we are adding ones, statistically some of them will be found as TP and some undiscovered as FN. The ratio of this classification probability is proportional to ratio of areas TP and FN, so if we add k ones then: $$ k\frac{TP}{TP+FN} $$ are classified as TP and $$ k\frac{FN}{TP+FN} $$ are classified as FN So let compute new recall $$ \frac{k\frac{TP}{TP+FN} + TP}{k\frac{TP}{TP+FN}+TP+k\frac{FN}{TP+FN}+FN} $$ $$ \frac{k\frac{TP}{TP+FN} + TP\frac{TP+FN}{TP+FN}}{k+TP+FN} $$ $$ \frac{TP(\frac{k}{TP+FN} + \frac{TP+FN}{TP+FN})}{k+TP+FN} $$ $$ \frac{TP(\frac{k+TP+FN}{TP+FN})}{k+TP+FN} $$ $$ \frac{TP}{TP+FN} $$ so recall does not change
Why AUC-PR increases when the number of positives increase?
precision $$ prec = \frac{TP}{TP+FP} $$ and in case when we add positive values prec is not deceasing (so we find at least previous TP number of ones. When we find them more, the prec is increasing, l
Why AUC-PR increases when the number of positives increase? precision $$ prec = \frac{TP}{TP+FP} $$ and in case when we add positive values prec is not deceasing (so we find at least previous TP number of ones. When we find them more, the prec is increasing, like: $$ \frac{TP+1}{TP+1+FP} \geqslant \frac{TP}{TP+FP} $$ (is strictly greater if FP > 0) FP is not changing because we don't add zeros. recall $$ recall = \frac{TP}{TP+FN} $$ when we are adding ones, statistically some of them will be found as TP and some undiscovered as FN. The ratio of this classification probability is proportional to ratio of areas TP and FN, so if we add k ones then: $$ k\frac{TP}{TP+FN} $$ are classified as TP and $$ k\frac{FN}{TP+FN} $$ are classified as FN So let compute new recall $$ \frac{k\frac{TP}{TP+FN} + TP}{k\frac{TP}{TP+FN}+TP+k\frac{FN}{TP+FN}+FN} $$ $$ \frac{k\frac{TP}{TP+FN} + TP\frac{TP+FN}{TP+FN}}{k+TP+FN} $$ $$ \frac{TP(\frac{k}{TP+FN} + \frac{TP+FN}{TP+FN})}{k+TP+FN} $$ $$ \frac{TP(\frac{k+TP+FN}{TP+FN})}{k+TP+FN} $$ $$ \frac{TP}{TP+FN} $$ so recall does not change
Why AUC-PR increases when the number of positives increase? precision $$ prec = \frac{TP}{TP+FP} $$ and in case when we add positive values prec is not deceasing (so we find at least previous TP number of ones. When we find them more, the prec is increasing, l
49,368
Weighted cases in a cluster analysis for cases in SPSS
Using K-means after hierarchical clustering or hierarchical clustering after K-means may be sometimes a sound trick on its own - not because of weighting. Frequency weighting of objects when clustering objects Now about weighting. To do hierarchical cluster analysis of cases with frequency weights attached to the cases (objects to cluster): Approach 1, general. Propagate objects. Multiply the weights by a constant so that the smaller individual weight becomes about 1, and then round the weights; and propagate cases according to those frequencies. For example, if you have 4 groups of cases with corresponding case weights 0.55 0.23 1.98 1.14, multiplying by 4.35 yields 2.39 1.00 8.61 4.96 and then rounding to frequencies 2 1 9 5. Propagate each case of the corresponding group this number of times. In SPSS a syntax to propagate cases is as follows. loop #i= 1 to FREQ. /*FREQ is that recalculated weighting variable xsave outfile= 'FILE.SAV' /keep= VARS. /*FILE.SAV is the dataset you save to hard disk: path and filename /*Optional /keep= VARS is the list of variables you want to save with the file /*In your case that will be of course all the features you cluster by end loop. exec. If you need 10 times greater precision in compliance to the original fractional weights, multiply by 10 before rounding, 23.9 10.0 86.1 49.6 so that the frequencies of propagation will be 24 10 86 50. However, duplicating cases these big number of times may make the dataset too big for a single hierarchical cluster analysis. So don't be too hard with precision. On the other hand, propagated big dataset you can cluster-analyze by randomly selected subsamples - several times. Then you could combine the results (several approaches are possible). [Actually, to perform such resampling with replacement in SPSS you don't need to propagate the data first. However, I will stop and won't go in details of syntax to do it.] After propagation of cases and before clustering, you may want to add tiny random noise to quantitative features - to untie identical cases. It will make results of clustering less dependent on the order of cases in the dataset. do repeat x= var1 var2 var3. /*list of quantitative variables compute x= x+rv.uniform(0,0.00001). /*a noise value between 0 and, say, 0.00001 end repeat. exec. If you are working with already built distance matrix (rather than the dataset) then propagate its rows/columns times you need. In SPSS, you may use handy matrix function !propag() for that - see my web-page. Approach 2. Use resumed agglomeration. Some implementations of hierarchical clustering (an example is my own SPSS macro for hierarchical clustering found on my web-page) allow to interrupt agglomeration and save the currently left distance matrix; that matrix has additional column with within-cluster frequencies so far. The matrix can be used as input to "resume" the clustering. Now, the fact is that some methods of agglomeration, namely, single (nearest neighbour), complete (farthest neighbour), between-group average (UPGMA), centroid and median, do not notice or make difference about what is the within-cluster density when they merge two clusters. Therefore, for these methods resuming agglomeration is equivalent to doing agglomeration with initial frequency weights attached. So, if your program has the option to interrupt/resume agglomeration you may use it, under the above methods, to "simulate" weighted input succesfully, and you don't need to propagate rows/columns of the matrix. 
Moreover, three methods - single, complete and median (WPGMC) are known to ignore even the within-cluster frequencies when they merge two clusters. Therefore frequency weighting (either by approach 1 or approach 2) for these methods appear needless altogether. They are insensitive to it and will give the same classification of objects without weightind as with weighting. The only difference will be in the dendrogram looks because with weighting you use more objects to combine and it should show up on the dendro. As for weighting cases in K-means clustering procedure, SPSS allows it: the procedure obeys weighting regime. This is understandable: K-means computation can easily and naturally incorporate integer or fractional weights while computing cluster means. Propagation of cases should give very similar results to clustering under weighting switched on. Two-step cluster analysis of SPSS doesn't support weighting cases, like hierarchical clustering. So the solution here is the propagation of cases described above in approach 1.
Weighted cases in a cluster analysis for cases in SPSS
Using K-means after hierarchical clustering or hierarchical clustering after K-means may be sometimes a sound trick on its own - not because of weighting. Frequency weighting of objects when clusterin
Weighted cases in a cluster analysis for cases in SPSS Using K-means after hierarchical clustering or hierarchical clustering after K-means may be sometimes a sound trick on its own - not because of weighting. Frequency weighting of objects when clustering objects Now about weighting. To do hierarchical cluster analysis of cases with frequency weights attached to the cases (objects to cluster): Approach 1, general. Propagate objects. Multiply the weights by a constant so that the smaller individual weight becomes about 1, and then round the weights; and propagate cases according to those frequencies. For example, if you have 4 groups of cases with corresponding case weights 0.55 0.23 1.98 1.14, multiplying by 4.35 yields 2.39 1.00 8.61 4.96 and then rounding to frequencies 2 1 9 5. Propagate each case of the corresponding group this number of times. In SPSS a syntax to propagate cases is as follows. loop #i= 1 to FREQ. /*FREQ is that recalculated weighting variable xsave outfile= 'FILE.SAV' /keep= VARS. /*FILE.SAV is the dataset you save to hard disk: path and filename /*Optional /keep= VARS is the list of variables you want to save with the file /*In your case that will be of course all the features you cluster by end loop. exec. If you need 10 times greater precision in compliance to the original fractional weights, multiply by 10 before rounding, 23.9 10.0 86.1 49.6 so that the frequencies of propagation will be 24 10 86 50. However, duplicating cases these big number of times may make the dataset too big for a single hierarchical cluster analysis. So don't be too hard with precision. On the other hand, propagated big dataset you can cluster-analyze by randomly selected subsamples - several times. Then you could combine the results (several approaches are possible). [Actually, to perform such resampling with replacement in SPSS you don't need to propagate the data first. However, I will stop and won't go in details of syntax to do it.] After propagation of cases and before clustering, you may want to add tiny random noise to quantitative features - to untie identical cases. It will make results of clustering less dependent on the order of cases in the dataset. do repeat x= var1 var2 var3. /*list of quantitative variables compute x= x+rv.uniform(0,0.00001). /*a noise value between 0 and, say, 0.00001 end repeat. exec. If you are working with already built distance matrix (rather than the dataset) then propagate its rows/columns times you need. In SPSS, you may use handy matrix function !propag() for that - see my web-page. Approach 2. Use resumed agglomeration. Some implementations of hierarchical clustering (an example is my own SPSS macro for hierarchical clustering found on my web-page) allow to interrupt agglomeration and save the currently left distance matrix; that matrix has additional column with within-cluster frequencies so far. The matrix can be used as input to "resume" the clustering. Now, the fact is that some methods of agglomeration, namely, single (nearest neighbour), complete (farthest neighbour), between-group average (UPGMA), centroid and median, do not notice or make difference about what is the within-cluster density when they merge two clusters. Therefore, for these methods resuming agglomeration is equivalent to doing agglomeration with initial frequency weights attached. 
So, if your program has the option to interrupt/resume agglomeration you may use it, under the above methods, to "simulate" weighted input successfully, and you don't need to propagate rows/columns of the matrix. Moreover, three methods - single, complete and median (WPGMC) - are known to ignore even the within-cluster frequencies when they merge two clusters. Therefore frequency weighting (either by approach 1 or approach 2) appears needless altogether for these methods. They are insensitive to it and will give the same classification of objects without weighting as with weighting. The only difference will be in how the dendrogram looks, because with weighting you combine more objects and that shows up on the dendrogram. As for weighting cases in the K-means clustering procedure, SPSS allows it: the procedure obeys the weighting regime. This is understandable: the K-means computation can easily and naturally incorporate integer or fractional weights while computing cluster means. Propagation of cases should give very similar results to clustering with weighting switched on. Like hierarchical clustering, Two-Step cluster analysis in SPSS doesn't support weighting cases, so the solution there is the propagation of cases described above in approach 1.
Weighted cases in a cluster analysis for cases in SPSS Using K-means after hierarchical clustering or hierarchical clustering after K-means may be sometimes a sound trick on its own - not because of weighting. Frequency weighting of objects when clusterin
49,369
What does the covariance of a quaternion *mean*?
EDIT: So my actual answer is that it is difficult to visualize quaternions (and you should not feel alone in that), thus the conversion to Euler angles. The thesis is there in case you wanted the covariance laws, and to add to the rationale above. Here is some information from a thesis (the full thesis PDF link is at the bottom) You may also try locating this book: Vanicek, P. and E.J. Krakiwsky (1986): Geodesy: The Concepts, North-Holland, Amsterdam. source: www.ucalgary.ca/engo_webdocs/GL/96.20096.JSchleppe.pdf
What does the covariance of a quaternion *mean*?
EDIT: So my actual answer is that it is difficult to visualize quaternions (and you should not feel alone in that), thus the conversion to Euler angles. The thesis is there in case you wanted the cov
What does the covariance of a quaternion *mean*? EDIT: So my actual answer is that it is difficult to visualize quaternions (and you should not feel alone in that), thus the conversion to Euler angles. The thesis is there in case you wanted the covariance laws, and to add to the rationale above. Here is some information from a thesis (the full thesis PDF link is at the bottom) You may also try locating this book: Vanicek, P. and E.J. Krakiwsky (1986): Geodesy: The Concepts, North-Holland, Amsterdam. source: www.ucalgary.ca/engo_webdocs/GL/96.20096.JSchleppe.pdf
What does the covariance of a quaternion *mean*? EDIT: So my actual answer is that it is difficult to visualize quaternions (and you should not feel alone in that), thus the conversion to Euler angles. The thesis is there in case you wanted the cov
49,370
Metrics for comparing estimated lists to a 'true' list
Your first issue appears to be that for some ranks, it is considered more important to be predicted correctly (usually the upper ranks), than others. So you should be looking into weighted rank correlation coefficients that can give to the top ranks' similarities/dissimilarities greater weight. Here is some literature in case you are unfamiliar: Pinto da Costa, J.F. and Soares, C. (2005). A weighted rank measure of correlation, Australian & New Zealand Journal of Statistics, 47(4), 515–529. Authors' Abstract :Spearman's rank correlation coefficient is not entirely suitable for measuring the correlation between two rankings in some applications because it treats all ranks equally. In 2000, Blest proposed an alternative measure of correlation that gives more importance to higher ranks but has some drawbacks. This paper proposes a weighted rank measure of correlation that weights the distance between two ranks using a linear function of those ranks, giving more importance to higher ranks than lower ones. It analyses its distribution and provides a table of critical values to test whether a given value of the coefficient is significantly different from zero. The paper also summarizes a number of applications for which the new measure is more suitable than Spearman's. The limit distribution of the above measure can be found in JFP da Costa & LAC Roque (2006) : LIMIT DISTRIBUTION FOR THE WEIGHTED RANK CORRELATION COEFFICIENT $r_w$. REVSTAT – Statistical Journal Volume 4, Number 3, November 2006, 189–200 Another approach: Maturi, T.A. and E.H. Abdelfattah, 2008. A New Weighted Rank Correlation. J. Math. Stat., 4: 226-230. ...and many more if you search "weighted rank correlation coefficient". Your second issue, appears to be whether it is critical, important, or useful, to take into account the accuracy of the predicted votes rather than just the predicted rankings. I have only a tentative thought here: prediction performance metrics, usually ignore whether the prediction under-predicts or over predicts, and consider absolute or squared deviations. In your case it appears useful to assess whether the two models tend to under-predict or over-predict. Perhaps you should examine their failures to predict the correct ranking, and see whether it was due to under-predicting or over-predicting the votes. I mean, assume that person $X$ with true rank $5$ was predicted as rank $6$ by model $A$. Was this because the votes of person $X$ were under-predicted? Or they were over-predicted but some other person's votes were also over-predicted even more? The vote-distances in the true data set appears as a possible normalizing factor here. This may lead to some conclusion regarding how "robust" the comparative evaluation of the two models is, when thinking of other data sets on which they may be applied. But I admit I am just throwing ideas around. I will try to maybe do a little theoretical search/work on this, and if I manage, I will update my answer.
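To make the idea concrete, here is a small illustrative Python sketch of a top-weighted rank agreement measure, where squared rank differences are down-weighted linearly as the (true) ranks get worse; this is only a sketch of the general principle, not the exact published coefficient of Pinto da Costa and Soares, and the arrays true_rank and pred_rank are hypothetical.

import numpy as np

def weighted_rank_agreement(true_rank, pred_rank):
    true_rank = np.asarray(true_rank, dtype=float)
    pred_rank = np.asarray(pred_rank, dtype=float)
    n = len(true_rank)
    # Linear top-weighting: weight 1 for rank 1, down to 1/n for rank n.
    w = (n - true_rank + 1) / n
    d2 = w * (true_rank - pred_rank) ** 2
    # Normalize by the weighted distance of the completely reversed ranking,
    # so identical rankings score 1 and the reversed ranking scores -1.
    reversed_rank = n - true_rank + 1
    d2_rev = w * (true_rank - reversed_rank) ** 2
    return 1 - 2 * d2.sum() / d2_rev.sum()

true_rank = np.arange(1, 11)
pred_rank = np.array([1, 2, 4, 3, 5, 7, 6, 10, 8, 9])
print(weighted_rank_agreement(true_rank, pred_rank))

With this weighting, swapping ranks 1 and 2 costs more than swapping ranks 9 and 10, which is the behaviour you want when the top of the list matters most.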
Metrics for comparing estimated lists to a 'true' list
Your first issue appears to be that for some ranks, it is considered more important to be predicted correctly (usually the upper ranks), than others. So you should be looking into weighted rank correl
Metrics for comparing estimated lists to a 'true' list Your first issue appears to be that for some ranks, it is considered more important to be predicted correctly (usually the upper ranks), than others. So you should be looking into weighted rank correlation coefficients that can give to the top ranks' similarities/dissimilarities greater weight. Here is some literature in case you are unfamiliar: Pinto da Costa, J.F. and Soares, C. (2005). A weighted rank measure of correlation, Australian & New Zealand Journal of Statistics, 47(4), 515–529. Authors' Abstract :Spearman's rank correlation coefficient is not entirely suitable for measuring the correlation between two rankings in some applications because it treats all ranks equally. In 2000, Blest proposed an alternative measure of correlation that gives more importance to higher ranks but has some drawbacks. This paper proposes a weighted rank measure of correlation that weights the distance between two ranks using a linear function of those ranks, giving more importance to higher ranks than lower ones. It analyses its distribution and provides a table of critical values to test whether a given value of the coefficient is significantly different from zero. The paper also summarizes a number of applications for which the new measure is more suitable than Spearman's. The limit distribution of the above measure can be found in JFP da Costa & LAC Roque (2006) : LIMIT DISTRIBUTION FOR THE WEIGHTED RANK CORRELATION COEFFICIENT $r_w$. REVSTAT – Statistical Journal Volume 4, Number 3, November 2006, 189–200 Another approach: Maturi, T.A. and E.H. Abdelfattah, 2008. A New Weighted Rank Correlation. J. Math. Stat., 4: 226-230. ...and many more if you search "weighted rank correlation coefficient". Your second issue, appears to be whether it is critical, important, or useful, to take into account the accuracy of the predicted votes rather than just the predicted rankings. I have only a tentative thought here: prediction performance metrics, usually ignore whether the prediction under-predicts or over predicts, and consider absolute or squared deviations. In your case it appears useful to assess whether the two models tend to under-predict or over-predict. Perhaps you should examine their failures to predict the correct ranking, and see whether it was due to under-predicting or over-predicting the votes. I mean, assume that person $X$ with true rank $5$ was predicted as rank $6$ by model $A$. Was this because the votes of person $X$ were under-predicted? Or they were over-predicted but some other person's votes were also over-predicted even more? The vote-distances in the true data set appears as a possible normalizing factor here. This may lead to some conclusion regarding how "robust" the comparative evaluation of the two models is, when thinking of other data sets on which they may be applied. But I admit I am just throwing ideas around. I will try to maybe do a little theoretical search/work on this, and if I manage, I will update my answer.
Metrics for comparing estimated lists to a 'true' list Your first issue appears to be that for some ranks, it is considered more important to be predicted correctly (usually the upper ranks), than others. So you should be looking into weighted rank correl
49,371
Metrics for comparing estimated lists to a 'true' list
The simple approach is simply to construct a loss function over rankings, like squared errors. However since you are concerned about ties and would like to use voting data as well, you could try to model the cumulative distribution function (CDF) of the votes, which you could do either parametrically or non-parametrically. You then have 3 fitted CDFs: the truth, Model A and Model B. You can construct a distributional loss function based on the integrated sum of square differences between the distributions in votes. You could then combine these two loss functions to construct a parametric weighted loss function of the two, such a $\alpha*L_{1}+(1-\alpha)*L_{2}$. You could then search over possible values of $\alpha$ that perform best.
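Here is a minimal Python sketch of that idea, assuming you have arrays of true and predicted ranks plus true and predicted vote counts (all names are hypothetical); the distributional part approximates the integrated squared difference between the two empirical CDFs on a common grid.

import numpy as np

def rank_loss(true_rank, pred_rank):
    # L1: mean squared error on the ranks themselves.
    return np.mean((np.asarray(true_rank) - np.asarray(pred_rank)) ** 2)

def cdf_loss(true_votes, pred_votes):
    # L2: approximate integrated squared difference between empirical CDFs.
    true_votes = np.asarray(true_votes, dtype=float)
    pred_votes = np.asarray(pred_votes, dtype=float)
    grid = np.linspace(min(true_votes.min(), pred_votes.min()),
                       max(true_votes.max(), pred_votes.max()), 200)
    F_true = np.searchsorted(np.sort(true_votes), grid, side="right") / true_votes.size
    F_pred = np.searchsorted(np.sort(pred_votes), grid, side="right") / pred_votes.size
    return np.trapz((F_true - F_pred) ** 2, grid)

def combined_loss(true_rank, pred_rank, true_votes, pred_votes, alpha=0.5):
    # Weighted combination alpha*L1 + (1-alpha)*L2.
    return (alpha * rank_loss(true_rank, pred_rank)
            + (1 - alpha) * cdf_loss(true_votes, pred_votes))

Evaluating combined_loss for each model over a grid of alpha values makes explicit how the comparison depends on the weight given to rank accuracy versus vote-distribution accuracy.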
Metrics for comparing estimated lists to a 'true' list
The simple approach is simply to construct a loss function over rankings, like squared errors. However since you are concerned about ties and would like to use voting data as well, you could try to
Metrics for comparing estimated lists to a 'true' list The simple approach is simply to construct a loss function over rankings, like squared errors. However since you are concerned about ties and would like to use voting data as well, you could try to model the cumulative distribution function (CDF) of the votes, which you could do either parametrically or non-parametrically. You then have 3 fitted CDFs: the truth, Model A and Model B. You can construct a distributional loss function based on the integrated sum of square differences between the distributions in votes. You could then combine these two loss functions to construct a parametric weighted loss function of the two, such a $\alpha*L_{1}+(1-\alpha)*L_{2}$. You could then search over possible values of $\alpha$ that perform best.
Metrics for comparing estimated lists to a 'true' list The simple approach is simply to construct a loss function over rankings, like squared errors. However since you are concerned about ties and would like to use voting data as well, you could try to
49,372
Prediction error in least squares with a linear model
Your last statement provides an important clue: not only would $D$ be diagonal, it would have to have $p$ units on the diagonal and zeros elsewhere. So there must be something special about $X(X^\prime X)^{-1}X^\prime$. To see what, look at the Singular Value Decomposition of $X$, $$X = U \Sigma V^\prime$$ where $U$ and $V$ are orthogonal (that is, $U^\prime U$ and $V^\prime V$ are identity matrices) and $\Sigma$ is diagonal. Use this to simplify $$(X^\prime X)^{-1} = \left( \left( U \Sigma V^\prime \right)^\prime \left( U \Sigma V^\prime \right)\right)^{-1} = \left( V \Sigma^\prime U^\prime U \Sigma V^\prime \right)^{-1} = \left( V \Sigma^\prime \Sigma V^\prime \right)^{-1}$$ and employ that to compute $$X(X^\prime X)^{-1}X^\prime =\left( U \Sigma V^\prime \right) \left( V \Sigma^\prime \Sigma V^\prime \right)^{-1}\left( U \Sigma V^\prime \right)^\prime = U\left(\Sigma\left(\Sigma^\prime \Sigma\right)^{-1}\Sigma^\prime\right) U^\prime.$$ This exhibits the covariance matrix of $\epsilon$ as being conjugate (via the similarity induced by $U$) to $\Sigma\left(\Sigma^\prime \Sigma\right)^{-1}\Sigma^\prime$, which (since $\Sigma$ is diagonal) has $\text{rank}(\Sigma)$ ones along the diagonal and zeros elsewhere: in other words, the distribution of $\epsilon$ is that of an orthogonal linear combination of $\text{rank}(\Sigma) = \text{rank}(X)$ independent, identically distributed Normal variates. Orthogonal transformations such as $U$ preserve sums of squares. Provided $X$ has full rank--which is $\min(p,n)=p$--the distribution of $E$ therefore is that of the sum of $p$ squares of independent standard Normal variables, which by definition is $\chi^2(p)$. More generally, $E \sim \chi^2(\text{rank}(X)).$ This algebraic argument is one way of finding out that ordinary least squares is just Euclidean geometry: this result is a rediscovery of the Pythagorean Theorem.
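If you want to see this numerically, here is a small Python check (assuming $E$ denotes the quadratic form $z^\prime H z$ of a standard Normal vector $z$ with the hat matrix $H = X(X^\prime X)^{-1}X^\prime$): the simulated mean and variance of $E$ should be close to $p$ and $2p$, the moments of $\chi^2(p)$.

import numpy as np

rng = np.random.default_rng(1)
n, p, reps = 50, 4, 20000
X = rng.normal(size=(n, p))              # fixed design of full rank p
H = X @ np.linalg.solve(X.T @ X, X.T)    # hat matrix X(X'X)^{-1}X'
Z = rng.normal(size=(reps, n))           # rows are iid N(0, I_n) draws
E = np.einsum("ri,ij,rj->r", Z, H, Z)    # quadratic forms z'Hz
print(E.mean(), E.var())                 # approximately p = 4 and 2p = 8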
Prediction error in least squares with a linear model
Your last statement provides an important clue: not only would $D$ be diagonal, it would have to have $p$ units on the diagonal and zeros elsewhere. So there must be something special about $X(X^\pri
Prediction error in least squares with a linear model Your last statement provides an important clue: not only would $D$ be diagonal, it would have to have $p$ units on the diagonal and zeros elsewhere. So there must be something special about $X(X^\prime X)^{-1}X^\prime$. To see what, look at the Singular Value Decomposition of $X$, $$X = U \Sigma V^\prime$$ where $U$ and $V$ are orthogonal (that is, $U^\prime U$ and $V^\prime V$ are identity matrices) and $\Sigma$ is diagonal. Use this to simplify $$(X^\prime X)^{-1} = \left( \left( U \Sigma V^\prime \right)^\prime \left( U \Sigma V^\prime \right)\right)^{-1} = \left( V \Sigma^\prime U^\prime U \Sigma V^\prime \right)^{-1} = \left( V \Sigma^\prime \Sigma V^\prime \right)^{-1}$$ and employ that to compute $$X(X^\prime X)^{-1}X^\prime =\left( U \Sigma V^\prime \right) \left( V \Sigma^\prime \Sigma V^\prime \right)^{-1}\left( U \Sigma V^\prime \right)^\prime = U\left(\Sigma\left(\Sigma^\prime \Sigma\right)^{-1}\Sigma^\prime\right) U^\prime.$$ This exhibits the covariance matrix of $\epsilon$ as being conjugate (via the similarity induced by $U$) to $\Sigma\left(\Sigma^\prime \Sigma\right)\Sigma^\prime$, which (since $\Sigma$ is diagonal) has $\text{rank}(\Sigma)$ ones along the diagonal and zeros elsewhere: in other words, the distribution of $\epsilon$ is that of an orthogonal linear combination of $\text{rank}(\Sigma) = \text{rank}(X)$ independent, identically distributed Normal variates. Orthogonal transformations such as $U$ preserve sums of squares. Provided $X$ has full rank--which is $\min(p,n)=p$--the distribution of $E$ therefore is that of the sum of $p$ squares of independent standard Normal variables, which by definition is $\chi^2(p)$. More generally, $E \sim \chi^2(\text{rank}(X)).$ This algebraic argument is one way of finding out that ordinary least squares is just Euclidean geometry: this result is a rediscovery of the Pythagorean Theorem.
Prediction error in least squares with a linear model Your last statement provides an important clue: not only would $D$ be diagonal, it would have to have $p$ units on the diagonal and zeros elsewhere. So there must be something special about $X(X^\pri
49,373
How to choose between exponential and gamma distributions
Fortunately, you're mistaken. The shape parameter for a gamma ($\alpha$, say) has to be strictly positive, $\alpha > 0$. http://en.wikipedia.org/wiki/Gamma_distribution The exponential has $\alpha=1$. http://en.wikipedia.org/wiki/Gamma_distribution#Others So the exponential is not at the boundary of the parameter space and you should be able to apply a likelihood ratio test without difficulty. (I would say, however, that hypothesis tests are not necessarily a good approach to model selection.)
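For illustration, a likelihood ratio test of the exponential (the null, with $\alpha=1$) against the gamma can be done along these lines in Python with scipy; this is a sketch with simulated stand-in data, so substitute your own positive observations for x.

import numpy as np
from scipy import stats

x = stats.gamma.rvs(a=2.0, scale=1.5, size=500, random_state=0)  # stand-in data

# Fit both models by maximum likelihood, with the location fixed at 0.
a_hat, _, scale_g = stats.gamma.fit(x, floc=0)
_, scale_e = stats.expon.fit(x, floc=0)

ll_gamma = stats.gamma.logpdf(x, a_hat, loc=0, scale=scale_g).sum()
ll_expon = stats.expon.logpdf(x, loc=0, scale=scale_e).sum()

lr = 2 * (ll_gamma - ll_expon)        # likelihood ratio statistic
p_value = stats.chi2.sf(lr, df=1)     # one extra parameter in the gamma
print(lr, p_value)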
How to choose between exponential and gamma distributions
Fortunately, you're mistaken. The shape parameter for a gamma ($\alpha$, say) has to be $\ge 0$. http://en.wikipedia.org/wiki/Gamma_distribution The exponential has $\alpha=1$. http://en.wikipedia.or
How to choose between exponential and gamma distributions Fortunately, you're mistaken. The shape parameter for a gamma ($\alpha$, say) has to be strictly positive, $\alpha > 0$. http://en.wikipedia.org/wiki/Gamma_distribution The exponential has $\alpha=1$. http://en.wikipedia.org/wiki/Gamma_distribution#Others So the exponential is not at the boundary of the parameter space and you should be able to apply a likelihood ratio test without difficulty. (I would say, however, that hypothesis tests are not necessarily a good approach to model selection.)
How to choose between exponential and gamma distributions Fortunately, you're mistaken. The shape parameter for a gamma ($\alpha$, say) has to be $\ge 0$. http://en.wikipedia.org/wiki/Gamma_distribution The exponential has $\alpha=1$. http://en.wikipedia.or
49,374
Can I still interpret a Q-Q plot that uses discrete/rounded data?
As you say, a staircase pattern is an inevitable side-effect of discreteness, but that is the only obvious limitation. The rule for quantile-quantile plots otherwise remains that departures from sameness of distributions are shown by departures from equality of quantiles. Here are some dopey examples. I simulated some Poisson distributions. In practice, it is clearly more engaging to look at real data of interest, but I focus here on the graphical principles. First, I show two samples from the same parent, a Poisson with mean 3. A nuance in the graph is the use of open circles as a plotting symbol together with jittering of points (addition of random noise) to underline that multiple pairs of quantiles are being overplotted at several positions. The line of equality is shown as a diagonal, as is common on quantile-quantile plots. As a minor variation, here is a quantile-quantile plot for a sample from a Poisson of mean 3 and one of mean 4. The mismatch between distributions is evident. Such graphics is, or should be, easy in any well-developed statistical software. For those interested, here is the Stata code used to develop the examples above: clear set scheme s1color set seed 2803 set obs 1000 gen y3_1 = rpoisson(3) label var y3_1 "Poisson mean 3, sample 1" gen y3_2 = rpoisson(3) label var y3_2 "Poisson mean 3, sample 2" gen y4 = rpoisson(4) label var y3_2 "Poisson mean 3, sample 2" qqplot y3*, jitter(2) ms(Oh) label var y4 "Poisson mean 4" qqplot y3_1 y4, jitter(2) ms(Oh) Quantile-quantile plots are also often better on transformed scales, but that is true of continuous (or not rounded) variables too. For counted variables that include zero, square roots are most common but cube roots can be useful. Otherwise logarithms remain the most useful transformation for positive discrete or rounded variables. Incidentally, quantile plots also work well for discrete and rounded data. (Quantile plots, for single distributions, can also be thought of as quantile-quantile plots with reference distribution a standard uniform: a reference line of equality is thus typically not helpful.) Here is a display for the auto data bundled with Stata: There is quite a range of variable types here: values reported as integers but all distinct, values that are measurements but in practice highly rounded, an ordered scale (1..5), a binary variable that is 0 or 1, etc. Naturally quantile plots take any numeric coding literally, but otherwise they are intelligible and even informative at showing variables known to be discrete and variables continuous in principle but quite rounded in practice. The particular quantile plots here are drawn to maximise their family resemblance to box plots, as cumulative probabilities of 0(.25)1 are labelled on the horizontal axis and corresponding values are labelled on the vertical axis. For more discussion, see Cox, N.J. 2012. Axis practice, or what goes where on a graph. Stata Journal 12: 549-561
Can I still interpret a Q-Q plot that uses discrete/rounded data?
As you say, a staircase pattern is an inevitable side-effect of discreteness, but that is the only obvious limitation. The rule for quantile-quantile plots otherwise remains that departures from same
Can I still interpret a Q-Q plot that uses discrete/rounded data? As you say, a staircase pattern is an inevitable side-effect of discreteness, but that is the only obvious limitation. The rule for quantile-quantile plots otherwise remains that departures from sameness of distributions are shown by departures from equality of quantiles. Here are some dopey examples. I simulated some Poisson distributions. In practice, it is clearly more engaging to look at real data of interest, but I focus here on the graphical principles. First, I show two samples from the same parent, a Poisson with mean 3. A nuance in the graph is the use of open circles as a plotting symbol together with jittering of points (addition of random noise) to underline that multiple pairs of quantiles are being overplotted at several positions. The line of equality is shown as a diagonal, as is common on quantile-quantile plots. As a minor variation, here is a quantile-quantile plot for a sample from a Poisson of mean 3 and one of mean 4. The mismatch between distributions is evident. Such graphics is, or should be, easy in any well-developed statistical software. For those interested, here is the Stata code used to develop the examples above: clear set scheme s1color set seed 2803 set obs 1000 gen y3_1 = rpoisson(3) label var y3_1 "Poisson mean 3, sample 1" gen y3_2 = rpoisson(3) label var y3_2 "Poisson mean 3, sample 2" gen y4 = rpoisson(4) label var y3_2 "Poisson mean 3, sample 2" qqplot y3*, jitter(2) ms(Oh) label var y4 "Poisson mean 4" qqplot y3_1 y4, jitter(2) ms(Oh) Quantile-quantile plots are also often better on transformed scales, but that is true of continuous (or not rounded) variables too. For counted variables that include zero, square roots are most common but cube roots can be useful. Otherwise logarithms remain the most useful transformation for positive discrete or rounded variables. Incidentally, quantile plots also work well for discrete and rounded data. (Quantile plots, for single distributions, can also be thought of as quantile-quantile plots with reference distribution a standard uniform: a reference line of equality is thus typically not helpful.) Here is a display for the auto data bundled with Stata: There is quite a range of variable types here: values reported as integers but all distinct, values that are measurements but in practice highly rounded, an ordered scale (1..5), a binary variable that is 0 or 1, etc. Naturally quantile plots take any numeric coding literally, but otherwise they are intelligible and even informative at showing variables known to be discrete and variables continuous in principle but quite rounded in practice. The particular quantile plots here are drawn to maximise their family resemblance to box plots, as cumulative probabilities of 0(.25)1 are labelled on the horizontal axis and corresponding values are labelled on the vertical axis. For more discussion, see Cox, N.J. 2012. Axis practice, or what goes where on a graph. Stata Journal 12: 549-561
Can I still interpret a Q-Q plot that uses discrete/rounded data? As you say, a staircase pattern is an inevitable side-effect of discreteness, but that is the only obvious limitation. The rule for quantile-quantile plots otherwise remains that departures from same
49,375
About the derivation of group Lasso
The subgradient of the L2 norm $\|\beta_j\|$ is as follows: 1. it equals $\frac{\beta_j}{\|\beta_j\|}$ when $\beta_j \ne 0$; 2. it is any vector $s_j$ with $\|s_j\| \le 1$ when $\beta_j = 0$. Combining these two cases in the optimality condition gives the plus sign in the formula.
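Spelled out (with notation that may differ slightly from the book's scaling conventions, e.g. possible $\frac{1}{2}$ factors or $\sqrt{p_j}$ group weights), the stationarity condition for group $j$ reads $$-X_j^\top\Big(y - \sum_k X_k\beta_k\Big) + \lambda\, s_j = 0, \qquad s_j = \begin{cases} \dfrac{\beta_j}{\|\beta_j\|_2} & \text{if } \beta_j \neq 0,\\ \text{any vector with } \|s_j\|_2 \le 1 & \text{if } \beta_j = 0, \end{cases}$$ so $\beta_j = 0$ is optimal exactly when $\big\|X_j^\top\big(y - \sum_{k\neq j} X_k\beta_k\big)\big\|_2 \le \lambda$.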
About the derivation of group Lasso
the derivation of L2 norm is follow: 1. $\frac{\beta_j}{\|\beta_j\|}$ when $\beta_j \ne 0 $. 2. any vector with $ \| \beta_j \| \le 1 $ when $beta_j = 0$. So when combing these two formula together, y
About the derivation of group Lasso The subgradient of the L2 norm $\|\beta_j\|$ is as follows: 1. it equals $\frac{\beta_j}{\|\beta_j\|}$ when $\beta_j \ne 0$; 2. it is any vector $s_j$ with $\|s_j\| \le 1$ when $\beta_j = 0$. Combining these two cases in the optimality condition gives the plus sign in the formula.
About the derivation of group Lasso the derivation of L2 norm is follow: 1. $\frac{\beta_j}{\|\beta_j\|}$ when $\beta_j \ne 0 $. 2. any vector with $ \| \beta_j \| \le 1 $ when $beta_j = 0$. So when combing these two formula together, y
49,376
How to model the effect of time in a balanced repeated measures design with 2 measures each at baseline, during instruction and post instruction?
I'd suggest you use Time as a 6-level factor, and then use appropriate contrasts to compare the phases, e.g., Post vs Instruction would be examined using contrast coefficients $(0,0,-.5,-.5,+.5,+.5)$. It's potentially important to use Time itself in the model because of the repeated measures. For example, some people might want to put some kind of time-series error structure on these -- or possibly a model that assumes greater correlation within a phase than between them. Example Toy dataset for 3 subjects, 6 times: > fake = expand.grid(time=1:6, subj=letters[1:3]) > fake$y = c(18,15,30,28,48,49,19,18,27,28,49,52,19,18,27,25,48,49) Fit a model with time as a 6-level factor and subj as a random effect: > library(lme4) > fake.lmer = lmer(y ~ factor(time) + (1|subj), data = fake) I'll do it in three stages using the functions in the lsmeans package. First, get the LS~means (aka predictions) for each time > library(lsmeans) > (time.lsm = lsmeans(fake.lmer, "time")) time lsmean SE df lower.CL upper.CL 1 18.66667 0.8388705 12 16.83890 20.49444 2 17.00000 0.8388705 12 15.17223 18.82777 3 28.00000 0.8388705 12 26.17223 29.82777 4 27.00000 0.8388705 12 25.17223 28.82777 5 48.33333 0.8388705 12 46.50556 50.16110 6 50.00000 0.8388705 12 48.17223 51.82777 As a convenience, here are the predictions averaged together in each phase. The function is contrast, but it can estimate linear functions whether or not they are contrasts: > (phase.lsm = contrast(time.lsm, list(base = c(.5,.5,0,0,0,0), + instr = c(0,0,.5,.5,0,0), post = c(0,0,0,0,.5,.5)))) contrast estimate SE df t.ratio p.value base 17.83333 0.5947299 9.86 29.986 <.0001 instr 27.50000 0.5947299 9.86 46.239 <.0001 post 49.16667 0.5947299 9.86 82.671 <.0001 Now obtain pairwise comparisons of these: > pairs(phase.lsm) contrast estimate SE df t.ratio p.value base - instr -9.666667 0.83666 10 -11.554 <.0001 base - post -31.333333 0.83666 10 -37.450 <.0001 instr - post -21.666667 0.83666 10 -25.897 <.0001 P value adjustment: tukey method for a family of 3 means Note that I could have gone directly to the types of contrasts I mentioned before. For example: > contrast(time.lsm, list(`base-instr` = c(.5,.5, -.5,-.5, 0,0))) contrast estimate SE df t.ratio p.value base.instr -9.666667 0.83666 10 -11.554 <.0001
How to model the effect of time in a balanced repeated measures design with 2 measures each at basel
I'd suggest you use Time as a 6-level factor, and then use appropriate contrasts to compare the phases, e.g., Post vs Instruction would be examined using contrast coefficients $(0,0,-.5,-.5,+.5,+.5)$.
How to model the effect of time in a balanced repeated measures design with 2 measures each at baseline, during instruction and post instruction? I'd suggest you use Time as a 6-level factor, and then use appropriate contrasts to compare the phases, e.g., Post vs Instruction would be examined using contrast coefficients $(0,0,-.5,-.5,+.5,+.5)$. It's potentially important to use Time itself in the model because of the repeated measures. For example, some people might want to put some kind of time-series error structure on these -- or possibly a model that assumes greater correlation within a phase than between them. Example Toy dataset for 3 subjects, 6 times: > fake = expand.grid(time=1:6, subj=letters[1:3]) > fake$y = c(18,15,30,28,48,49,19,18,27,28,49,52,19,18,27,25,48,49) Fit a model with time as a 6-level factor and subj as a random effect: > library(lme4) > fake.lmer = lmer(y ~ factor(time) + (1|subj), data = fake) I'll do it in three stages using the functions in the lsmeans package. First, get the LS~means (aka predictions) for each time > library(lsmeans) > (time.lsm = lsmeans(fake.lmer, "time")) time lsmean SE df lower.CL upper.CL 1 18.66667 0.8388705 12 16.83890 20.49444 2 17.00000 0.8388705 12 15.17223 18.82777 3 28.00000 0.8388705 12 26.17223 29.82777 4 27.00000 0.8388705 12 25.17223 28.82777 5 48.33333 0.8388705 12 46.50556 50.16110 6 50.00000 0.8388705 12 48.17223 51.82777 As a convenience, here are the predictions averaged together in each phase. The function is contrast, but it can estimate linear functions whether or not they are contrasts: > (phase.lsm = contrast(time.lsm, list(base = c(.5,.5,0,0,0,0), + instr = c(0,0,.5,.5,0,0), post = c(0,0,0,0,.5,.5)))) contrast estimate SE df t.ratio p.value base 17.83333 0.5947299 9.86 29.986 <.0001 instr 27.50000 0.5947299 9.86 46.239 <.0001 post 49.16667 0.5947299 9.86 82.671 <.0001 Now obtain pairwise comparisons of these: > pairs(phase.lsm) contrast estimate SE df t.ratio p.value base - instr -9.666667 0.83666 10 -11.554 <.0001 base - post -31.333333 0.83666 10 -37.450 <.0001 instr - post -21.666667 0.83666 10 -25.897 <.0001 P value adjustment: tukey method for a family of 3 means Note that I could have gone directly to the types of contrasts I mentioned before. For example: > contrast(time.lsm, list(`base-instr` = c(.5,.5, -.5,-.5, 0,0))) contrast estimate SE df t.ratio p.value base.instr -9.666667 0.83666 10 -11.554 <.0001
How to model the effect of time in a balanced repeated measures design with 2 measures each at basel I'd suggest you use Time as a 6-level factor, and then use appropriate contrasts to compare the phases, e.g., Post vs Instruction would be examined using contrast coefficients $(0,0,-.5,-.5,+.5,+.5)$.
49,377
How to model the effect of time in a balanced repeated measures design with 2 measures each at baseline, during instruction and post instruction?
There seem to be many possible ways to do this, depending on what exactly you want. The idea of ANOVA or repeated-measures ANOVA only makes sense (to me) if you have different treatment groups (say, half of the 38 received different instructions etc.). Since all participants belong to 1 group, it seems to me all you need is a good old paired t(z)-test. But first, you need to define growth. E.g. if you define growth to be the difference between the last measurement and the first, then you can run: t.test(Y6,Y1,paired=TRUE) (where Y6 and Y1 are the measurements at the corresponding times). If you define growth to be the difference between the last 2 and the first 2, then you can first derive that variable, and reduce the problem to the previous case. Ypost=(Y6+Y5)/2;Ybase=(Y1+Y2)/2 and then t.test(Ypost,Ybase,paired=TRUE) This is of course the simplest way to do the analysis; there are arguably more sophisticated ways to do things, like a linear mixed model with a random participant effect and a temporally correlated error structure. But without knowing what exactly you want to do, it seems best to stick with the simpler way (i.e. t.test).
How to model the effect of time in a balanced repeated measures design with 2 measures each at basel
There seems to be many possible ways to do this, depending on what you want exactly. The idea of ANOVA or repeated measure ANOVA only makes sense (to me) if you have different treatment groups (say, h
How to model the effect of time in a balanced repeated measures design with 2 measures each at baseline, during instruction and post instruction? There seems to be many possible ways to do this, depending on what you want exactly. The idea of ANOVA or repeated measure ANOVA only makes sense (to me) if you have different treatment groups (say, half of the 38 received different instructions etc.). Since all participants belong to 1 group, it seems to me all you need is a good old paired t(z)-test. But firstly, you need to define growth. E.g. if you define growth to be the difference between the last measurement and the first, then you can run: t.test(Y6,Y1,paired=TRUE) (where Y6 and Y1 are the measurements at the corresponding time). If you define growth to the the difference between the last 2 and the first 2, then you can first derive that variable, and reduce the problem to the previous case. Ypost=(Y6+Y5)/2;Ybase=(Y1+Y2)/2 and then t.test(Ypost,Ybase,paired=TRUE) This is of course the simplest way to do the analyses, there is arguably more sophisticated ways to do things, like a linear mixed model with random participant effect and temporally correlated error structure. But without knowing what exactly you want to do, it seems best to stick with the simpler way (i.e. t.test).
How to model the effect of time in a balanced repeated measures design with 2 measures each at basel There seems to be many possible ways to do this, depending on what you want exactly. The idea of ANOVA or repeated measure ANOVA only makes sense (to me) if you have different treatment groups (say, h
49,378
Alternative to forecast() and ets() in Python?
There is an open PR to add full ETS functionality to statsmodels here. I ran out of steam trying to code up all the heuristics for optimization that Hyndman suggests to make it less fragile. If someone wants to take up the torch here, I have some uncommitted code and thoughts on how to proceed.
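Depending on your statsmodels version, a partial workaround is the Holt-Winters exponential smoothing implementation added in newer releases (statsmodels.tsa.holtwinters.ExponentialSmoothing); it covers the smoothing side of ETS but not the full state-space framework with automatic model selection that forecast::ets() provides. A minimal sketch, assuming such a version and a hypothetical monthly series:

import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# y: a pandas Series of monthly observations (made-up numbers here).
y = pd.Series([112, 118, 132, 129, 121, 135, 148, 148, 136, 119, 104, 118] * 4,
              index=pd.date_range("2000-01-31", periods=48, freq="M"))

model = ExponentialSmoothing(y, trend="add", seasonal="add", seasonal_periods=12)
fit = model.fit()            # estimates the smoothing parameters
print(fit.forecast(12))      # 12-step-ahead forecasts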
Alternative to forecast() and ets() in Python?
There is an open PR to add full ETS functionality to statsmodels here. I ran out of steam trying to code up all the heuristics for optimization that Hyndman suggests to make it less fragile. If someon
Alternative to forecast() and ets() in Python? There is an open PR to add full ETS functionality to statsmodels here. I ran out of steam trying to code up all the heuristics for optimization that Hyndman suggests to make it less fragile. If someone wants to take up the torch here, I have some uncommitted code and thoughts on how to proceed.
Alternative to forecast() and ets() in Python? There is an open PR to add full ETS functionality to statsmodels here. I ran out of steam trying to code up all the heuristics for optimization that Hyndman suggests to make it less fragile. If someon
49,379
Two simple questions regarding GLM
The equation is a general form for the broad class of densities in the exponential family (i.e. that's the pdf). If it's the distribution for y corresponding to a fixed $x_i$, is it possible that even if the plot of $y$ against $x$ looks like a straight line, I should still use GLM instead of simple regression? The equation for the conditional density is unrelated to the form of the relationship between $y$ and $x$. It is perfectly possible to fit a linear function (via the identity link) with an exponential family conditional density. Which is to say, yes, you can still use GLMs when it's a straight line. Indeed the Gaussian is in the exponential family, so you can still do regression there also. e.g. here's a straight line fit with a Gamma response: > summary(glm(dist~speed,cars,family=Gamma(link=identity))) Call: glm(formula = dist ~ speed, family = Gamma(link = identity), data = cars) Deviance Residuals: Min 1Q Median 3Q Max -1.07986 -0.29703 -0.06053 0.22879 0.87150 Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) -7.5843 2.1292 -3.562 0.000843 *** speed 3.2106 0.2556 12.563 < 2e-16 *** --- Signif. codes: 0 β€˜***’ 0.001 β€˜**’ 0.01 β€˜*’ 0.05 β€˜.’ 0.1 β€˜ ’ 1 (Dispersion parameter for Gamma family taken to be 0.1617597) Null deviance: 22.4827 on 49 degrees of freedom Residual deviance: 8.0945 on 48 degrees of freedom AIC: 411.79 Number of Fisher Scoring iterations: 8 The Gamma linear fit is in red, the least squares fit is in blue. See the discussion here as to why a gamma model, even with identity link is better in this case (essentially, none of the fitted stopping distances are negative). It's still less than perfect (it suggests a 0 and then negative stopping distance at a positive speed), but its fit is at least plausible within the range of $x$ values we actually have, which is a very useful property to have. (Of course, even better would be to fit a more plausible model.) The book suggests that when assuming response y follows a gamma distribution, it is a common practise to use a logarithmic link function. In insurance and many other financial applications certainly. Partly that's because the relationships involving things like money tend to be multiplicative, and are broadly understood in that form. I made a histogram of y (with frequency on y-axis) and it looks like a gamma curve fits well. It may look like that, but it's not necessarily very meaningful, and doesn't relate to the assumption. Does that essentially imply that I should choose f(y) to be gamma? I kinda doubt it because I suppose yi|X=xi and Y are essentially two different things. You're correct to doubt it. It's the conditional distribution that's assumed to be gamma. If $y$ depends on $x$, the unconditional distribution of $y$ will be a mixture of those conditional distributions and may not be meaningful. It could look completely different from gamma; a different pattern of $x$ values could change the y-histogram dramatically, while leaving the conditional distributions unchanged.
Two simple questions regarding GLM
The equation is a general form for the broad class of densities in the exponential family (i.e. that's the pdf). If it's the distribution for y corresponding to a fixed $x_i$, is it possible that ev
Two simple questions regarding GLM The equation is a general form for the broad class of densities in the exponential family (i.e. that's the pdf). If it's the distribution for y corresponding to a fixed $x_i$, is it possible that even if the plot of $y$ against $x$ looks like a straight line, I should still use GLM instead of simple regression? The equation for the conditional density is unrelated to the form of the relationship between $y$ and $x$. It is perfectly possible to fit a linear function (via the identity link) with an exponential family conditional density. Which is to say, yes, you can still use GLMs when it's a straight line. Indeed the Gaussian is in the exponential family, so you can still do regression there also. e.g. here's a straight line fit with a Gamma response: > summary(glm(dist~speed,cars,family=Gamma(link=identity))) Call: glm(formula = dist ~ speed, family = Gamma(link = identity), data = cars) Deviance Residuals: Min 1Q Median 3Q Max -1.07986 -0.29703 -0.06053 0.22879 0.87150 Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) -7.5843 2.1292 -3.562 0.000843 *** speed 3.2106 0.2556 12.563 < 2e-16 *** --- Signif. codes: 0 β€˜***’ 0.001 β€˜**’ 0.01 β€˜*’ 0.05 β€˜.’ 0.1 β€˜ ’ 1 (Dispersion parameter for Gamma family taken to be 0.1617597) Null deviance: 22.4827 on 49 degrees of freedom Residual deviance: 8.0945 on 48 degrees of freedom AIC: 411.79 Number of Fisher Scoring iterations: 8 The Gamma linear fit is in red, the least squares fit is in blue. See the discussion here as to why a gamma model, even with identity link is better in this case (essentially, none of the fitted stopping distances are negative). It's still less than perfect (it suggests a 0 and then negative stopping distance at a positive speed), but its fit is at least plausible within the range of $x$ values we actually have, which is a very useful property to have. (Of course, even better would be to fit a more plausible model.) The book suggests that when assuming response y follows a gamma distribution, it is a common practise to use a logarithmic link function. In insurance and many other financial applications certainly. Partly that's because the relationships involving things like money tend to be multiplicative, and are broadly understood in that form. I made a histogram of y (with frequency on y-axis) and it looks like a gamma curve fits well. It may look like that, but it's not necessarily very meaningful, and doesn't relate to the assumption. Does that essentially imply that I should choose f(y) to be gamma? I kinda doubt it because I suppose yi|X=xi and Y are essentially two different things. You're correct to doubt it. It's the conditional distribution that's assumed to be gamma. If $y$ depends on $x$, the unconditional distribution of $y$ will be a mixture of those conditional distributions and may not be meaningful. It could look completely different from gamma; a different pattern of $x$ values could change the y-histogram dramatically, while leaving the conditional distributions unchanged.
Two simple questions regarding GLM The equation is a general form for the broad class of densities in the exponential family (i.e. that's the pdf). If it's the distribution for y corresponding to a fixed $x_i$, is it possible that ev
49,380
Two simple questions regarding GLM
That is the equation for the distribution of $y$; its mean $E[y]$ is a function of its parameters. When we model that mean as a function of $\beta^TX_i$ (through the link function), we write it $E[y_i|X_i]$. The logarithmic link is the "canonical link" for the Poisson. More info here. This setup is developed in chapter 4 of Categorical Data Analysis by Agresti. It's the book I used in my GLM class and it's also not bad for self-study, imo.
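For instance, a Poisson regression with its canonical log link can be fit along these lines in Python with statsmodels (a sketch with made-up data; the log link is the default for the Poisson family):

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
x = rng.uniform(0, 2, size=200)
y = rng.poisson(np.exp(0.5 + 1.2 * x))       # true model: log(mu) = 0.5 + 1.2*x

X = sm.add_constant(x)
fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()   # log link by default
print(fit.params)                            # should be near (0.5, 1.2)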
Two simple questions regarding GLM
That is the equation for the distribution of $y$. Its mean $E[y]$ is a function thereof, when we set that mean equal to $\beta^TX_i$ we call it $E[y_i|X_i]$ The logarithmic link is the "canonical link
Two simple questions regarding GLM That is the equation for the distribution of $y$. Its mean $E[y]$ is a function thereof, when we set that mean equal to $\beta^TX_i$ we call it $E[y_i|X_i]$ The logarithmic link is the "canonical link" for the Poisson. More info here. This setup is developed in chapter 4 of Categorical Data Analysis by Agresti. It's the book I used in my GLM class and it's also not bad for self-study, imo.
Two simple questions regarding GLM That is the equation for the distribution of $y$. Its mean $E[y]$ is a function thereof, when we set that mean equal to $\beta^TX_i$ we call it $E[y_i|X_i]$ The logarithmic link is the "canonical link
49,381
Does the vanishing gradient in RNNs present a problem?
First let's restate the problem of vanishing gradients. Suppose you have a normal multilayer perceptron with sigmoidal hidden units. This is trained by back-propagation. When there are many hidden layers the error gradient weakens as it moves from the back of the network to the front, because the derivative of the sigmoid shrinks towards zero as the units saturate. The updates as you move to the front of the network will contain less information. RNNs amplify this problem because they are trained by back-propagation through time (BPTT). Effectively the number of layers that is traversed by back-propagation grows dramatically. The long short-term memory (LSTM) architecture avoids the problem of vanishing gradients by introducing error gating. This allows it to learn long-term (100+ step) dependencies between data points through "error carousels." A more recent trend in training neural networks is to use rectified linear units, which are more robust towards the vanishing gradient problem. RNNs with sparsity penalization and rectified linear units apparently work well. See Advances In Optimizing Recurrent Networks. Historically, neural network performance has depended greatly on many optimization tricks and the selection of many hyperparameters. In the case of RNNs you'd be wise to also implement rmsprop and Nesterov’s accelerated gradient. Thankfully, the recent developments in dropout training have made neural networks more robust towards overfitting. Apparently there is some work towards making dropout work with RNNs. See On Fast Dropout and its Applicability to Recurrent Networks
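A tiny numerical illustration of why the gradient vanishes (a sketch, not a real BPTT implementation): the backpropagated signal picks up one factor of the sigmoid derivative, which is at most 0.25, at every layer or time step, so the product typically collapses geometrically.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
grad = 1.0
for t in range(50):
    z = rng.normal()       # pre-activation at this layer / time step
    w = rng.normal()       # a hidden/recurrent weight on the backward path
    grad *= w * sigmoid(z) * (1.0 - sigmoid(z))   # chain rule: one factor per step
    if (t + 1) % 10 == 0:
        print(t + 1, abs(grad))   # the magnitude usually shrinks towards 0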
Does the vanishing gradient in RNNs present a problem?
First let's restate the problem of vanishing gradients. Suppose you have a normal multilayer perceptron with sigmoidal hidden units. This is trained by back-propagation. When there are many hidden lay
Does the vanishing gradient in RNNs present a problem? First let's restate the problem of vanishing gradients. Suppose you have a normal multilayer perceptron with sigmoidal hidden units. This is trained by back-propagation. When there are many hidden layers the error gradient weakens as it moves from the back of the network to the front, because the derivative the sigmoid weakens towards the poles. The updates as you move to the front of the network will contain less information. RNNs amplify this problem because they are trained by back-propagation through time (BPTT). Effectively the number of layers that is traversed by back-propagation grows dramatically. The long short term memory (LSTM) architecture to avoids the problem of vanishing gradients by introducing error gating. This allows it to learn long term (100+ step) dependencies between data points through "error carousels." A more recent trend in training neural networks is to use rectified linear units, which are more robust towards the vanishing gradient problem. RNNs with sparsity penalization and rectified linear unit apparently work well. See Advances In Optimizing Recurrent Networks. Historically neural networks performance greatly depended on many optimization tricks and the selection of many hyperparameters. In the case of RNN you'd be wise to also implement rmsprop and Nesterov’s accelerated gradient. Thankfully, the recent developments in dropout training have made neural networks more robust towards overfitting. Apparently there is some work towards making dropout work with RNNs. See On Fast Dropout and its Applicability to Recurrent Networks
Does the vanishing gradient in RNNs present a problem? First let's restate the problem of vanishing gradients. Suppose you have a normal multilayer perceptron with sigmoidal hidden units. This is trained by back-propagation. When there are many hidden lay
49,382
Statistical Significance or Unambiguous Direction of Influence?
I think this way about the relationship between NHSTs and CIs in general, but don't know of any references that describe everything the same way off the top of my head...Seems there's bound to be some out there though, as resolving directional ambiguity of an effect is the most compelling reason to perform a NHST that I know of. However, statistical significance doesn't really truly completely absolutely resolve this ambiguity. As you say (I'm sure most/all of this has been said before), the resolution is only probabilistic, and that probability is $1-\alpha$ in the Neyman–Pearson framework or $1-p$ in the Fisherian framework (AFAIK; I could be misrepresenting the latter). Thus it isn't even a particularly foolproof "barest minimum requirement". The choice of $\alpha$ is at worst arbitrary, usually a matter of scientific convention, and at best a principled and measured departure from convention (e.g., "I have lots of power, and a false alarm error would cause major harm"). Being a matter of convention isn't terrible, but it isn't quite foolproof either, and causes some known problems for the big picture because we give it so many chances to go wrong. Consider how easily people lose sight of the non-finality of a result, and how much hinges on crossing that threshold. Arguably, this is the flaw in your interpretation: one could still make practical use of barer minimums. People often choose to replicate studies with $p=.06$, and if they can afford to, why shouldn't they? These people should probably choose $\alpha>.05$, or multiple $\alpha$s corresponding to different decisions: E.g., $p<\alpha_1=.04$: reject $\rm H_0$ comfortably and proceed to build on $\rm H_A$, ideally while continuing to gather replicative evidence $\alpha_1\le p<\alpha_2\approx[.15,.30]$ (somewhere in there depending on how willing one is to replicate): interpret with caution and seek replication before building on $\rm H_A$ $p>\alpha_2$, given sufficient power: abandon hope of rejecting $\rm H_0$ meaningfully and reevaluate life choices – unless the null result is actually useful, of course. Power is tied inextricably into the meaning of NHST results too...but maybe these are all the issues you intended to set aside by stipulating provisional acceptance of frequentist NHST methodology. I admit, the Neyman–Pearson $\alpha$ may even be optimal – if not foolproof – as a choice of general, if simplistic statistical practice. Some simplification of the interpretive process probably helps broaden the accessibility of inferential statistics, which might only be more esoteric and still no less error-prone in general practice if tests were interpreted less dichotomously as Fisher seems to have intended. I'm a fan of confidence intervals in general, and might be doing them a disservice by admitting that Neyman–Pearson NHST might deserve its place in entry-level / mainstream statistics, but my only issues (for now) with your interpretation are my issues with NHST in general. Oh, and another "gap": this doesn't really apply to nulls other than zero. Granted, those are unusual; the power of convention (or the fear of defying reviewers' expectations) is such that probably more than 95% of analysts stick to both $\alpha=.05$ and $\rm H_0:\mu=0$...but this isn't mandatory, and I often find myself reminding people of that. The adjustment to what you've said that's necessary to accommodate other nulls isn't hard though. 
It would just be a question of whether the effect is probably ($1-\alpha$) in the hypothesized direction and of at least the hypothesized size.
Statistical Significance or Unambiguous Direction of Influence?
I think this way about the relationship between NHSTs and CIs in general, but don't know of any references that describe everything the same way off the top of my head...Seems there's bound to be some
Statistical Significance or Unambiguous Direction of Influence? I think this way about the relationship between NHSTs and CIs in general, but don't know of any references that describe everything the same way off the top of my head...Seems there's bound to be some out there though, as resolving directional ambiguity of an effect is the most compelling reason to perform a NHST that I know of. However, statistical significance doesn't really truly completely absolutely resolve this ambiguity. As you say (I'm sure most/all of this has been said before), the resolution is only probabilistic, and that probability is $1-\alpha$ in the Neyman–Pearson framework or $1-p$ in the Fisherian framework (AFAIK; I could be misrepresenting the latter). Thus it isn't even a particularly foolproof "barest minimum requirement". The choice of $\alpha$ is at worst arbitrary, usually a matter of scientific convention, and at best a principled and measured departure from convention (e.g., "I have lots of power, and a false alarm error would cause major harm"). Being a matter of convention isn't terrible, but it isn't quite foolproof either, and causes some known problems for the big picture because we give it so many chances to go wrong. Consider how easily people lose sight of the non-finality of a result, and how much hinges on crossing that threshold. Arguably, this is the flaw in your interpretation: one could still make practical use of barer minimums. People often choose to replicate studies with $p=.06$, and if they can afford to, why shouldn't they? These people should probably choose $\alpha>.05$, or multiple $\alpha$s corresponding to different decisions: E.g., $p<\alpha_1=.04$: reject $\rm H_0$ comfortably and proceed to build on $\rm H_A$, ideally while continuing to gather replicative evidence $\alpha_1\le p<\alpha_2\approx[.15,.30]$ (somewhere in there depending on how willing one is to replicate): interpret with caution and seek replication before building on $\rm H_A$ $p>\alpha_2$, given sufficient power: abandon hope of rejecting $\rm H_0$ meaningfully and reevaluate life choices – unless the null result is actually useful, of course. Power is tied inextricably into the meaning of NHST results too...but maybe these are all the issues you intended to set aside by stipulating provisional acceptance of frequentist NHST methodology. I admit, the Neyman–Pearson $\alpha$ may even be optimal – if not foolproof – as a choice of general, if simplistic statistical practice. Some simplification of the interpretive process probably helps broaden the accessibility of inferential statistics, which might only be more esoteric and still no less error-prone in general practice if tests were interpreted less dichotomously as Fisher seems to have intended. I'm a fan of confidence intervals in general, and might be doing them a disservice by admitting that Neyman–Pearson NHST might deserve its place in entry-level / mainstream statistics, but my only issues (for now) with your interpretation are my issues with NHST in general. Oh, and another "gap": this doesn't really apply to nulls other than zero. Granted, those are unusual; the power of convention (or the fear of defying reviewers' expectations) is such that probably more than 95% of analysts stick to both $\alpha=.05$ and $\rm H_0:\mu=0$...but this isn't mandatory, and I often find myself reminding people of that. The adjustment to what you've said that's necessary to accommodate other nulls isn't hard though. 
It would just be a question of whether the effect is probably ($1-\alpha$) in the hypothesized direction and of at least the hypothesized size.
Statistical Significance or Unambiguous Direction of Influence? I think this way about the relationship between NHSTs and CIs in general, but don't know of any references that describe everything the same way off the top of my head...Seems there's bound to be some
49,383
In inverse theory, how do I transform the averaging kernel matrix to a new grid?
This is considered in Calisesi et al. (2005). They derive that $$\mathbf{A_{z_i}} = \mathbf{W_i^* A_x W_i} \, ,$$ where $\mathbf{A_{z_i}}$ is the averaging kernel for the new grid, $\mathbf{W_i}$ is the interpolation matrix with $\mathbf{W_i^*}$ its Moore-Penrose pseudo-inverse, and $\mathbf{A_x}$ is the averaging kernel matrix for the full state vector $\mathbf{x}$. For the inverse transformation, $$\mathbf{A_x} = \mathbf{W_i A_{z_i} W_i^*} + \mathbf{\epsilon_{A_i}}\, ,$$ where $\mathbf{\epsilon_{A_i}} = \mathbf{A_x} - \mathbf{W_i W_i^* A_x W_i W_i^*}$. Across independent numerical grids, $$\mathbf{A_{z_1}} = \mathbf{W_{12} A_{z_2} W_{21}} + \mathbf{W_1^* \epsilon_{A_2} W_1} \, .$$ For a full derivation, see Calisesi et al. (2005). Calisesi, Y., V. T. Soebijanta and R. van Oss (2005), Regridding of remote soundings: Formulation and application to ozone profile comparison, J. Geophys. Res., 110, D23306, doi:10.1029/2005JD006122
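A small numerical sketch of the first relation in Python/numpy, with a made-up fine grid, a made-up coarse grid, and simple linear interpolation used to build $\mathbf{W_i}$ (your retrieval will have its own grids and interpolation matrix):

import numpy as np

z_fine = np.linspace(0, 50, 51)        # full retrieval grid (e.g. altitude in km)
z_coarse = np.linspace(0, 50, 11)      # target grid

# W maps a profile on the coarse grid to the fine grid by linear interpolation;
# its columns are the interpolated unit vectors of the coarse grid.
W = np.zeros((z_fine.size, z_coarse.size))
for j in range(z_coarse.size):
    e = np.zeros(z_coarse.size)
    e[j] = 1.0
    W[:, j] = np.interp(z_fine, z_coarse, e)

# Placeholder averaging kernel on the fine grid: a smooth, row-normalized kernel.
A_x = np.exp(-0.5 * ((z_fine[:, None] - z_fine[None, :]) / 3.0) ** 2)
A_x /= A_x.sum(axis=1, keepdims=True)

W_pinv = np.linalg.pinv(W)             # Moore-Penrose pseudo-inverse W*
A_z = W_pinv @ A_x @ W                 # A_z = W* A_x W on the coarse grid
print(A_z.shape)                       # (11, 11)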
In inverse theory, how do I transform the averaging kernel matrix to a new grid?
This is considered in Calisesi et al. (2005). They derive that $$\mathbf{A_{z_i}} = \mathbf{W_i^* A_x W_i} \, ,$$ where $\mathbf{A_{z_i}}$ is the averaging kernel for the new grid, $\mathbf{W_i}$ is
In inverse theory, how do I transform the averaging kernel matrix to a new grid? This is considered in Calisesi et al. (2005). They derive that $$\mathbf{A_{z_i}} = \mathbf{W_i^* A_x W_i} \, ,$$ where $\mathbf{A_{z_i}}$ is the averaging kernel for the new grid, $\mathbf{W_i}$ is the interpolation matrix with $\mathbf{W_i^*}$ its Moore-Penrose pseudo-inverse, and $\mathbf{A_x}$ is the averaging kernel matrix for the full state vector $\mathbf{x}$. For the inverse transformation, $$\mathbf{A_x} = \mathbf{W_i A_{z_i} W_i^*} + \mathbf{\epsilon_{A_i}}\, ,$$ where $\mathbf{\epsilon_{A_i}} = \mathbf{A_x} - \mathbf{W_i W_i^* A_x W_i W_i^*}$. Across independent numerical grids, $$\mathbf{A_{z_1}} = \mathbf{W_{12} A_{z_2} W_{21}} + \mathbf{W_1^* \epsilon_{A_2} W_1} \, .$$ For a full derevation, see Calisesi et al. (2005). Calisesi, Y., V. T. Soebijanta and R. van Oss (2005), Regridding of remote soundings: Formulation and application to ozone profile comparison, J. Geophys. Res., 110, D23306, doi:10.1029/2005JD006122
In inverse theory, how do I transform the averaging kernel matrix to a new grid? This is considered in Calisesi et al. (2005). They derive that $$\mathbf{A_{z_i}} = \mathbf{W_i^* A_x W_i} \, ,$$ where $\mathbf{A_{z_i}}$ is the averaging kernel for the new grid, $\mathbf{W_i}$ is
49,384
How to cope with missing data in logistic regression?
I am afraid you cannot expect to find some "canned" solution to your problem. Most methods for handling missing data assume "missing at random" or even "missing completely at random" (you can google those terms!). Your problem definitely seems to be one of informative missingness. You will then need to model the mechanism of missingness, and maybe model the "second highest bid" as a response, given some covariables (which might include the winning bid). From there you can try to build a custom model. You can google for "informative missingness" to get some ideas.
How to cope with missing data in logistic regression?
I am afraid you cannot expect to find some "canned" solution to your problem. Most methods for handling missing data assumes "missing at random" or even "missing completely at random" (you can google
How to cope with missing data in logistic regression? I am afraid you cannot expect to find some "canned" solution to your problem. Most methods for handling missing data assumes "missing at random" or even "missing completely at random" (you can google those terms!). Your problem seems definitely to be a problem of informative missingness. Then you will need to model the mechanism of missingness, and maybe model the "second highest bid" as a response, given some covariables (which might include the winning bid). From there you can try to build a custom model. You can google for "informative missingness" to get some ideas.
How to cope with missing data in logistic regression? I am afraid you cannot expect to find some "canned" solution to your problem. Most methods for handling missing data assumes "missing at random" or even "missing completely at random" (you can google
49,385
How to cope with missing data in logistic regression?
@Kjetil gave a good answer. One possible simple alternative, if you have enough auctions, is to run two models: one with the data that has both the highest and second-highest bids, and one that has just the highest. An advantage of this approach is that each model will be considerably simpler than a full model with both. A disadvantage is that you won't be able to use the second-highest bid at all unless you actually have it.
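A minimal sketch of the two-model idea with simulated data; the variable names here (highest, second, won) are hypothetical stand-ins for the real auction fields:
set.seed(7)
n <- 500
highest <- rexp(n, 1 / 100)                                 # winning bid
second  <- ifelse(runif(n) < 0.6, highest * runif(n), NA)   # ~40% missing second bids
won     <- rbinom(n, 1, plogis(-2 + 0.02 * highest))        # made-up outcome
auctions <- data.frame(won, highest, second)

fit_high <- glm(won ~ highest, family = binomial, data = auctions)
fit_both <- glm(won ~ highest + second, family = binomial,
                data = subset(auctions, !is.na(second)))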
How to cope with missing data in logistic regression?
@Kjetil gave a good answer. One possible simple alternative, if you have enough auctions, is to run two models: One with the data that has both highest and second highest and one that has just the hig
How to cope with missing data in logistic regression? @Kjetil gave a good answer. One possible simple alternative, if you have enough auctions, is to run two models: One with the data that has both highest and second highest and one that has just the highest. An advantage of this approach would be that each model will be considerably simpler than a full model with both. But a disadvantage is that you won't be able to use the 2nd highest bid at all unless you actually have it.
How to cope with missing data in logistic regression? @Kjetil gave a good answer. One possible simple alternative, if you have enough auctions, is to run two models: One with the data that has both highest and second highest and one that has just the hig
49,386
Confidence interval for poisson distributed data
This answer is based on the clarification offered in comments: I'd like to make a statement such as ... "I am 68% sure the mean is between $3.1βˆ’Οƒ_βˆ’$ and $3.1+Οƒ_+$", and I want to calculate $Οƒ_+$ and $Οƒ_βˆ’$. I think that at least in the physics world this is called a confidence interval. Let's take it as given that in response to my question "confidence interval for what?" you responded that you want a confidence interval for the mean (and, as made clear, not some other interval for values from the distribution). There's one issue to clear up first - "I am 68% sure the mean is between" isn't really the usual interpretation placed on a confidence interval. Rather, it's that if you repeated the procedure that generated the interval many times, 68% of such intervals would contain the parameter. Now to address the confidence interval for the mean. I agree with your calculation of mean and sd of the data: > x=c(1,2,3,5,1,2,2,3,7,2,3,4,1,5,7,6,4,1,2,2,3,9,2,1,2,2,3) > mean(x);sd(x) [1] 3.148148 [1] 2.106833 However, the mean doesn't have the same sd as the population the data was drawn from. The standard error of the mean is $\sigma/\sqrt{n}$. We could estimate that from the sample sd (though if the data were truly Poisson, this isn't the most efficient method): > sd(x)/sqrt(length(x)) [1] 0.4054603 If we assumed that the sample mean was approximately normally distributed (but did not take advantage of the possible Poisson assumption for the original data), and assumed that $\sigma=s$ (invoking Slutsky, in effect), then an approximate 68% interval for the mean would be $3.15\pm 0.41$. However, the sample isn't really large enough for Slutsky. A better interval would take account of the uncertainty in $\hat \sigma$, which is to say, a 68% t$_{26}$-interval for the mean would be $3.15\pm 1.013843\times 0.41$, which is just a fraction wider. Now, as for whether the sample size is large enough to apply the normal theory CI we just used, that depends on your criteria. Simulations at similar Poisson means (in particular, ones deliberately chosen to be somewhat smaller than the observed one) at this sample size suggest that using a t-interval will work quite well for similar Poisson rates and 27 observations or more. If we take account of the fact that the data are (supposedly) Poisson, we can get a more efficient estimate of the standard deviation and an interval for $\mu$, but if there's any risk the Poisson assumption could be wrong - a chance of overdispersion caused by some heterogeneity of Poisson parameters, say - then the t-interval would probably be better. Nevertheless, we should consider that specific question - "how to get a confidence interval for the population mean of Poisson variables" -- but this more specific question has already been answered here on CV - for example, see the fine answers here.
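For concreteness, a small R sketch of the 68% t-interval described above (just the textbook formula applied to the posted data; it reproduces the $3.15\pm 1.0138\times 0.41$ figure):
x  <- c(1,2,3,5,1,2,2,3,7,2,3,4,1,5,7,6,4,1,2,2,3,9,2,1,2,2,3)
se <- sd(x) / sqrt(length(x))                          # standard error of the mean
mean(x) + qt(c(0.16, 0.84), df = length(x) - 1) * se   # 68% t-interval for the mean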
Confidence interval for poisson distributed data
This answer is based on the clarification offered in comments: I'd like to make a statement such as ... "I am 68% sure the mean is between $3.1βˆ’Οƒ_βˆ’$ and $3.1+Οƒ_+$", and I want to calculate $Οƒ_+$ and
Confidence interval for poisson distributed data This answer is based on the clarification offered in comments: I'd like to make a statement such as ... "I am 68% sure the mean is between $3.1βˆ’Οƒ_βˆ’$ and $3.1+Οƒ_+$", and I want to calculate $Οƒ_+$ and $Οƒ_βˆ’$. I think that at least in the physics world this is called a confidence interval. Let's take it as given that in response to my question "confidence interval for what?" you responded that want a confidence interval for the mean (and as made clear, not some other interval for values from the distribution). There's one issue to clear up first - "I am 68% sure the mean is between" isn't really the usual interpretation placed on a confidence interval. Rather, it's that if you repeated the procedure that generated the interval many times, 68% of such intervals would contain the parameter. Now to address the confidence interval for the mean. I agree with your calculation of mean and sd of the data: > x=c(1,2,3,5,1,2,2,3,7,2,3,4,1,5,7,6,4,1,2,2,3,9,2,1,2,2,3) > mean(x);sd(x) [1] 3.148148 [1] 2.106833 However, the mean doesn't have the same sd as the population the data was drawn from. The standard error of the mean is $\sigma/\sqrt{n}$. We could estimate that from the sample sd (though if the data were truly Poisson, this isn't the most efficient method): > sd(x)/sqrt(length(x)) [1] 0.4054603 If we assumed that the sample mean was approximately normally distributed (but did not take advantage of the possible Poisson assumption for the original data), and assumed that $\sigma=s$ (invoking Slutsky, in effect) then an approximate 68% interval for the mean would be $3.15\pm 0.41$. However, the sample isn't really large enough for Slutsky. A better interval would take account of the uncertainty in $\hat \sigma$, which is to say, a 68% t$_{26}$-interval for the mean would be $3.15\pm 1.013843\times 0.41$ which is just a fraction wider. Now, as for whether the sample size is large enough to apply the normal theory CI we just used, that depends on your criteria. Simulations at similar Poisson means (in particular, ones deliberately chosen to be somewhat smaller than the observed one) at this sample size suggest that using a t-interval will work quite well for similar Poisson rates and 27 observations or more. If we take account of the fact that the data are (supposedly) Poisson, we can get a more efficient estimate of the standard deviation and an interval for $\mu$, but if there's any risk the Poisson assumption could be wrong - a chance of overdispersion caused by some homogeneity of Poisson parameters, say - then the t-interval would probably be better. Nevertheless, we should consider that specific question - "how to get a confidence interval for the population mean of Poisson variables" -- but this more specific question has already been answered here on CV - for example, see the fine answers here.
Confidence interval for poisson distributed data This answer is based on the clarification offered in comments: I'd like to make a statement such as ... "I am 68% sure the mean is between $3.1βˆ’Οƒ_βˆ’$ and $3.1+Οƒ_+$", and I want to calculate $Οƒ_+$ and
49,387
Confidence interval for poisson distributed data
It was mentioned in the comments on the original post but here it is more explicitly. The bootstrap is simple to use and has a ton of nice asymptotic theory behind it, e.g. Shao and Tu, 1995. Here's some R code which does what I think you want: the_data = c(1,2,3,5,1,2,2,3,7,2,3,4,1,5,7,6,4,1,2,2,3,9,2,1,2,2,3) n_resamples = 1000 n_data = length(the_data) bootstrap_mean = NULL for(ii in 1:n_resamples){ bootstrap_sample = the_data[sample(1:n_data, size = n_data, replace = T)] bootstrap_mean = c(bootstrap_mean, mean(bootstrap_sample)) } plot(density(bootstrap_mean), main = "") ## 68% bootstrap confidence interval lower_bound = 0.16 upper_bound = 0.84 quantile(bootstrap_mean, probs = c(lower_bound, upper_bound)) For one run I get: 16% 84% 2.740741 3.592593
Confidence interval for poisson distributed data
It was mentioned in the comments on the original post but here it is more explicitly. The bootstrap is simple to use and has a ton of nice asymptotic theory behind it, e.g. Shao and Tu, 1995. Here's
Confidence interval for poisson distributed data It was mentioned in the comments on the original post but here it is more explicitly. The bootstrap is simple to use and has a ton of nice asymptotic theory behind it, e.g. Shao and Tu, 1995. Here's some R code which does what I think you want: the_data = c(1,2,3,5,1,2,2,3,7,2,3,4,1,5,7,6,4,1,2,2,3,9,2,1,2,2,3) n_resamples = 1000 n_data = length(the_data) bootstrap_mean = NULL for(ii in 1:n_resamples){ bootstrap_sample = the_data[sample(1:n_data, size = n_data, replace = T)] bootstrap_mean = c(bootstrap_mean, mean(bootstrap_sample)) } plot(density(bootstrap_mean), main = "") ## 68% bootstrap confidence interval lower_bound = 0.16 upper_bound = 0.84 quantile(bootstrap_mean, probs = c(lower_bound, upper_bound)) For one run I get: 16% 84% 2.740741 3.592593
Confidence interval for poisson distributed data It was mentioned in the comments on the original post but here it is more explicitly. The bootstrap is simple to use and has a ton of nice asymptotic theory behind it, e.g. Shao and Tu, 1995. Here's
49,388
Confidence interval for poisson distributed data
For skewed distributions the confidence interval is tricky. One way to proceed is by taking equal quantiles from the tails. So, for instance, if you wish to have a 95% confidence interval, you'd take the 2.5% and 97.5% quantiles. Your comment about $\pm\sigma$ being a 68% CI in physics is only true when you assume a normal-ish distribution. The Poisson distribution is quite unlike the normal: it is asymmetric (skewed) and has a lower bound, as you noted. If you really want 68%, then take the 16% and 84% quantiles.
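As an illustration of the equal-tail idea (my own sketch, not part of the original answer): if one takes the Poisson assumption at face value, an exact equal-tailed (Garwood-type) interval for the mean can be read off gamma quantiles of the total count:
x <- c(1,2,3,5,1,2,2,3,7,2,3,4,1,5,7,6,4,1,2,2,3,9,2,1,2,2,3)
S <- sum(x); n <- length(x)                    # total count and sample size
alpha <- 1 - 0.68
c(lower = qgamma(alpha / 2, S) / n,            # equal-tailed bounds for the Poisson mean
  upper = qgamma(1 - alpha / 2, S + 1) / n)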
Confidence interval for poisson distributed data
For skewed distributions the confidence interval is tricky. One way to proceed is by having equal quantiles from tails. So, for instance, if you wish to have 95% confidence interval, you'd get 2.5% an
Confidence interval for poisson distributed data For skewed distributions the confidence interval is tricky. One way to proceed is by having equal quantiles from tails. So, for instance, if you wish to have 95% confidence interval, you'd get 2.5% and 97.5% quantiles. Your comment about $\pm\sigma$ being 68% CI in physics is only true when you assume normal-ish distribution. The Poisson distribution is quite not like normal, it's asymmetric(skewed) and has a lower bound as you noted. If you really want 68%, then get 15% and 84% quantiles.
Confidence interval for poisson distributed data For skewed distributions the confidence interval is tricky. One way to proceed is by having equal quantiles from tails. So, for instance, if you wish to have 95% confidence interval, you'd get 2.5% an
49,389
Is there a statistical measure for how much a variable fluctuates over time?
The variance of the first derivative would mean looking for variation in the derivative of your variable. Rather, I would recommend taking the derivative of your variable with respect to time and examining the result directly. This result is the rate of change of your variable with respect to time: rate_of_change = d(variable)/d(time) Note: this rate_of_change should be calculated by taking the difference between consecutive values of the variable and then dividing that number by the difference between the times of those two samples (take both differences in the same order). This way the rate_of_change will reflect instantaneous fluctuations in the variable over time. Further, you can explore taking the population standard deviation of the rate of change, and then a moving standard deviation as well. Additionally, one can take a moving average of the variable over a certain time window and then do the same rate-of-change analysis as mentioned earlier. These new rate_of_change values would represent fluctuations in the variable of interest over a predefined duration of time: the larger the duration for the moving average, the more global the measured rate of change.
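A small base-R sketch of this recipe on a made-up series (the series, the sample times and the 20-point window are illustrative assumptions):
set.seed(1)
t_obs <- cumsum(runif(200, 0.5, 1.5))        # (possibly irregular) sample times
x     <- cumsum(rnorm(200))                  # the variable of interest

rate_of_change <- diff(x) / diff(t_obs)      # instantaneous rate of change

sd(rate_of_change)                           # overall size of the fluctuations

# moving standard deviation of the rate of change over a 20-point window
roll_sd <- sapply(seq_len(length(rate_of_change) - 19),
                  function(i) sd(rate_of_change[i:(i + 19)]))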
Is there a statistical measure for how much a variable fluctuates over time?
Variance of the first derivative would mean looking for variations in derivative of your variable. Rather, I would recommend taking derivative of your variable with respect to time and see the results
Is there a statistical measure for how much a variable fluctuates over time? Variance of the first derivative would mean looking for variations in derivative of your variable. Rather, I would recommend taking derivative of your variable with respect to time and see the results. This result would be rate of change of your variable with respect to time. rate_of_change = d(variable)/d(time) Note : This rate_of_change should be calculated by taking difference between consecutive values of the variable and then dividing this number by difference between time of those two samples(take both the differences in same order). This way the rate_of_change will reflect instantaneous fluctuations in the variable over time. Further you can explore taking population standard deviation of the rate of change and then taking a moving standard deviation as well. Additionally one can take moving average of the variable across certain time duration and then do the same rate of change analysis as mentioned earlier. This new rate_of_change values would represent fluctuations in the variable of interest over predefined duration of time. Larger the duration for moving average, more global would be the rate of change in variable.
Is there a statistical measure for how much a variable fluctuates over time? Variance of the first derivative would mean looking for variations in derivative of your variable. Rather, I would recommend taking derivative of your variable with respect to time and see the results
49,390
Convergence in distribution of sum implies marginal convergence?
It is a particular case of the accompanying law theorem. Let $f$ be a bounded uniformly continuous function on $\mathbf R$. Since $$|E[f(X_n)]-E[f(X)]|\leqslant |E[f(X_n)]-E[f(X_n+cY)]|+|E[f(X_n+cY)]-E[f(X+cY)]|+|E[f(X+cY)-E[f(X)]]|$$ and $X_n+cY\to X+cY$ in distribution, we obtain for each positive $c$, $$\limsup_{n\to +\infty}|E[f(X_n)]-E[f(X)]|\leqslant \limsup_{n\to +\infty} |E[f(X_n)]-E[f(X_n+cY)]|+|E[f(X+cY)-E[f(X)]]|.$$ Fix a positive $\varepsilon$ and pick $\delta$ such that $|f(x+y)-f(y)|\leqslant \varepsilon$ if $|x|\lt\delta$. Then $$|f(X_n)-f(X_n+cY)|\chi_{\{|cY|\lt \delta\}}\leqslant\varepsilon,\mbox{ and }$$ $$E\left[|f(X_n)-f(X_n+cY)|\chi_{\{|cY|\geqslant \delta\}}\right]\leqslant 2\sup_t|f(t)|\cdot \mathbb P\{|Y|\geqslant \delta/c\},$$ and we deduce that for each positive $\varepsilon$ and each positive $c$, $$\limsup_{n\to +\infty}|E[f(X_n)]-E[f(X)]|\leqslant \varepsilon+2\sup_t|f(t)|\cdot P\{|Y|\geqslant \delta/c\}+|E[f(X+cY)-E[f(X)]]|.$$ Letting $c\to 0$ then $\varepsilon\to 0$ we get $X_n\to X$ in distribution.
Convergence in distribution of sum implies marginal convergence?
It is a particular case of the accompanying law theorem. Let $f$ be a bounded uniformly continuous function on $\mathbf R$. Since $$|E[f(X_n)]-E[f(X)]|\leqslant |E[f(X_n)]-E[f(X_n+cY)]|+|E[f(X_n+cY)
Convergence in distribution of sum implies marginal convergence? It is a particular case of the accompanying law theorem. Let $f$ be a bounded uniformly continuous function on $\mathbf R$. Since $$|E[f(X_n)]-E[f(X)]|\leqslant |E[f(X_n)]-E[f(X_n+cY)]|+|E[f(X_n+cY)]-E[f(X+cY)]|+|E[f(X+cY)-E[f(X)]]|$$ and $X_n+cY\to X+cY$ in distribution, we obtain for each positive $c$, $$\limsup_{n\to +\infty}|E[f(X_n)]-E[f(X)]|\leqslant \limsup_{n\to +\infty} |E[f(X_n)]-E[f(X_n+cY)]|+|E[f(X+cY)-E[f(X)]]|.$$ Fix a positive $\varepsilon$ and pick $\delta$ such that $|f(x+y)-f(y)|\leqslant \varepsilon$ if $|x|\lt\delta$. Then $$|f(X_n)-f(X_n+cY)|\chi_{\{|cY|\lt \delta\}}\leqslant\varepsilon,\mbox{ and }$$ $$E\left[|f(X_n)-f(X_n+cY)|\chi_{\{|cY|\geqslant \delta\}}\right]\leqslant 2\sup_t|f(t)|\cdot \mathbb P\{|Y|\geqslant \delta/c\},$$ and we deduce that for each positive $\varepsilon$ and each positive $c$, $$\limsup_{n\to +\infty}|E[f(X_n)]-E[f(X)]|\leqslant \varepsilon+2\sup_t|f(t)|\cdot P\{|Y|\geqslant \delta/c\}+|E[f(X+cY)-E[f(X)]]|.$$ Letting $c\to 0$ then $\varepsilon\to 0$ we get $X_n\to X$ in distribution.
Convergence in distribution of sum implies marginal convergence? It is a particular case of the accompanying law theorem. Let $f$ be a bounded uniformly continuous function on $\mathbf R$. Since $$|E[f(X_n)]-E[f(X)]|\leqslant |E[f(X_n)]-E[f(X_n+cY)]|+|E[f(X_n+cY)
49,391
Convergence in distribution of sum implies marginal convergence?
I may prove it under the assumption that $\mathrm{E}\left|Y\right|<\infty$. In order to prove $X_{n}\rightarrow_{d}X$, we wish to show that $\mathrm{E}f\left(X_{n}\right)\rightarrow\mathrm{E}f\left(X\right)$ for all bounded, Lipschitz functions $f$ (this is Portmanteau lemma). Hereafter let $f$ be an arbitrary bounded Lipschitz function satisfying $\left|f\left(x\right)-f\left(y\right)\right|\leq L\left|x-y\right|$ for some finite constant $L$ ($L$ could depend on $f$). We have \begin{align*} & \left|\mathrm{E}f\left(X_{n}\right)-\mathrm{E}f\left(X\right)\right|\\ = & \left|\mathrm{E}f\left(X_{n}\right)-\mathrm{E}f\left(X_{n}+cY\right)+\mathrm{E}f\left(X_{n}+cY\right)-\mathrm{E}f\left(X+cY\right)+\mathrm{E}f\left(X+cY\right)-\mathrm{E}f\left(X\right)\right|\\ \leq & \left|\mathrm{E}f\left(X_{n}\right)-\mathrm{E}f\left(X_{n}+cY\right)\right|+\left|\mathrm{E}f\left(X_{n}+cY\right)-\mathrm{E}f\left(X+cY\right)\right|+\left|\mathrm{E}f\left(X+cY\right)-\mathrm{E}f\left(X\right)\right|. \end{align*}The term $\mathrm{E}f\left(X_{n}+cY\right)-\mathrm{E}f\left(X+cY\right)\rightarrow0$ for $X_{n}+cY\rightarrow_{d}X+cY$. Moreover, \begin{eqnarray*} \left|\mathrm{E}f\left(X_{n}\right)-\mathrm{E}f\left(X_{n}+cY\right)\right|\leq\mathrm{E}\left|f\left(X_{n}\right)-f\left(X_{n}+cY\right)\right| & \leq & Lc\mathrm{E}\left|Y\right| \end{eqnarray*} for every positive constant $c$. Let $c\downarrow0$, $Lc\mathrm{E}\left|Y\right|\rightarrow0$ when $\mathrm{E}\left|Y\right|<\infty$. Thus, we conclude $\left|\mathrm{E}f\left(X_{n}\right)-\mathrm{E}f\left(X_{n}+cY\right)\right|\rightarrow0$. Similarly, $\left|\mathrm{E}f\left(X+cY\right)-\mathrm{E}f\left(X\right)\right|\rightarrow0$. Hence we have shown $\left|\mathrm{E}f\left(X_{n}\right)-\mathrm{E}f\left(X\right)\right|\rightarrow0$. Thus, $\mathrm{E}f\left(X_{n}\right)\rightarrow\mathrm{E}f\left(X\right)$ and $X_{n}\rightarrow_{d}X$.
Convergence in distribution of sum implies marginal convergence?
I may prove it under the assumption that $\mathrm{E}\left|Y\right|<\infty$. In order to prove $X_{n}\rightarrow_{d}X$, we wish to show that $\mathrm{E}f\left(X_{n}\right)\rightarrow\mathrm{E}f\left(X\
Convergence in distribution of sum implies marginal convergence? I may prove it under the assumption that $\mathrm{E}\left|Y\right|<\infty$. In order to prove $X_{n}\rightarrow_{d}X$, we wish to show that $\mathrm{E}f\left(X_{n}\right)\rightarrow\mathrm{E}f\left(X\right)$ for all bounded, Lipschitz functions $f$ (this is Portmanteau lemma). Hereafter let $f$ be an arbitrary bounded Lipschitz function satisfying $\left|f\left(x\right)-f\left(y\right)\right|\leq L\left|x-y\right|$ for some finite constant $L$ ($L$ could depend on $f$). We have \begin{align*} & \left|\mathrm{E}f\left(X_{n}\right)-\mathrm{E}f\left(X\right)\right|\\ = & \left|\mathrm{E}f\left(X_{n}\right)-\mathrm{E}f\left(X_{n}+cY\right)+\mathrm{E}f\left(X_{n}+cY\right)-\mathrm{E}f\left(X+cY\right)+\mathrm{E}f\left(X+cY\right)-\mathrm{E}f\left(X\right)\right|\\ \leq & \left|\mathrm{E}f\left(X_{n}\right)-\mathrm{E}f\left(X_{n}+cY\right)\right|+\left|\mathrm{E}f\left(X_{n}+cY\right)-\mathrm{E}f\left(X+cY\right)\right|+\left|\mathrm{E}f\left(X+cY\right)-\mathrm{E}f\left(X\right)\right|. \end{align*}The term $\mathrm{E}f\left(X_{n}+cY\right)-\mathrm{E}f\left(X+cY\right)\rightarrow0$ for $X_{n}+cY\rightarrow_{d}X+cY$. Moreover, \begin{eqnarray*} \left|\mathrm{E}f\left(X_{n}\right)-\mathrm{E}f\left(X_{n}+cY\right)\right|\leq\mathrm{E}\left|f\left(X_{n}\right)-f\left(X_{n}+cY\right)\right| & \leq & Lc\mathrm{E}\left|Y\right| \end{eqnarray*} for every positive constant $c$. Let $c\downarrow0$, $Lc\mathrm{E}\left|Y\right|\rightarrow0$ when $\mathrm{E}\left|Y\right|<\infty$. Thus, we conclude $\left|\mathrm{E}f\left(X_{n}\right)-\mathrm{E}f\left(X_{n}+cY\right)\right|\rightarrow0$. Similarly, $\left|\mathrm{E}f\left(X+cY\right)-\mathrm{E}f\left(X\right)\right|\rightarrow0$. Hence we have shown $\left|\mathrm{E}f\left(X_{n}\right)-\mathrm{E}f\left(X\right)\right|\rightarrow0$. Thus, $\mathrm{E}f\left(X_{n}\right)\rightarrow\mathrm{E}f\left(X\right)$ and $X_{n}\rightarrow_{d}X$.
Convergence in distribution of sum implies marginal convergence? I may prove it under the assumption that $\mathrm{E}\left|Y\right|<\infty$. In order to prove $X_{n}\rightarrow_{d}X$, we wish to show that $\mathrm{E}f\left(X_{n}\right)\rightarrow\mathrm{E}f\left(X\
49,392
How do you create variables reflecting the lead and lag impact of holidays / calendar effects in a time-series analysis?
Create a predictor variable (zeroes except for a 1 at the beginning of the exceptional period), then specify a polynomial of order k, where k is the expected length of the response. This will form the long response that you are looking for. Make sure that you also accommodate individually tailored windows of response around each major event, as well as level shifts or local time trends.
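A sketch of one way to build such regressors in R (an Almon-style polynomial lag on a pulse variable); the series length, event date, response length and polynomial order are made-up values for illustration:
n <- 365
pulse <- rep(0, n)
pulse[100] <- 1                                             # 1 at the start of the event

k <- 7                                                      # expected length of the response
lags <- sapply(0:k, function(j) c(rep(0, j), pulse)[1:n])   # pulse lagged 0..k days
poly_terms <- sapply(0:2, function(m) (0:k)^m)              # j^0, j^1, j^2 for lags j = 0..k
X_event <- lags %*% poly_terms                              # polynomial-constrained lag regressors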
How do you create variables reflecting the lead and lag impact of holidays / calendar effects in a t
Create a predictor variable (zeroes except a 1 at the beginning of the exceptional period and then specify a poylynomial of order k where k is the expected length. This will form the long response tha
How do you create variables reflecting the lead and lag impact of holidays / calendar effects in a time-series analysis? Create a predictor variable (zeroes except a 1 at the beginning of the exceptional period and then specify a poylynomial of order k where k is the expected length. This will form the long response that you are looking for. Make sure that you also accommodate individually tailored windows of response around each major event and level shifts or local time trends.
How do you create variables reflecting the lead and lag impact of holidays / calendar effects in a t Create a predictor variable (zeroes except a 1 at the beginning of the exceptional period and then specify a poylynomial of order k where k is the expected length. This will form the long response tha
49,393
Training and testing on Unbalanced Data Set
Resampling should be applied to the training set only, and then you should test on a subset having the same class distribution as the population. If you oversample the minority class in the test set, you may get a higher hit rate, i.e. better performance than in the real case, where positive instances are rare.
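A minimal base-R sketch of that split (simulated data; all names are placeholders): the minority class is oversampled in the training set only, and the model is scored on a test set that keeps the natural class distribution:
set.seed(42)
n <- 1000
dat <- data.frame(x1 = rnorm(n), x2 = rnorm(n),
                  y  = factor(rbinom(n, 1, 0.1)))            # ~10% positives

idx_train <- sample(n, 0.7 * n)
train <- dat[idx_train, ]
test  <- dat[-idx_train, ]                                   # left at the natural distribution

minority <- which(train$y == "1")
extra    <- sample(minority, sum(train$y == "0") - length(minority), replace = TRUE)
train_bal <- rbind(train, train[extra, ])                    # oversampled (balanced) training set

fit  <- glm(y ~ x1 + x2, data = train_bal, family = binomial)
pred <- predict(fit, newdata = test, type = "response")      # evaluate on the untouched test set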
Training and testing on Unbalanced Data Set
Reampling should be applied to training set only, and then you should test on a subset having the same class distribution as the population. If you oversample the minority class in the test set, you m
Training and testing on Unbalanced Data Set Reampling should be applied to training set only, and then you should test on a subset having the same class distribution as the population. If you oversample the minority class in the test set, you may get a higher hit rate i.e. better performances than in the real case, where positive instances are rare.
Training and testing on Unbalanced Data Set Reampling should be applied to training set only, and then you should test on a subset having the same class distribution as the population. If you oversample the minority class in the test set, you m
49,394
How to speed up Kernel density estimation
One trick which I used when I implemented KDEs is to limit the effect of the kernel to nearby values. Suppose you have a sample $x = \{x_1, x_2, \dots, x_m\}$, and some points where you want to estimate the density, $k = \{k_1, k_2, \dots, k_n\}$. Now, without loss of generality, we can consider the $k$ values to be sorted. If not, simply sort them and adjust the indexes. Consider now what a KDE does. Basically, for each $x_i$ you have a probability mass which you want to spread with a symmetric kernel function to the left and to the right, and you are only interested in evaluating that spread of probability mass at the values of $k$. Obviously, if you evaluate the kernel function at all values of $k$, then you will end up with an execution time of $O(|k||x|)$ (not $O(|k|^{|n|})$ as you suggested; under that formula, 20 samples with 20 estimation points would be $20^{20}$ = 104 septillion 857 sextillion 600 quintillion operations). My trick is to not spread this probability mass to all the points, but only to some of them: the ones which are closest to the given $x_i$ for which the kernel function is evaluated. For some kernel functions, like the uniform, triangular or Epanechnikov, this is a natural concept, since beyond some distance to the left or to the right the spread probability mass equals $0$, so it does not affect the kernel density estimate at all. For other kernel functions, like the Gaussian, I established some limits. For example, for the Gaussian I established the limit as a number of standard deviations. I do not remember the exact value (you can take a look into my code since it is publicly available), but outside a range of some standard deviations to the left and to the right of $x_i$ I considered the contributed values to be $0$. Now, the implementation idea is somewhat straightforward to follow in practice. I enriched my kernel functions with a minimum value and a maximum value (in other words, with a distance to the left and to the right of the evaluated $x_i$), and, using binary search on the vector $k$, I find the range of values from $k$ which have to be updated. Obviously this can produce errors. But if you have enough points, in my experience this is not a problem in practice. And if you implement it well, using the range around $x_i$ as a parameter (I did not do it, but I might if I ever need it), you will have a parameter which can be used as a compromise between the precision of the KDE evaluation and speed. Note also that for some kernel functions this parameter has no effect, because the KDE remains exact.
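A compact R sketch of this truncation idea for a Gaussian kernel (my own illustration, not the author's publicly available code); r controls how many bandwidths away a sample is still allowed to contribute, i.e. the speed/accuracy trade-off mentioned above:
truncated_kde <- function(x, k, h, r = 4) {
  k <- sort(k)                               # evaluation points, sorted once
  dens <- numeric(length(k))
  for (xi in x) {
    lo <- findInterval(xi - r * h, k) + 1    # binary search for the affected range
    hi <- findInterval(xi + r * h, k)
    if (lo <= hi) {
      idx <- lo:hi
      dens[idx] <- dens[idx] + dnorm((k[idx] - xi) / h)   # spread mass only locally
    }
  }
  dens / (length(x) * h)
}
# toy usage
truncated_kde(rnorm(1000), seq(-4, 4, length.out = 512), h = 0.2)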
How to speed up Kernel density estimation
One trick which I used when I implemented KDEs is to limit the effect of kernel to some values. Suppose you have a sample $x = {x_1, x_2, .. , x_n}$, and some points where you want to estimate the ke
How to speed up Kernel density estimation One trick which I used when I implemented KDEs is to limit the effect of kernel to some values. Suppose you have a sample $x = {x_1, x_2, .. , x_n}$, and some points where you want to estimate the kernel $k = {k_1, k_2, .., k_n}$. Now, without loosing of generality, we can consider $k$ values as being sorted. If not, then you simply sort them and change indexes. Consider now what a KDE does. Basically, for each $x_i$ you have a probability mass which you want to spread with a symmetric kernel function to the left and to the right. You are interested only to evaluate that probability mass dispersion only onto the values of $k$. Obviously, if you evaluate this kernel function onto all values of $k$, than you will end up with an execution time of $O(|k||x|)$ (not $O(|k|^{|n|})$ as you suggested, consider 20 samples with 20 estimation points is $20^{20}$ = 104 septillion 857 sextillion 600 quintillion operations). My trick is to not disperse this probability mass to all the points, but only to some of them, the ones which are closer to the given $x_i$ for which the kernel function is evaluated. For some kernel density functions, like uniform, triangular, Epanechnikov this is a natural concept, since for some distance to the left or to the right, the spreaded probability mass equals $0$ so, it does not affect the kernel density estimation at all. For some other kernel functions like Gaussian I established some limits. For example for Gaussian I established the limit as a function of standard deviations. I do not remember the exact value (you can take a look into my code since is publicly available), but outside of a range of some standard deviations to the left of $x_0$ and to the right of it I considered contributed values as being $0$. Now, the implementation idea is somehow straightforward to follow in practice. I enriched my kernel functions with a minimum value and maximum value (in other words with a distance to the left or to the right of evaluated $x_i$), and using binary search on vector $k$, I find the range of values from $k$ which has to be updated. Obviously this can produce errors. But, if you have enough points the effect I found is not a problem in practice. And if you implement it well, using range around $x_i$ as a parameter (I did not do it, but I might if I will need that), you will have a parameter which can be use a a compromise between the precision of the evaluation of KDE and speed. Note also that for some kernel functions this parameter has no effect, because the KDE is precise.
How to speed up Kernel density estimation One trick which I used when I implemented KDEs is to limit the effect of kernel to some values. Suppose you have a sample $x = {x_1, x_2, .. , x_n}$, and some points where you want to estimate the ke
49,395
How to speed up Kernel density estimation
Suppose you have $x_1,\dots,x_m$ data points, called source points hereafter, and $z_1,\dots,z_n$ points you want to estimate at, called query points hereafter. A naive implementation of $$ \hat f(z_j)=\frac 1{mh}\sum_{i=1}^mK\left(\frac{x_i-z_j}{h}\right)$$ is then $\mathcal{O}(mn)$ if you want to evaluate it for all $n$ query points. Options in 1D In 1D, first sort both the source and query points in $\mathcal{O}(m\log m)$ and $\mathcal{O}(n\log n)$ time. Then pass through both simultaneously and do not evaluate pairs where $K\left((x_i-z_j)/h\right)<\epsilon$ for some small $\epsilon$. For the latter you do not need to check a lot of values, as the data is sorted and you pass from left to right. Thus, this step is $\mathcal O(m)$ or $\mathcal O(n)$ depending on which is largest. This is even exact in some cases where the kernel has finite support (like the Epanechnikov kernel). An even faster option if you have a substantial amount of data is to use binning. Options in > 1D One option in higher dimensions is the dual-tree method like the one suggested by Gray, Alexander G., and Andrew W. Moore. "Rapid Evaluation of Multiple Density Models." In AISTATS. 2003. One example is to use a k-d tree for both the source and query points. Then you traverse down both trees simultaneously, exploiting that: $K\left(\frac{x_i-z_j}{h}\right)$ is virtually zero for points in the current node of source points and query points. Thus, you can use an approximation when it is below a certain threshold. the distance between source points is small in some nodes of the source point tree. Thus, $K\left(\frac{x_i-z_j}{h}\right)$ is almost a constant for a node with query points sufficiently far away. Again, you can use an approximation when the distance is sufficiently small in the source node and the query node is far away. The above may not even introduce errors if the kernel has finite support. The algorithm will have $\mathcal{O}(m\log m)$ and $\mathcal{O}(n\log n)$ run-times (depending on which is largest). I have an example here of a very similar problem that occurs in particle smoothers. Here is an example of run-times of the naive method versus the dual-tree method ## method ## N Dual-tree Naive Dual-tree 1 ## 384 0.01341 0.0008045 0.00529 ## 768 0.04257 0.0028020 0.01418 ## 1536 0.07590 0.0103289 0.02291 ## 3072 0.08270 0.0395034 0.03428 ## 6144 0.07206 0.1071989 0.05398 ## 12288 0.03143 0.4850711 0.09752 ## 24576 0.05360 1.7199726 0.18629 ## 49152 0.10605 6.9812358 0.36645 ## 98304 0.19286 31.1321333 0.77592 ## 196608 0.39916 NA NA ## 393216 0.86169 NA NA ## 786432 1.80240 NA NA ## 1572864 3.66839 NA NA ## 3145728 8.10332 NA NA N in the above is $N=m=n$ and the run-times are in seconds. The last column is a single threaded version.
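The binning option mentioned above, sketched in R (my own illustration; the grid size B and the +/- 3h padding are arbitrary choices): samples are first counted into B grid bins, and the kernel is then evaluated on the grid only, giving roughly O(m + B^2) work (or O(B log B) with an FFT) instead of O(mn):
binned_kde <- function(x, B = 512, h = bw.nrd0(x)) {
  g <- seq(min(x) - 3 * h, max(x) + 3 * h, length.out = B)   # evaluation grid
  counts <- tabulate(findInterval(x, g), nbins = B)          # bin counts on the grid
  K <- outer(g, g, function(a, b) dnorm((a - b) / h))        # kernel evaluated on the grid only
  list(x = g, y = as.vector(K %*% counts) / (length(x) * h))
}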
How to speed up Kernel density estimation
Suppose you have $x_1,\dots,x_m$ data points, called source points hereafter, and $z_1,\dots,z_n$ points you want to estimate at, called query points hereafter. A naive implementation of $$ \hat f(z_j
How to speed up Kernel density estimation Suppose you have $x_1,\dots,x_m$ data points, called source points hereafter, and $z_1,\dots,z_n$ points you want to estimate at, called query points hereafter. A naive implementation of $$ \hat f(z_j)=\frac 1{mh}\sum_{i=1}^mK\left(\frac{x_i-z_j}{h}\right)$$ is then $\mathcal{O}(mn)$ if you want to evaluate it for all $n$ query points. Options in 1D In 1D then sort both the source and query points in $\mathcal{O}(m\log m)$ and $\mathcal{O}(n\log n)$ time. Then pass through both simultaneously and do not evaluate pairs where $K\left((x_i-z_j)/h\right)<\epsilon$ for some small $\epsilon$. For the latter you do not need to check a lot of values as the data is sorted and you pass from left to right. Thus, this step is $\mathcal O(m)$ or $\mathcal O(n)$ depending on which is largest. This is even exact in some cases where the kernel has finite support (like the Epanechnikov kernel). An even faster option if you have substantial amount of data is to use binning. Options in > 1D One option in higher dimensions is the dual-tree method like the one suggested by Gray, Alexander G., and Andrew W. Moore. "Rapid Evaluation of Multiple Density Models." In AISTATS. 2003. On example is to use a k-d tree for both the source and query points. Then you travers down both trees simultaneously exploiting that: $K\left(\frac{x_i-z_j}{h}\right)$ is virtually zero for points in the current node of source points and query points. Thus, you can use an approximation when it is below a certain threshold. the distance between source points are small in some nodes in the source point tree. Thus, $K\left(\frac{x_i-z_j}{h}\right)$ is almost a constant for a node with query points sufficiently far away. Again, you can use an approximation when the distance is sufficiently small in the source node and query node is far away. The above may not even introduce errors if the kernel has finite support. The algorithm will have $\mathcal{O}(m\log m)$ and $\mathcal{O}(n\log n)$ run-times (depending on which is largest). I have an example here of very similar problem that occurs in particle smoothers. Here is an example of run-times of the naive method versus the dual-tree method ## method ## N Dual-tree Naive Dual-tree 1 ## 384 0.01341 0.0008045 0.00529 ## 768 0.04257 0.0028020 0.01418 ## 1536 0.07590 0.0103289 0.02291 ## 3072 0.08270 0.0395034 0.03428 ## 6144 0.07206 0.1071989 0.05398 ## 12288 0.03143 0.4850711 0.09752 ## 24576 0.05360 1.7199726 0.18629 ## 49152 0.10605 6.9812358 0.36645 ## 98304 0.19286 31.1321333 0.77592 ## 196608 0.39916 NA NA ## 393216 0.86169 NA NA ## 786432 1.80240 NA NA ## 1572864 3.66839 NA NA ## 3145728 8.10332 NA NA N in the above is $N=m=n$ and the run-times are in seconds. The last column is a single threaded version.
How to speed up Kernel density estimation Suppose you have $x_1,\dots,x_m$ data points, called source points hereafter, and $z_1,\dots,z_n$ points you want to estimate at, called query points hereafter. A naive implementation of $$ \hat f(z_j
49,396
When is Maximum Likelihood the same as Least Squares
Levenberg-Marquardt is a general (nonlinear) optimization technique. It is not specific to LS, although that is probably its widest use. Looking at your referenced paper, they are (mostly) fitting state-space models with additive Normal errors. Forming the log-likelihood yields $\ln L(\theta|X=x) = K - n\ln \sigma -\frac{1}{2\sigma^2}\sum_j\left(x_j-f(\theta)\right)^2$, so for fixed $\sigma$, maximizing over $\theta$ is a nonlinear least squares problem.
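A quick numerical check in R (made-up model and data, not from the referenced paper) that maximizing the Gaussian likelihood and minimizing the sum of squares recover the same parameters:
set.seed(1)
tt <- seq(0, 10, length.out = 60)
y  <- 5 * exp(-0.3 * tt) + rnorm(60, sd = 0.2)        # f(theta) = a * exp(-b * t) plus noise

ssq   <- function(p) sum((y - p[1] * exp(-p[2] * tt))^2)                 # sum of squares
negll <- function(p) { s <- exp(p[3]); 60 * log(s) + ssq(p[1:2]) / (2 * s^2) }  # -log L

optim(c(4, 0.5), ssq)$par            # nonlinear least-squares estimates of (a, b)
optim(c(4, 0.5, 0), negll)$par[1:2]  # Gaussian ML estimates of (a, b): essentially the same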
When is Maximum Likelihood the same as Least Squares
Levenberg-Marquardt is a general (nonlinear) optimization technique. It is not specific to LS, although that is probably its widest use. Looking at your referenced paper, they are (mostly) fitting s
When is Maximum Likelihood the same as Least Squares Levenberg-Marquardt is a general (nonlinear) optimization technique. It is not specific to LS, although that is probably its widest use. Looking at your referenced paper, they are (mostly) fitting state-space models with additive Normal errors. Forming the likelihood function yields $\ln L(\theta|X=x) = K - \ln \sigma -\frac{1}{2\sigma^2}\sum\left(x_j-f(\theta)\right)^2$, which is a nonlinear least squares problem.
When is Maximum Likelihood the same as Least Squares Levenberg-Marquardt is a general (nonlinear) optimization technique. It is not specific to LS, although that is probably its widest use. Looking at your referenced paper, they are (mostly) fitting s
49,397
What is the meaning of a large p-value?
How you should 'use' the p-value depends on how you have designed your study with regard to the analyses you will run. I discuss two different philosophies about p-values in my answer here: When to use Fisher and Neyman-Pearson framework? You may find it helpful to read that. If you have, for example, run a power analysis and intend to use the p-value to make a final decision, you should not use 'close to the line' ('marginally significant') as a meaningful category. It is fine to use a different alpha than $0.05$ (such as $0.10$), but once you have decided on it and set your study up accordingly, you should stick with it. In addition, you cannot use a large p-value as evidence for the null hypothesis. I discussed that idea in my answer here: Why do statisticians say a non-significant result means "you cannot reject the null" as opposed to accepting the null hypothesis? Reading that answer may be helpful to you as well.
What is the meaning of a large p-value?
How you should 'use' the p-value depends on how you have designed your study with regard to the analyses you will run. I discuss two different philosophies about p-values in my answer here: When to u
What is the meaning of a large p-value? How you should 'use' the p-value depends on how you have designed your study with regard to the analyses you will run. I discuss two different philosophies about p-values in my answer here: When to use Fisher and Neyman-Pearson framework? You may find it helpful to read that. If you have, for example, run a power analysis and intend to use the p-value to make a final decision, you should not use close to the line ('marginally significant') as a meaningful category. It is fine to use a different alpha than $0.05$ (such as $0.10$), but once you decided on it and set your study up accordingly, you should stick with it. In addition, you cannot use a large p-value as evidence for the null hypothesis. I discussed that idea in my answer here: Why do statisticians say a non-significant result means "you cannot reject the null" as opposed to accepting the null hypothesis? Reading that answer may be helpful to you as well.
What is the meaning of a large p-value? How you should 'use' the p-value depends on how you have designed your study with regard to the analyses you will run. I discuss two different philosophies about p-values in my answer here: When to u
49,398
What is the meaning of a large p-value?
In my view, everything boils down to assumptions, that is, how well the model fits them. If it does agree with all of them, treat the p-value as a probability. Then you can compare a p-value of 0.06 with 0.99 by concluding which of the two is more likely. Also, a lot depends on circumstances: in some cases, marginal significance shouldn't be ignored because, as you stated, the rejection region can be set rather arbitrarily. But if the model satisfies the assumptions, then you should not seek to reject your hypothesis at some arbitrary level, but rather investigate how likely the outcome that you got is.
What is the meaning of a large p-value?
In my view, everything boils down to assumptions, that is, how well the models fits them. If it does agree with all of them, treat the p-value as a probability. Then you can compare p-value of 0.06 wi
What is the meaning of a large p-value? In my view, everything boils down to assumptions, that is, how well the models fits them. If it does agree with all of them, treat the p-value as a probability. Then you can compare p-value of 0.06 with 0.99 by concluding which of the two are more likely. Also, a lot depend on circumstances: in some cases, marginal significance shouldn't be ignored, because as you stated, rejection region can be set rather arbitrarily. But if the models satisfies the assumptions, then you should not seek to reject your hypothesis to some arbitrary level but rather investigate, how likely is the outcome that you got.
What is the meaning of a large p-value? In my view, everything boils down to assumptions, that is, how well the models fits them. If it does agree with all of them, treat the p-value as a probability. Then you can compare p-value of 0.06 wi
49,399
What is the meaning of a large p-value?
It is true that the acceptance range for the p-value of a hypothesis test is rather arbitrary, but nevertheless a lower p-value means that the test result can be accepted with more certainty, because the p-value is closely tied to the confidence interval for the estimate, so a narrower confidence interval should be regarded as more significant for the test.
What is the meaning of a large p-value?
It is true that the acceptance range for the p-value of a hypothesis test is rather arbitrary, but nevertheless a lower p-value means that the test result can be accepted with more certainty, because
What is the meaning of a large p-value? It is true that the acceptance range for the p-value of a hypothesis test is rather arbitrary, but nevertheless a lower p-value means that the test result can be accepted with more certainty, because the p-value essentially defines the confidence interval for the estimate, so a narrower confidence interval should be regarded more significant for the test.
What is the meaning of a large p-value? It is true that the acceptance range for the p-value of a hypothesis test is rather arbitrary, but nevertheless a lower p-value means that the test result can be accepted with more certainty, because
49,400
Overfitting in Genetic Programming
I can "train" my GP on some time period (let's say 2000-2010) and evaluate its fitness on another time period (let's say 2010-2014). However, won't this just introduce overfitting on the time period from 2010-2014? The GP will just gravitate towards finding good programs on the testing time period. Yes, that is a fundamental problem, triggered by a very fundamental difference between evolution (the basis for the heuristics behind GA) and GA for this kind of optimization: in evolution, each generation consists of new, unknown test cases. Recommendation specifically for iterative optimization schemes: restrict the number of generations as much as possible. The underlying problem that causes @DikranMarsupial to conclude that optimization is the root of all evil is that algorithms searching for the maximum performance will "skim" variance, i.e. they find solutions which are a lucky combination of training & test set splits and the model hyperparameters. For GAs, you'll accumulate such solutions in the elite, so you may want to switch off the elite or at least re-evaluate it with new cross-validation splits. One general recommendation for any kind of data-driven optimization that helps with these variance issues is: use a proper scoring rule for performance evaluation. Proper scoring rules are well behaved in the sense that they react continuously to continuous changes in the model, and they often also exhibit less variance than dichotomized (error-counting) loss functions. Two other basic cautionary steps I'd recommend: Compare some kind of "baseline" model's performance (e.g. a model where you set the hyperparameters as your knowledge of the data and application field suggests) against "good" results of the optimizer with a statistical test (e.g. McNemar's). You can also check beforehand whether you have a realistic chance of obtaining a model where you can actually prove an increase in performance (e.g. if your baseline model already gets 90% correct, is there any chance, given the total number of test cases you have, to prove that even observing 100% correct test cases actually corresponds to a better model?). As the GA performs a data-driven optimization, you anyway need to validate the final set of hyperparameters by another validation (loop). Calculate the same performance measure you use internally for the GA also for the outer validation: the difference is an indicator of overfitting. While this doesn't help if there was overfitting, you can at least detect that there are problems. Let me know if you need literature - in that case: do you read German? (I used a GA in my Diplom thesis - that's how I found out about all those problems...)
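A small illustration in R of the McNemar comparison suggested above (the counts are hypothetical): b is the number of held-out cases classified correctly only by the baseline model, c the number classified correctly only by the GA/GP-selected model, both evaluated on the same test cases:
b  <- 15; c_ <- 30                             # discordant counts (made-up numbers)
tab <- matrix(c(200, b, c_, 55), nrow = 2)     # 2 x 2 agreement table of correct/incorrect
mcnemar.test(tab)                              # tests whether the two models' error rates differ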
Overfitting in Genetic Programming
I can "train" my GP on some time period (let's say 2000-2010) and evaluate it's fitness on another time period (let's say 2010-2014). However, won't this just introduce overfitting on the time period
Overfitting in Genetic Programming I can "train" my GP on some time period (let's say 2000-2010) and evaluate it's fitness on another time period (let's say 2010-2014). However, won't this just introduce overfitting on the time period from 2010-2014. The GP will just gravitate towards finding good programs on the testing time period. Yes that is a fundamental problem which is triggered by a very fundamental difference between evolution (as basis for the heuristics behind GA) and GA for this kind of optimization: in evolution, each generation consists of new unknown test cases. Recommendation specifically for iterative optimization schemes: restrict the number of generations as much as possible The underlying problem that causes @DikranMarsupial to conclude that optimization is the root of all evil is that algorithms searching for the maximum performance will "skim" variance, i.e. they find solutions which are a lucky combination of training & test set splits and the model hyperparameters. For GAs, you'll accumulate such solutions in the elite, so you may want to switch off the elite or at least re-evaluate with new cross validation splits. One general recommendation for any kind of data driven optimization that helps with these variance issues is: Use a proper scoring rule for performance evaluation. Proper scoring rules are well behaved in the sense that they react continuously to continuous changes in the model, and they often also exhibit less variance than the dichtomized (error counting) loss functions. Two other basic cautionary steps I'd recommend are to check Compare some kind of "baseline" model's performance (e.g. a model where you set the hyperparameters as your knowledge of data and application field suggests) against "good" results of the optimizer with a statistical test (e.g. McNemar's) You can also check beforehand whether you have a realistic chance to obtain a model where you can actually prove an increase in performance (e.g. if your baseline model already has 90% correct, is there any chance given the total number of test cases you have even to prove that even observing 100 % correct test cases acctually corresponds to a better model? as the GA performs a data driven optimization, you anyways need to validate the final set of hyperparameters by another validation (loop). Calculate the same performance measure you use internally for the GA also for the outer validataion: the difference is an indicator of overfitting. While this doesn't help if there was overfitting, you can at least detect that there are problems. Let me know if you need literature - in that case: do you read German (I used a GA in my Diplom thesis - that's how I found out about all those problems...)
Overfitting in Genetic Programming I can "train" my GP on some time period (let's say 2000-2010) and evaluate it's fitness on another time period (let's say 2010-2014). However, won't this just introduce overfitting on the time period