Dataset columns:
  idx               int64    1 to 56k
  question          string   lengths 15 to 155
  answer            string   lengths 2 to 29.2k
  question_cut      string   lengths 15 to 100
  answer_cut        string   lengths 2 to 200
  conversation      string   lengths 47 to 29.3k
  conversation_cut  string   lengths 47 to 301
50,201
How to detect changes in amplitude?
Changes in variance occur quite often in time series. We employ a search process based upon R. Tsay's innovative work to find the point in time at which the variance of the errors has changed. This leads directly to Generalized Least Squares, otherwise known as Weighted Least Squares. His work appeared in the Journal of Forecasting, Vol. 7, pp. 1-20, 1988, and has been largely ignored by the major developers of commercial time series software, though not by all. In our world we become aware of innovative research and then implement the important improvements in analysis. This paper is very important. Note that one has to form an ARIMA model free of anomalies (pulses, level shifts, seasonal pulses), appropriately detrended/demeaned, and then employ his approach; otherwise false positives/false negatives would ensue. It would appear that you have at least two points in time where the variance (of the errors) has substantively changed.
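A hedged sketch of the general idea in R: a simple variance-ratio scan over candidate break points in the ARIMA residuals. This is not Tsay's 1988 procedure itself, just an illustration; the simulated series and all names are made up.

set.seed(1)
e   <- c(rnorm(120, sd = 1), rnorm(80, sd = 3))   # innovation sd jumps at t = 121
y   <- arima.sim(list(ar = 0.5), n = 200, innov = e)
res <- residuals(arima(y, order = c(1, 0, 0)))
n   <- length(res)
cand <- 30:(n - 30)                               # keep enough points on each side
ratio <- sapply(cand, function(k) {
  v1 <- var(res[1:k]); v2 <- var(res[(k + 1):n])
  max(v1, v2) / min(v1, v2)                       # two-sided variance ratio
})
cand[which.max(ratio)]                            # estimated time of the variance change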
50,202
How to get a confidence interval on parameters that were fitted using multiple functions and datasets at once?
The easiest thing for you to do would be to string the data sets together (rbind() in R jargon, append in Stata jargon), so that you have variables $x_1,y_1$ coming from the first data set, $x_2,y_2$ coming from the second data set, and an identifier $I$ that takes the value 0 for the first data set and 1 for the second data set. Construct your response variable $r=y_1 (1-I) + y_2 I$, which corresponds to $y_1$ or $y_2$ depending on which data set a particular record came from. Likewise, construct the expanded fit function $f=f(x_1,P) (1-I) + f'(x_2,P) I$. Then treat the whole thing as if it were a single data set. Example: the first data set is

x    y
2.3  5.6
1.7  4.5
0.8  2.2

The second data set is

x     y
11.0  25.4
 8.3  21.2
 7.5  19.3

The combined data set is

x1   y1   x2    y2    I  r
2.3  5.6   0     0    0  5.6
1.7  4.5   0     0    0  4.5
0.8  2.2   0     0    0  2.2
0    0    11.0  25.4  1  25.4
0    0     8.3  21.2  1  21.2
0    0     7.5  19.3  1  19.3

I would expect your standard errors and confidence intervals to be wrong with this method, though. You assume implicitly that the variance of the error terms is constant (a homoskedastic nonlinear regression model), and that is a strong assumption to make. You would be better off with a so-called sandwich formula, for which you need the second derivatives as well. You can probably find all the theory in Ron Gallant's book or in Bates and White's book, but I doubt you'd be able to handle these references. Bruce Hansen's lecture notes (Ch. 6 in particular) might be helpful, too.
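A hedged R sketch of this stacking idea, with invented functional forms (a slope $a$ shared by both data sets and an intercept $b$ that enters only the second function) purely to show the mechanics:

# Stack the two data sets and fit both mean functions at once; the shared parameter a
# is estimated from all six observations. The functional forms here are made up.
d1 <- data.frame(x = c(2.3, 1.7, 0.8),  y = c(5.6, 4.5, 2.2))
d2 <- data.frame(x = c(11.0, 8.3, 7.5), y = c(25.4, 21.2, 19.3))
combined <- data.frame(
  x1 = c(d1$x, 0, 0, 0),
  x2 = c(0, 0, 0, d2$x),
  I  = rep(0:1, each = 3),
  r  = c(d1$y, d2$y)
)
fit <- nls(r ~ (1 - I) * (a * x1) + I * (a * x2 + b),
           data = combined, start = list(a = 2, b = 1))
summary(fit)   # but see the caveat above: the homoskedasticity assumption may be wrong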
50,203
Books about incremental data clustering
In a field that is this actively researched, a book will be quickly out of date. Just as with regular clustering: most books still discuss only hierarchical clustering, k-means and EM. There is a book by C. C. Aggarwal, "Data Streams: Models and Algorithms". Chapter 2 is on clustering. It is better to check for recent publications in this field, in particular survey articles. There is one survey from 2009: Alireza Rezaei Mahdiraji, "Clustering data stream: A survey of algorithms". But you will want to look at newer methods than these, too.
50,204
Books about incremental data clustering
Not a book, but a paper on this area. Ailon, Nir, Ragesh Jaiswal & Claire Monteleoni. 2009. Streaming k-means approximation. In Advances in Neural Information Processing Systems.
50,205
Books about incremental data clustering
I recommend chapter 8, "Cluster Analysis: Basic Concepts and Algorithms". It provides a solid overview of clustering, including agglomerative clustering, which we can count as incremental clustering.
50,206
How should I use prop.test function?
If what you mean to test is whether more people reported an increase than the combined number who reported a decrease or no difference (which is what I think you mean), then your first version is closer to the correct one. Your null hypothesis in that case is that people choose 50-50 between "increase" and "no increase, or decrease", and you are open to evidence either way (greater or less than 50% choose increase). However, you actually are interested in testing the alternative hypothesis that >50% chose it, so you need a one-sided test. You can call this explicitly in prop.test by stating that your alternative hypothesis is only for p being greater than 0.5: prop.test(30, 36, p = 0.5, "greater"). It's worth pointing out, though, that there is nothing special about the 0.5 proportion here - why have you chosen it as the cut-off point for your alternative hypothesis? For example, why not have as a null hypothesis that a third of people choose each option? Or any other set of probabilities, perhaps based on an experiment of having people fill in the survey having received a placebo. Having said that, there is strong intuitive appeal in the 0.5 cut-off point, and there is no doubt your experiment shows statistically significant evidence that more than 50% do report an increase. The only question is, how does this compare to the percentage who report an increase under other circumstances? (That is not what you've asked here, so I won't worry about it.)
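For reference, a minimal hedged version of the calls in R; the exact test via binom.test is a common alternative for a sample of 36:

prop.test(30, 36, p = 0.5, alternative = "greater")    # one-sided test based on the normal approximation
binom.test(30, 36, p = 0.5, alternative = "greater")   # exact one-sided binomial test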
50,207
Counting distinct elements when memory is limited
Fellow CVer @rrenaud cited this paper as a key reference for estimating the number of unique values. He also suggested checking out the Good-Turing frequency estimator, which was developed to estimate the proportion of elements that occur n times, including the case where n = 1 (i.e., unique values). Here is a link to @rrenaud's answer to a similar question. Have fun!
50,208
How to predict a binary outcome with unbalanced repeated measures data?
The question seems to centre mainly on the concern that the data are not normally distributed. There is no requirement, condition, or assumption that any of the data be normally distributed.
50,209
How can I sample from a log transformed distribution using uniform distribution?
As I understand it, you've generally discretized to create a set of $n$ points, $x_1, \dots, x_n$, with probability $p_1, \dots, p_n$, and you then calculate the cumulative probabilities, say $c_i = \sum_{j=1}^i p_j$. So you can draw $U \sim \text{Uniform}(0,1)$ and then take $X = x_{i^*}$ where $i^* = \min_i \{i:c_i \ge U\}$, or something like that. But your current problem is that the $p_i$ are so small that you want to just work with $a_i = \log p_i$. One approach would be to sort the $a_i$ from largest to smallest and then calculate partial sums using something like the following addlog function, which calculates $\log(f + g)$ on the basis of $a = \log(f)$ and $b = \log(g)$:

addlog(a, b, THRESH=200.0) {
  if(b > a + THRESH) return(b);
  else if(a > b + THRESH) return(a);
  else return(a + log1p(exp(b-a)));
}

where log1p(x) returns log(1+x). But, really, I would think that you should focus on the $x_i$ for which $p_i$ is large enough that you don't need to worry about underflow, and neglect the $x_i$ with exceedingly small $p_i$. If all of the $p_i$ are small, then it seems that you should grid more coarsely. In most applications, it should be sufficient to discretize to 1000 or so values, I would think.
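To make the log-scale route concrete, here is a small hedged R sketch (not the poster's code) that normalizes the $a_i=\log p_i$ with a shift-by-the-maximum trick and then does the inverse-CDF draw:

# Draw from a discrete distribution given only (possibly unnormalized) log-probabilities a.
sample_from_logp <- function(x, a, n = 1) {
  b  <- a - max(a)                    # shift so the largest term is exp(0) = 1 (avoids underflow)
  p  <- exp(b) / sum(exp(b))          # normalized probabilities
  cp <- cumsum(p)
  u  <- runif(n)
  x[pmin(findInterval(u, cp) + 1, length(x))]   # i* = smallest i with cp[i] >= u
}

# toy example: log-densities shifted far into underflow territory
x <- seq(-5, 5, length.out = 1000)
a <- dnorm(x, log = TRUE) - 5000
sample_from_logp(x, a, n = 10)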
50,210
Correlating time stamps
I'm presuming the rows in the way you've presented the data don't necessarily mean anything, i.e. there is no necessary link between the third yawn, third whisper, and third stretch. What you are interested in with the third yawn is "how close is this in time to any whisper - not just the third whisper". For each yawn I would calculate the time to the nearest whisper and the time to the nearest stretch. And similarly for each whisper (calculate time to nearest stretch and time to nearest yawn); and for each stretch. Then I would calculate some kind of indicator statistic of the proximity of each behaviour to each of the other two - something like the trimmed mean distance in time to the nearest behaviour of the other type. (There will be six of these indicators, not just three, because the average time from a yawn to its nearest stretch is not the same as the average time from a stretch to its nearest yawn.) This already will give you some sense of which behaviours are clustered together, but you also should check that this isn't plausibly due just to chance. To check that, I would create simulated data generated by a model under the null hypothesis of no relation. Doing this would require generating data for each behaviour's times from a plausible null model, probably based on resampling the times between each event (e.g. between each yawn) to create a new set of time stamps for hypothetical null-model events. Then calculate the same indicator statistic for this null model and compare it to the indicator from your genuine data. By repeating this simulation a number of times, you could find out whether the indicator from your data is sufficiently different from the null model's simulated data (smaller average time from each yawn to the nearest stretch, for example) to count as statistically significant evidence against your null hypothesis.
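A hedged R sketch of the two ingredients just described (nearest-event distances plus a resampling null), using hypothetical event times in minutes:

# Hypothetical event times (minutes since the start of observation).
yawns    <- c(12, 45, 80, 130, 190)
whispers <- c(14, 50, 75, 200, 260)

nearest_dist <- function(a, b) sapply(a, function(t) min(abs(t - b)))
obs <- mean(nearest_dist(yawns, whispers), trim = 0.1)   # indicator: trimmed mean yawn-to-nearest-whisper time

# Null model: regenerate whisper times by resampling their inter-event gaps.
sim <- replicate(2000, {
  gaps <- diff(c(0, sort(whispers)))
  mean(nearest_dist(yawns, cumsum(sample(gaps))), trim = 0.1)
})
mean(sim <= obs)   # approximate one-sided p-value: are the observed distances unusually small?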
50,211
Correlating time stamps
I had a similar problem and my solution was naive: create new variables representing each minute of the day, and if the given activity took place during that minute, mark it with a 1. For example, the recorded yawning intervals

2:21-2:22
3:42-3:45
9:20-9:25
14:45-14:32
...
45:40-45:43

become a per-minute indicator series

2:21  1
2:22  1
2:23  0
...
3:42  1
3:43  1
...

So we now have a new time series, which we can analyse by more standard methods. This worked really well; I tested it on data simulated from the logit model below, where $x$ is a 0-1 variable and $z$ is a "driving" variable:

$p(x_{t+1}=1 \mid x_t) = \exp(x_t + B_1 z_t)/\text{denominator}$

and the same for $y$ with coefficient $B_2$; the closer $B_2$ was to $B_1$, the stronger the dependence between $x$ and $y$ as measured by Hamming distance. A methodological problem: what to do if the total time of activity_1 during the day is 10 times higher than that of activity_2? Sometimes it doesn't matter; sometimes some weighted distance is needed - in the case when we want to build a distance matrix.
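A small hedged R sketch of that per-minute recoding, with invented start/end times expressed in minutes of the day:

# Convert activity intervals into a 0/1 indicator for every minute of the day.
starts  <- c(141, 222, 560)     # e.g. 2:21, 3:42, 9:20 in minutes after midnight (made-up values)
ends    <- c(142, 225, 565)
minutes <- 1:1440
yawning <- as.integer(sapply(minutes, function(m) any(m >= starts & m <= ends)))
which(yawning == 1)             # the minutes flagged as "yawning"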
50,212
Inconsistency in calculating the Calinski-Harabasz index for a given clustering in R
There is only one way of calculating the Caliński & Harabasz (1974) index for a given clustering and distance matrix, so if two R functions show different results, one of them is wrong. Hence your question is off-topic here. Look at how the Caliński & Harabasz index is calculated, in their original paper [1] or e.g. here. Then check the source code of both R functions, find the bug and report it to the package maintainers. Here are the fpc and clusterSim sites on GitHub where you can view the source code: https://github.com/cran/fpc/tree/master/R, https://github.com/cran/clusterSim. [1] Caliński, T., and J. Harabasz. "A dendrite method for cluster analysis." Communications in Statistics. Vol. 3, No. 1, 1974, pp. 1–27.
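For what it's worth, the index is easy to compute from first principles, which makes checking the packages straightforward: it is the between-cluster sum of squares over $k-1$ divided by the within-cluster sum of squares over $n-k$. A minimal R version for a numeric data matrix X with cluster labels cl (Euclidean distance assumed):

# Minimal Calinski-Harabasz index: between-cluster over within-cluster dispersion,
# each scaled by its degrees of freedom.
calinski_harabasz <- function(X, cl) {
  X <- as.matrix(X); n <- nrow(X); k <- length(unique(cl))
  overall <- colMeans(X)
  groups  <- split(seq_len(n), cl)
  W <- sum(sapply(groups, function(idx)
    sum(scale(X[idx, , drop = FALSE], center = TRUE, scale = FALSE)^2)))  # within-cluster SS
  B <- sum(sapply(groups, function(idx)
    length(idx) * sum((colMeans(X[idx, , drop = FALSE]) - overall)^2)))   # between-cluster SS
  (B / (k - 1)) / (W / (n - k))
}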
50,213
Inconsistency in calculating the Calinski-Harabasz index for a given clustering in R
Using a synthetic, two-dimensional dataset of 200 points, Euclidean distance and complete linkage, I am not able to reproduce the discrepancies which you encountered. Also the clusterCrit package and another implementation return the same values:

> # fpc
> ch1 <- calinhara(X, pc, cn=max(pc))
> # clusterSim
> ch2 <- index.G1(X, pc, d=NULL, centrotypes="centroids")
> # clusterCrit
> ch3 <- as.numeric(intCriteria(X, pc, "Calinski_Harabasz"))
>
> cat('fpc: ', ch1, '\nclusterSim: ', ch2, '\nclusterCrit: ', ch3)
fpc:  369.0315
clusterSim:  369.0315
clusterCrit:  369.0315

Python

>>> itn.calinski_harabasz(X, pc)
369.0315384638188
50,214
Discriminating periodic signals from aperiodic ones
I think that this is actually a difficult research question. As mentioned by @cardinal, the FT suffers from major drawbacks. If I recall, the distribution of the squared modulus of the coefficients is a scaled $\chi^2$ with 1 degree of freedom. This might be used to test that your signal is white noise, but rejection will not tell you that it is periodic. Wavelets turn out to be an incredibly powerful tool for studying noisy pseudo-periodic (and long-memory) signals. Unfortunately they are not usually used to detect periodicity and I am not aware of simple descriptors of periodicity. I came across that paper which addresses just that question, but if you are not familiar with wavelet theory it might be hard to read.
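As a small aside (not from the answer above), one classical frequency-domain check is Fisher's g statistic, the largest periodogram ordinate divided by the sum of all ordinates; a rough R sketch with a simulated white-noise reference distribution:

set.seed(2)
n <- 256
x <- sin(2 * pi * (1:n) / 16) + rnorm(n)          # periodic signal plus noise
spec <- Mod(fft(x - mean(x)))^2 / n               # raw periodogram (up to a constant)
spec <- spec[2:floor(n / 2)]                      # drop frequency 0 and the redundant half
g <- max(spec) / sum(spec)                        # Fisher's g

g_null <- replicate(2000, {                       # reference distribution under white noise
  s <- Mod(fft(rnorm(n)))^2 / n
  s <- s[2:floor(n / 2)]
  max(s) / sum(s)
})
mean(g_null >= g)                                 # approximate p-value for "no periodic component"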
50,215
Discriminating periodic signals from aperiodic ones
Generally it is best to detect periodicity in the frequency domain. However, if, for example, there is a twelve-month period and the time step is one month, then at lag 12 and its multiples you should see high autocorrelations. If there are no periodic components, there will be no peaks at a particular lag and its multiples. So maybe, as a partial answer to your question, this could possibly work. But I don't think there is any very definitive approach.
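In R this check is a one-liner on the sample autocorrelation function; for monthly data you would look for spikes at the seasonal lags (a hedged example on a built-in monthly series):

acf(diff(log(as.numeric(AirPassengers))), lag.max = 36)   # peaks at lags 12, 24, 36 suggest a yearly period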
50,216
Convergence proof for perceptron algorithm with margin
Well, the answer depends upon exactly which algorithm you have in mind. I would take a look at Brian Ripley's 1996 book, Pattern Recognition and Neural Networks, page 116. Here is a (very simple) proof of the convergence of Rosenblatt's perceptron learning algorithm, if that is the algorithm you have in mind. This is replicated as Exercise 4.6 in Elements of Statistical Learning. The algorithm presented on the Wikipedia page looks a little different from the algorithm in Ripley's book, but from what I can tell they are, up to some initial normalizations, doing the same thing. Please note that convergence means that for a linearly separable data set the algorithm reaches a fixed point after a finite number of iterations, but the value of the fixed point depends upon the starting value for the algorithm.
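To make the finite-convergence claim concrete, here is a hedged R toy version of Rosenblatt's update rule on a linearly separable data set; it stops after the first full pass with no mistakes, and the final weights depend on the start value, as noted above:

# Toy perceptron on separable 2-D data, labels in {-1, +1}.
set.seed(3)
n <- 50
X <- cbind(1, matrix(rnorm(2 * n), ncol = 2))      # first column is the bias term
y <- ifelse(X[, 2] + X[, 3] > 0, 1, -1)            # separable by construction
w <- rep(0, 3)                                     # starting value
repeat {
  mistakes <- 0
  for (i in 1:n) {
    if (y[i] * sum(w * X[i, ]) <= 0) {             # misclassified (or on the boundary)
      w <- w + y[i] * X[i, ]                       # Rosenblatt update
      mistakes <- mistakes + 1
    }
  }
  if (mistakes == 0) break                         # fixed point reached: all points correct
}
w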
50,217
Is there a package for R that allows smoothing splines in GEE? [closed]
The splines package has natural splines (ns), B-splines (bs), and a few other types. You can just use them as transformations for the predictors in the model: geese(y ~ ns(x, 3) + z, ...)
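A slightly fuller hedged sketch (assuming the geepack package and a long-format data frame dat with a cluster identifier id; note these are regression splines, not penalized smoothing splines):

library(geepack)
library(splines)
# natural cubic spline basis for x inside a GEE fit; dat and id are assumed to exist
fit <- geeglm(y ~ ns(x, df = 3) + z, id = id, data = dat,
              family = gaussian, corstr = "exchangeable")
summary(fit)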
50,218
Testing if data comes from a normal distribution with mean 0 and unknown variance in Matlab
You can use Spiegelhalter's test (1983, not the 'omnibus test' from 1977):

function pval = spiegel_test(x)
% compute pvalue under null of x normally distributed;
% x should be a vector;
% D. J. Spiegelhalter, 'Diagnostic tests of distributional shape,'
% Biometrika, 1983
xm = mean(x);
xs = std(x);
xz = (x - xm) ./ xs;
xz2 = xz.^2;
N = sum(xz2 .* log(xz2));
n = numel(x);
ts = (N - 0.73 * n) / (0.8969 * sqrt(n));  %under the null, ts ~ N(0,1)
pval = 1 - abs(erf(ts / sqrt(2)));  %2-sided test. if only Matlab had R's pnorm function ...

I include code to test this under the null and under a few alternatives:

% under H0:
pvals = nan(10000,1);
for tt=1:numel(pvals);
  pvals(tt) = spiegel_test(randn(300,1));
end
mean(pvals < 0.05)

I get something like:

ans = 0.0512

Under some alternatives:

%under Ha (using a Tukey g-distribution)
g = 0.4;
pvals = nan(10000,1);
for tt=1:numel(pvals);
  pvals(tt) = spiegel_test((exp(g * randn(300,1)) - 1)/g);
end
mean(pvals < 0.05)

%under Ha (using a Tukey h-distribution)
h = 0.1;
pvals = nan(10000,1);
for tt=1:numel(pvals);
  x = randn(300,1);
  pvals(tt) = spiegel_test(x .* exp(0.5 * h * x.^2));
end
mean(pvals < 0.05)

I get:

ans = 0.8494
ans = 0.8959

This test discards the knowledge that the mean must equal zero, so is perhaps less powerful than other tests. Spiegelhalter notes this test performs reasonably well for sample sizes greater than about 25, and is designed to test against symmetric alternatives (e.g. the Tukey h-distribution). It is less powerful against asymmetric alternatives.
50,219
Testing if data comes from a normal distribution with mean 0 and unknown variance in Matlab
See https://www.mathworks.com/matlabcentral/fileexchange/60147-normality-test-package. This package automatically runs 10 goodness-of-fit tests: normalitytest(X). Make sure X is a row vector. This function provides ten normality tests that are not otherwise available together in one compact Matlab routine. All tests are coded to provide p-values, and the function gives the results as an output table. Included tests are: Kolmogorov-Smirnov test (Limiting form (KS-Lim), Stephens method (KS-S), Marsaglia method (KS-M), Lilliefors test (KS-L)), Anderson-Darling (AD) test, Cramer-von Mises (CvM) test, Shapiro-Wilk (SW) test, Shapiro-Francia (SF) test, Jarque-Bera (JB) test, and D'Agostino and Pearson (DAP) test. The tests are not meant for big data; most of them do not work for samples larger than about 900. The related paper (DOI: 10.22237/jmasm/1509496200) can be found at https://digitalcommons.wayne.edu/jmasm/vol16/iss2/30/
50,220
False discovery rate & q-values: how are q-values to be interpreted when rank of p-values is altered?
The method described in the Benjamini-Hochberg paper does not have multiple q-values. What do you mean by "one can then 'order' the q-values"? One fixes, at the outset, a q-value (say 0.05). This means we want to control the FDR at the level q; that is, the expected ratio of incorrectly rejected to rejected hypotheses will be less than q. With $q$ fixed, we let $P_{(i)}$ be the sorted p-values. The method then rejects $H_{(i)}$ for $i = 1,2,\ldots,k$, where $k$ is the largest $i$ for which $P_{(i)} \leq \frac{i}{m}q$.
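A compact R illustration of that step-up rule (with made-up p-values), alongside the built-in BH adjustment:

p <- c(0.001, 0.008, 0.039, 0.041, 0.042, 0.060, 0.074, 0.205, 0.212, 0.360)  # made-up p-values
q <- 0.05
m <- length(p)
o <- order(p)                                   # sort order (p above happens to be sorted already)
k <- max(c(0, which(sort(p) <= (1:m) / m * q))) # largest i with P_(i) <= (i/m) q
reject <- rep(FALSE, m); if (k > 0) reject[o[1:k]] <- TRUE
reject
p.adjust(p, method = "BH") <= q                 # built-in equivalent via BH-adjusted p-values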
50,221
False discovery rate & q-values: how are q-values to be interpreted when rank of p-values is altered?
The reason for the change in order is that the q-value measures a fundamentally different thing than the p-value: the q-value is the false discovery rate (FDR) at a given level of statistical significance. Let's say your 5th lowest observed p-value was 0.02, and by using some statistical method you estimated that you would get on average 2 false positive detections by using this threshold for significance. This would give an FDR of $2/5 = 0.4$. Let's then say that the 8th lowest p-value was 0.045 and the estimated number of false positives was 3. Then $q = 3/8 = 0.375$. The order of the q-values doesn't have to match that of the p-values.
50,222
Analysing ratios of variables
For question 1 you can use standard methods if the denominator is bounded away from 0 (as mentioned in the comments). If your sample size is not large relative to the potential skewness in the ratios, then you probably do not want to use normal-based methods (t-tests, ANOVA); resampling methods (bootstrap, permutation tests) would be worth investigating. For questions 2, 3, and 4, read "Spurious Correlation and the Fallacy of the Ratio Standard Revisited" by Richard Kronmal (1993), Journal of the Royal Statistical Society, Series A, Vol. 156, No. 3, pp. 379-392. Someone else will need to comment on 5.
50,223
Sampling from a marginal when full density is given
You are right! The formulation you describe has a general name. The theorem is called de Finetti's theorem for exchangeable sequences, and it is the fundamental theorem behind Bayesian philosophy and ideas. Specifically, the $x_i$s are exchangeable, that is, any permutation of the $x_i$s will have the same distribution as $f(x_1,\ldots,x_n)$. In your strategy, I would be a bit more comfortable if you had drawn $c$ for every $i$. That is, generate $c$ from $f(c)$ and then generate $x_i$, and repeat this for $i=1,\ldots,n$. But I think if you generate enough $\mathbf{x}$s, this should not be a problem.
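A tiny hedged R sketch of the two strategies, using a hypothetical gamma density for $f(c)$ and a Poisson for $f(x_i\mid c)$:

# Strategy 1: draw c once and share it across the whole vector (exchangeable, not independent).
c1  <- rgamma(1, shape = 2, rate = 1)
x_A <- rpois(100, lambda = c1)

# Strategy 2: draw a fresh c for every i (i.i.d. draws from the marginal of x).
c2  <- rgamma(100, shape = 2, rate = 1)
x_B <- rpois(100, lambda = c2)   # marginally negative binomial in this particular example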
50,224
Probabilities in case-controlled studies
Your proposal makes sense in this context. The Naive Bayes formulation (using the same language as Wikipedia) is: $P(C|F_1,\ldots,F_n) \propto P(C) \prod_{i=1}^n P(F_i|C)$ The $P(F_i|C)$ terms are estimated from the data, but instead of estimating $P(C)$ from the data (study prevalence), you use a different measure (population prevalence). This is identical to your proposal above.
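A small numeric R sketch of that substitution (all numbers hypothetical):

lik_case    <- c(0.8, 0.6, 0.9)   # estimated P(F_i | case) from the case-control data
lik_control <- c(0.3, 0.5, 0.4)   # estimated P(F_i | control)
prior_pop   <- 0.01               # population prevalence, used in place of the study prevalence

post <- c(case    = prior_pop       * prod(lik_case),
          control = (1 - prior_pop) * prod(lik_control))
post / sum(post)                  # P(C | F_1, ..., F_n) under the population prior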
50,225
Probabilities in case-controlled studies
After a few days, I decided it might be best to use an alternative method. What I did was sample the data such that it reflected the reported distributions in the population. I repeated this a number of times, each time randomly sampling in the appropriate proportions, and took the average performance of the classifier. I continued to use the case-control design to find the features that I wanted; however, in the validation step and subsequent performance reporting I used the sampling method. This seemed to me a simpler and more straightforward alternative to using a Bayes factor.
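A hedged sketch of that validation-by-resampling step in R; the data frame dat, its case indicator, and the fitted classifier fit are hypothetical placeholders:

# Repeatedly subsample the validation data so cases/controls appear in population proportions,
# then average a performance metric over the replicates. All object names here are hypothetical.
prevalence <- 0.01     # reported population prevalence
n_controls <- 2000
perf <- replicate(200, {
  n_cases <- round(prevalence / (1 - prevalence) * n_controls)
  idx <- c(sample(which(dat$case == 1), n_cases),
           sample(which(dat$case == 0), n_controls))
  pred <- predict(fit, newdata = dat[idx, ], type = "response")
  mean((pred > 0.5) == dat$case[idx])          # e.g. accuracy; substitute the metric of interest
})
mean(perf)                                      # average performance over the resamples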
50,226
Predicted probabilities from a multinomial regression model using zelig and R
The first thing to do is to construct the "linear predictors" or "logits" for each category for each prediction. So you have your model equation:

$$\eta_{ir}=\sum_{j=1}^{p}X_{ij}\hat{\beta}_{jr}\qquad (i=1,\dots,m;\;\; r=1,\dots,R)$$

where, for notational convenience, the above is to be understood to have $\hat{\beta}_{jR}=\eta_{iR}=0$ (as this is the reference category), and $\hat{\beta}_{jr}=0$ if variable $j$ was not included in the linear predictor for class $r$. So you will have an $m\times R$ matrix of logits. You then exponentiate to form predicted odds ratios and renormalise to form predicted probabilities. Note that the predicted odds ratios can be calculated by a simple matrix operation if your data is sufficiently organised:

$$\mathbf{O}=\exp(\boldsymbol{\eta})=\exp(\mathbf{X}\boldsymbol{\beta})$$

the product of the $m\times p$ prediction matrix $\mathbf{X}\equiv\{X_{ij}\}$ with the $p\times R$ estimated coefficient matrix $\boldsymbol{\beta}\equiv\{\hat{\beta}_{jr}\}$, where $\exp(\cdot)$ is defined component-wise (i.e. not the matrix exponential). The matrix $\mathbf{O}$ is an "odds ratio" matrix whose last column should be all ones. If we take the $m\times 1$ vector $\mathbf{T}=\mathbf{O}\mathbf{1}_{R}$, this gives the normalisation constant for each prediction "row" of odds ratios. Now create the $m\times m$ diagonal matrix defined by $W_{kk}\equiv T_{k}^{-1}$, and the predicted probability matrix is given by:

$$\mathbf{P}=\mathbf{W}\mathbf{O}$$

So in the example you give, the matrix $\boldsymbol{\beta}$ would look like this:

$$\boldsymbol{\beta}=\begin{array}{cccccc} Int:1 & Int:2 & Int:3 & Int:4 & Int:5 & 0 \\ Dev:1 & Dev:2 & Dev:3 & Dev:4 & Dev:5 & 0 \\ Vote:1 & Vote:2 & Vote:3 & Vote:4 & Vote:5 & 0 \end{array}$$

With the (roughly rounded) values plugged in we get:

$$\boldsymbol{\beta}=\begin{array}{cccccc} 3.68 & -0.90 & -1.56 & -1.13 & -1.68 & 0 \\ -0.18 & 0.06 & -0.04 & 0.08 & -0.07 & 0 \\ 0.02 & 0.02 & 0.04 & -0.04 & -0.01 & 0 \end{array}$$

And the $\mathbf{X}$ matrix would look like this:

$$\mathbf{X}=\begin{array}{ccc} 1 & 0.72 & 2\\ 1 & 1.02 & 5\\ 1 & 1.02 & 4\\ 1 & 1.20 & 7\\ 1 & 0.72 & 10\\ 1 & 1.20 & 27\\ 1 & 0.56 & 5\\ 1 & 1.20 & 9\\ 1 & 0.72 & 17\\ 1 & 0.56 & 16 \end{array}$$

So some R code to do this would simply be (with the matrices $\mathbf{X}$ and $\boldsymbol{\beta}$ defined as above); the main parts are reading in the data and padding it with 1s and 0s for $\mathbf{X}$ and $\boldsymbol{\beta}$:

coefs <- structure(c(3.68021133487111, -0.903496528862169, -1.56339830041814,
                     -1.13238307296064, -1.67706243532044, -0.177585202845615,
                     0.0611115470557421, -0.0458373863009504, 0.0881133593132653,
                     -0.0686190052488972, 0.0163917121907627, 0.0165232098847022,
                     0.0373815294869855, -0.0353209839724262, -0.00698911507852077),
                   .Names = c("(Intercept):1", "(Intercept):2", "(Intercept):3",
                              "(Intercept):4", "(Intercept):5", "Deviance:1",
                              "Deviance:2", "Deviance:3", "Deviance:4", "Deviance:5",
                              "Votes:1", "Votes:2", "Votes:3", "Votes:4", "Votes:5"))

# fill by row so that the rows correspond to Intercept, Deviance and Votes,
# matching the beta matrix written out above, then append the zero reference column
beta <- cbind(matrix(coefs, nrow = 3, byrow = TRUE), 0)

X <- cbind(1, as.matrix(
       structure(c(0.71847390030784, 1.01838748408701, 1.01838748408701,
                   1.20499277373001, 0.71847390030784, 1.20499277373001,
                   0.56393315893118, 1.20499277373001, 0.71847390030784,
                   0.56393315893118, 2, 5, 4, 7, 10, 27, 5, 9, 17, 16),
                 .Dim = c(10L, 2L))))

P <- diag(as.vector(exp(X %*% beta) %*% as.matrix(rep(1, ncol(beta))))^-1) %*% exp(X %*% beta)
Predicted probabilities from a multinomial regression model using zelig and R
The first thing to do is to construct the "linear predictors" or "logits" for each category for each prediction. So you have your model equation: $$\eta_{ir}=\sum_{j=1}^{p}X_{ij}\hat{\beta}_{jr}\;\;
Predicted probabilities from a multinomial regression model using zelig and R The first thing to do is to construct the "linear predictors" or "logits" for each category for each prediction. So you have your model equation: $$\eta_{ir}=\sum_{j=1}^{p}X_{ij}\hat{\beta}_{jr}\;\; (i=1,\dots,m\;\; r=1,\dots,R)$$ Where for notational convenience, the above is to be understood to have $\hat{\beta}_{jR}=\eta_{iR}=0$ (as this is the reference category), and $\hat{\beta}_{jr}=0$ if variable $j$ was not included in the linear predictor for class $r$. So you will have an $m\times R$ matrix of logits. You then exponentiate to form predicted odds ratios and renormalise to form predicted probabilities. Note that the predicted odds ratios can be calculated by a simple matrix operation if your data is sufficiently organised: $$\bf{O}=\exp(\boldsymbol{\eta})=\exp(\bf{X}\boldsymbol{\beta})$$ of the $m\times p$ prediction matrix $\bf{X}\equiv\it{\{X_{ij}\}}$ with the $p\times R$ estimated co-efficient matrix $\boldsymbol{\beta}\equiv\it{\{\hat{\beta}_{jr}\}}$, and $\exp(.)$ is defined component wise (i.e. not the matrix exponential). The matrix $\bf{O}$ is an "odds ratio" matrix, with the last column should be all ones. If we take the $m\times 1$ vector $\bf{T}=O1_{R}$ This gives the normalisation constant for each prediction "row" of odds ratios. Now create the $(m\times m)$ diagonal matrix defined by $W_{kk}\equiv T_{k}^{-1}$, and the predicted probability matrix is given by: $$\bf{P}=\bf{W}\bf{O}$$ So in the example you give the matrix $\boldsymbol{\beta}$ would look like this: $$\begin{array}{c|c} Int:1 & Int:2 & Int:3 & Int:4 & Int:5 & 0 \\ \hline Dev:1 & Dev:2 & Dev:3 & Dev:4 & Dev:5 & 0 \\ \hline Vote:1 & Vote:2 & Vote:3 & Vote:4 & Vote:5 & 0 \\ \hline \end{array}$$ With the (roughly rounded) values plugged in we get: $$\boldsymbol{\beta}=\begin{array}{c|c} 3.68 & -0.90 & -1.56 & -1.13 & -1.68 & 0 \\ \hline -0.18 & 0.06 & -0.04 & 0.08 & -0.07 & 0 \\ \hline 0.02 & 0.02 & 0.04 & -0.04 & -0.01 & 0 \\ \hline \hline \end{array}$$ And the $\bf{X}$ matrix would look like this: $$\bf{X}=\begin{array}{c|c} 1 & 0.72 & 2\\ \hline 1 & 1.02 & 5\\ \hline 1 & 1.02 & 4\\ \hline 1 & 1.20 & 7\\ \hline 1 & 0.72 & 10\\ \hline 1 & 1.20 & 27\\ \hline 1 & 0.56 & 5\\ \hline 1 & 1.20 & 9\\ \hline 1 & 0.72 & 17\\ \hline 1 & 0.56 & 16\\ \hline \end{array}$$ So some R-code to do this would simply be (with the matrices $\bf{X}$ and $\boldsymbol{\beta}$ defined as above). The main parts are reading in the data, and padding it with 1s and 0s for $\bf{X}$ and $\boldsymbol{\beta}$: beta<-cbind(as.matrix( structure(c(3.68021133487111, -0.903496528862169, -1.56339830041814, -1.13238307296064, -1.67706243532044, -0.177585202845615, 0.0611115470557421, -0.0458373863009504, 0.0881133593132653, -0.0686190052488972, 0.0163917121907627, 0.0165232098847022, 0.0373815294869855, -0.0353209839724262, -0.00698911507852077), .Names = c("(Intercept):1", "(Intercept):2", "(Intercept):3", "(Intercept):4", "(Intercept):5", "Deviance:1", "Deviance:2", "Deviance:3", "Deviance:4", "Deviance:5", "Votes:1", "Votes:2", "Votes:3", "Votes:4", "Votes:5") , .Dim=c(3L,5L)) ),0) X<-cbind(1,as.matrix( structure(c(0.71847390030784, 1.01838748408701, 1.01838748408701, 1.20499277373001, 0.71847390030784, 1.20499277373001, 0.56393315893118, 1.20499277373001, 0.71847390030784, 0.56393315893118, 2, 5, 4, 7, 10, 27, 5, 9, 17, 16), .Dim = c(10L, 2L)) )) P<-diag(as.vector(exp(X %*% beta) %*% as.matrix(rep(1,ncol(beta))))^-1) %*% exp(X %*% beta)
Predicted probabilities from a multinomial regression model using zelig and R The first thing to do is to construct the "linear predictors" or "logits" for each category for each prediction. So you have your model equation: $$\eta_{ir}=\sum_{j=1}^{p}X_{ij}\hat{\beta}_{jr}\;\;
50,227
Comparing numbers of p-values from many linear models
Prior to answering your question - does the distribution of the effect of the genes justify using a linear model (e.g. are they distributed more or less normally)? Now to your question - I might offer to go a different way about it. It sounds like what you are asking for is to measure the correlation (e.g., similarity of behavior) between the different conditions. A simple way to do that is to take the mean (or maybe the trimmed mean or median) of the 8 replications, and then you'd have 10K triplets you can use for creating a correlation matrix (between the 3 conditions). The second step would then be to answer the question of whether one correlation (say between A and B) is significantly higher than the other two correlations (say, between --A and C-- and --B and C--). Here you can use the following nice online tool, or you can code it yourself in R using the information given here. Cheers, Tal
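A rough sketch of the first step in R, with a simulated stand-in for the 10K x 24 expression matrix (8 replicate columns per condition A, B, C, in that order):
expr <- matrix(rnorm(10000 * 24), ncol = 24)   # placeholder for the real data
cond_means <- sapply(list(A = 1:8, B = 9:16, C = 17:24),
                     function(cols) rowMeans(expr[, cols]))
round(cor(cond_means), 3)   # 3 x 3 correlation matrix between the conditions
# comparing two of these (dependent) correlations can then be done with e.g.
# Steiger's test, as implemented in the psych or cocor packages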
Comparing numbers of p-values from many linear models
Prior to answering your question - is the distribution of effect of the genes justify using a linear model (e.g: are they distributed more or less normally?) Now to your question - I might offer to g
Comparing numbers of p-values from many linear models Prior to answering your question - is the distribution of effect of the genes justify using a linear model (e.g: are they distributed more or less normally?) Now to your question - I might offer to go a different way about it. It sounds like what you are asking for is to measure the correlation (e.g similarity of behavior) between the different conditions. A simple way to do that is to take the mean (or maybe the trimmed mean or median) of the 8 replications and then you'd have 10K triplets you can use for creating a correlation matrix (between the 3 conditions). The second step would then be to answer the question if one correlation (say between A and B) is significantly higher then the other two correlations (say, between --A and C-- and --B and C--). Here you can use the following nice online tool or you can code it yourself in R using the information given here. Cheers, Tal
Comparing numbers of p-values from many linear models Prior to answering your question - is the distribution of effect of the genes justify using a linear model (e.g: are they distributed more or less normally?) Now to your question - I might offer to g
50,228
Median of medians as robust mean of means?
If all the samples come from the same distribution, then yes the median of the sample medians is a fairly robust estimate of the median of the underlying distribution (though this need not be the same as the mean), since the median of a sample from a continuous distribution has probability 0.5 of being below (or above) the population median. Added Here is some illustrative R code. It takes a sample from a normal distribution and a case with outliers where 1% of data is 10,000 times bigger than it should be. It looks at the various statistics for the overall sample data (50,000 points) and then by the centre (mean or median) of the statistics of the 10,000 samples with 5 points in each sample. library(matrixStats) wholestats <- function(x,n) { mea <- sum(x)/n var <- sum((x-mea)^2)/(n-1) sdv <- sqrt(var) qun <- quantile(x, probs=c(0.25,0.5,0.75)) mad <- median(abs(x-qun[2])) c(mean=mea, variance=var, st.dev=sdv, median=qun[2], IQR=qun[3]-qun[1], MAD=mad) } rowstats <- function(x,b) { rmea <- rowSums(x)/b rvar <- rowSums((x-rmea)^2)/(b-1) rsdv <- sqrt(rvar) rqun <- rowQuantiles(x, probs=c(0.25,0.5,0.75)) rmad <- rowMedians(abs(x-rqun[,2])) c(mean=mean(rmea), variance=mean(rvar), st.dev=mean(rsdv), median=median(rqun[,2]), IQR=median(rqun[,3]-rqun[,1]), MAD=median(rmad)) } a <- 10000 # number of samples b <- 5 # samplesize set.seed(1) d <- array(rnorm(a*b), dim=c(a,b)) doutlier <- array(d * ifelse(runif(a*b)>0.99, 10000, 1) , dim=c(a,b)) The median based statistics as expected are more robust, though they fail to show that the heavy tailed outlier variant is heavy tailed. > wholestats(d,a*b) mean variance st.dev median.50% IQR.75% MAD -0.002440456 1.011306552 1.005637386 -0.001610677 1.357029247 0.678706371 > wholestats(doutlier,a*b) mean variance st.dev median.50% IQR.75% MAD -3.425664e+00 9.591583e+05 9.793663e+02 -1.610677e-03 1.373658e+00 6.871415e-01 > rowstats(d,b) mean variance st.dev median IQR MAD -0.002440456 1.014611308 0.947630870 0.003460172 0.917642167 0.510115277 > rowstats(doutlier,b) mean variance st.dev median IQR MAD -3.425664e+00 9.607212e+05 1.685929e+02 3.460172e-03 9.301795e-01 5.175084e-01
Median of medians as robust mean of means?
If all the samples come from the same distribution, then yes the median of the sample medians is a fairly robust estimate of the median of the underlying distribution (though this need not be the same
Median of medians as robust mean of means? If all the samples come from the same distribution, then yes the median of the sample medians is a fairly robust estimate of the median of the underlying distribution (though this need not be the same as the mean), since the median of a sample from a continuous distribution has probability 0.5 of being below (or above) the population median. Added Here is some illustrative R code. It takes a sample from a normal distribution and a case with outliers where 1% of data is 10,000 times bigger than it should be. It looks at the various statistics for the overall sample data (50,000 points) and then by the centre (mean or median) of the statistics of the 10,000 samples with 5 points in each sample. library(matrixStats) wholestats <- function(x,n) { mea <- sum(x)/n var <- sum((x-mea)^2)/(n-1) sdv <- sqrt(var) qun <- quantile(x, probs=c(0.25,0.5,0.75)) mad <- median(abs(x-qun[2])) c(mean=mea, variance=var, st.dev=sdv, median=qun[2], IQR=qun[3]-qun[1], MAD=mad) } rowstats <- function(x,b) { rmea <- rowSums(x)/b rvar <- rowSums((x-rmea)^2)/(b-1) rsdv <- sqrt(rvar) rqun <- rowQuantiles(x, probs=c(0.25,0.5,0.75)) rmad <- rowMedians(abs(x-rqun[,2])) c(mean=mean(rmea), variance=mean(rvar), st.dev=mean(rsdv), median=median(rqun[,2]), IQR=median(rqun[,3]-rqun[,1]), MAD=median(rmad)) } a <- 10000 # number of samples b <- 5 # samplesize set.seed(1) d <- array(rnorm(a*b), dim=c(a,b)) doutlier <- array(d * ifelse(runif(a*b)>0.99, 10000, 1) , dim=c(a,b)) The median based statistics as expected are more robust, though they fail to show that the heavy tailed outlier variant is heavy tailed. > wholestats(d,a*b) mean variance st.dev median.50% IQR.75% MAD -0.002440456 1.011306552 1.005637386 -0.001610677 1.357029247 0.678706371 > wholestats(doutlier,a*b) mean variance st.dev median.50% IQR.75% MAD -3.425664e+00 9.591583e+05 9.793663e+02 -1.610677e-03 1.373658e+00 6.871415e-01 > rowstats(d,b) mean variance st.dev median IQR MAD -0.002440456 1.014611308 0.947630870 0.003460172 0.917642167 0.510115277 > rowstats(doutlier,b) mean variance st.dev median IQR MAD -3.425664e+00 9.607212e+05 1.685929e+02 3.460172e-03 9.301795e-01 5.175084e-01
Median of medians as robust mean of means? If all the samples come from the same distribution, then yes the median of the sample medians is a fairly robust estimate of the median of the underlying distribution (though this need not be the same
50,229
Data collection and storage for time series analysis
Option #2 is much more flexible than #1, particularly if you plan on using Excel pivot tables and/or R packages such as Hadley Wickham's excellent reshape package. I would store the data so that each row contains measured (event-level and contender-level) variables and any variables necessary to uniquely identify an instance of the measured variables (contender ID, event ID, measurement occasion ID [e.g., half-second increments]) for a single measurement occasion within an event. This allows for the most flexible reshaping of data into any other format desired, a process Wickham describes as melting and casting. You can export the data into a comma-separated values (CSV) file, which of course can be read into Excel and most other statistical software. If you have long-format data in Excel, aggregating, summarizing, and tabulating data is also easy using Pivot Tables. This enables you to create different views of the data that might be of interest to your client, such that as the data are updated you can update these useful views as well. IMO, the most robust solution for very large amounts of structured data is one you didn't mention: store them in a relational database (using, e.g., MS Access or open-source databases such as PostgreSQL) and use Structured Query Language (SQL) to perform the above operations. Here, your data would be broken up into separate tables containing information that is unique to events (e.g., event ID, event type, etc.), contenders (e.g., contender ID, contender name, etc.), unique event-contender combinations (since a single contender might participate in more than one event, and each event certainly has more than one contender), and the measured data in half-second intervals. This avoids storing redundant data and allows you to enforce the integrity of the data that you articulated in your question as data are added, deleted, or updated. There are methods for calling SQL queries from Excel, R, Matlab, and other statistical programs to extract just the information your client wants. A useful introductory text on relational database theory and application is "Inside Relational Databases" by Whitehorn and Marklyn.
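For instance, a minimal long-format sketch with the reshape2 package (the successor to reshape); the column names are made up for illustration:
library(reshape2)
long <- data.frame(event_id     = rep(1:2, each = 4),
                   contender_id = rep(c("A", "B"), times = 4),
                   t            = rep(c(0.5, 1.0), each = 2, times = 2),
                   speed        = rnorm(8, mean = 30))
# "cast" the long data to a wide view: one row per event/time, one column per contender
dcast(long, event_id + t ~ contender_id, value.var = "speed")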
Data collection and storage for time series analysis
Option #2 is much more flexible than #1, particularly if you plan on using Excel pivot tables and/or R packages such as Hadley Wickham's excellent reshape package. I would store the data so that each
Data collection and storage for time series analysis Option #2 is much more flexible than #1, particularly if you plan on using Excel pivot tables and/or R packages such as Hadley Wickham's excellent reshape package. I would store the data so that each row contains measured (event-level and contender-level) variables and any variables necessary to uniquely identify an instance of the measured variables (contender ID, event ID, measurement occasion ID [e.g., half-second increments]) for a single measurement occasion within an event. This allows for the most flexible reshaping of data into any other format desired, a process Wickham describes as melting and casting. You can export the data into an comma-separated value (CSV) spreadsheet, which of course can be read into Excel and most other statistical software. If you have long-format data in Excel, aggregating, summarizing, and tabulating data is also easy using Pivot Tables. This enables you to create different views of the data that might be of interest to your client, such that as the data are updated you can update these useful views as well. IMO, the most robust solution for very large amounts of structured data is one you didn't mention: store them in a relational database (using, e.g., MS Access or open-source databases such as PostgreSQL) and use Structured Query Language (SQL) to perform the above operations. Here, your data would be broken up into separate tables containing information that is unique to events (e.g., event ID, event type, etc.), contenders (e.g., contender ID, contender name, etc.), unique event-contender combinations (since a single contender might participate in more than one event, and each event certainly has more than one contender), and the measured data in half-second intervals. This avoids storing redundant data and allows you to enforce the integrity of the data that you articulated in your question as data are added, deleted, or updated. There are methods for calling SQL queries from Excel, R, Matlab, and other statistical programs to extract just the information your client wants. A useful introductory text on relational database theory and application is "Inside Relational Databases" by Whitehorn and Marklyn.
Data collection and storage for time series analysis Option #2 is much more flexible than #1, particularly if you plan on using Excel pivot tables and/or R packages such as Hadley Wickham's excellent reshape package. I would store the data so that each
50,230
Data collection and storage for time series analysis
In my experience, #1 is the better option. If you store the data in any flatfile setup (as you're suggesting) and don't put your time variable in the rows, it becomes that much harder to import into selected programs. For example, I work primarily in Fortran/C, with secondary applications occasionally done in R or MATLAB. To be compatible with all of these, I use ASCII flatfiles to store most of my data, with fixed-column width, fixed-precision reporting. Any time I have to work with something that isn't set up in this way, it always ends up being a hassle, regardless of how sexy or novel the method for storing the data was. Having empty columns isn't actually a problem, so long as you figure out a method for flagging them properly. Leaving them blank isn't actually the best option, as this can be read (e.g. in Fortran) as a 0, which for most applications is decidedly different from an empty value. If you think your client will want to use any sort of programming language-based analysis, you'll want to try to come up with a consistent way to store / flag missing values. For example, if all of your data samples are positive real numbers, then storing a -99.99 is a good way to flag an entry as missing. Conclusion: figure out what your client is likely to require. If you really don't know, then go with #1, because it is the most general and the easiest to read in for multiple programs and programming languages. Remember to store the dimension information at the top of the file if you're using ASCII flatfiles, or in a defined data block if you're using binary files.
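A tiny self-contained illustration in R of how such a flag round-trips (the file contents are made up):
f <- tempfile(fileext = ".dat")
writeLines(c("time  speed", "0.5   31.2", "1.0  -99.99", "1.5   30.8"), f)
d <- read.table(f, header = TRUE, na.strings = "-99.99")
d   # the flagged entry comes back as NA instead of a misleading numeric value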
Data collection and storage for time series analysis
In my experience, #1 is the better option. If you store the data in any flatfile setup (as you're suggesting) and don't put the rows as your time variable, it becomes that much harder to import into s
Data collection and storage for time series analysis In my experience, #1 is the better option. If you store the data in any flatfile setup (as you're suggesting) and don't put the rows as your time variable, it becomes that much harder to import into selected programs. For example, I work primarily in Fortran/C, with secondary applications occasionally done in R or MATLAB. To be compatible with all of these, I use ASCII flatfiles to store most of my data, with fixed-column width, fixed-precision reporting. Any time I have to work with something that isn't set up in this way, it always ends up being a hassle, regardless of how sexy or novel the method for storing the data was. Having empty columns isn't actually a problem, so long as you figure out a method for flagging them properly. Leaving them blank isn't actually the best option, as this can read (i.e. in Fortran) as a 0, which is decidedly different for most applications to an empty value. If you think your client will want to use any sort of programming language-based analysis, you'll want to try to come up with a consistent way to store / flag missing values. For example, if all of your data samples are positive real numbers, then storing a -99.99 is a good way to flag an entry as missing. Conclusion: figure out what your client is likely to require. If you really don't know, then go with #1, because it is the most general and the easiest to read in for multiple programs and programming languages. Remember to store the dimension information at the top of the file if you're using ASCII flatfiles, or in a defined data block if you're using binary files.
Data collection and storage for time series analysis In my experience, #1 is the better option. If you store the data in any flatfile setup (as you're suggesting) and don't put the rows as your time variable, it becomes that much harder to import into s
50,231
Weighted discrete measurements of a value changing over time
Sounds like you might want to look at (Weighted) Moving Average.
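For example, a minimal weighted moving average sketch in base R (the series and weights are made up):
x <- c(5, 7, 6, 9, 12, 11, 13)           # measurements, oldest to newest
w <- c(0.5, 0.3, 0.2)                    # heaviest weight on the current value
stats::filter(x, filter = w, sides = 1)  # one-sided (trailing) weighted average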
Weighted discrete measurements of a value changing over time
Sounds like you might want to look at (Weighted) Moving Average.
Weighted discrete measurements of a value changing over time Sounds like you might want to look at (Weighted) Moving Average.
Weighted discrete measurements of a value changing over time Sounds like you might want to look at (Weighted) Moving Average.
50,232
Weighted discrete measurements of a value changing over time
There might be a bit of confusion here with some imprecise statistical jargon. If you have data points that have been measured/reported with different precision/reliability/variability then one turns naturally to Generalized Least Squares, where one transforms/weights the data by adjusting for the relative variability. Search for Weighted Least Squares for example. Now given that one has weighted/transformed observed data, one might be faced with another weighting issue. When you have correlated observations over space and/or time (taken at fixed intervals of either time or space) one is advised to form an adaptive/autoregressive/auto-projected model called an ARIMA Model. Please review my answer to Seeking certain type of ARIMA explanation which suggests that an ARIMA is simply a weighted average of previous values. For example y(t)=.5*y(t-1)+.25*y(t-2)+.125*y(t-3) +.... or y(t)=.5*y(t-1)+.5*y(t-12) These are two totally different "weighting solutions". From your very vivid example it might be that you have both opportunities to investigate. For more on time series you might review some of my postings and review what others might have said.
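A small sketch of the first kind of weighting in R: observations with known, differing measurement variances get weight 1/variance in a weighted least squares fit (the data here are simulated):
x <- 1:20
meas_var <- runif(20, 0.5, 3)                    # known measurement variances
y <- 2 + 0.5 * x + rnorm(20, sd = sqrt(meas_var))
fit <- lm(y ~ x, weights = 1 / meas_var)         # weighted least squares
summary(fit)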
Weighted discrete measurements of a value changing over time
There might be a bit of confusion here with some imprecise statistical jargon. If you have data points that have been measured/reported with different precision/reliability/variability then one turns
Weighted discrete measurements of a value changing over time There might be a bit of confusion here with some imprecise statistical jargon. If you have data points that have been measured/reported with different precision/reliability/variability then one turns naturally to Generalized Least Squares where one transforms/weights the data by adjusting for the relative variability . Search for Weighted Least Squares for example. Now given that one has weighted/transformed observed data one might we faced with another weighting issue. When you have correlated observations over space and/or time (taken at fixed intervals either time or space) one is advised to form an adaptive/autoregressive/auto-projected model called an ARIMA Model. Please review my answer to Seeking certain type of ARIMA explanation which suggests that an ARIMA is simply a weighted average of previous values. For example y(t)=.5*y(t-1)+.25*y(t-2)+.125*y(t-3) +.... or y(t)=.5*y(t-1)+.5y(t-12) These are two totally different "weighting solutions" . From your very vivid example it might be that you might have both opportunities to investigate. For more on time series you might review some of my postings and review what others might have said.
Weighted discrete measurements of a value changing over time There might be a bit of confusion here with some imprecise statistical jargon. If you have data points that have been measured/reported with different precision/reliability/variability then one turns
50,233
Colinearity and scaling when using k-means
As CHL has already explained the use of center and scale to obtain standardized variables, I'll address collinearity: There is good reason to reduce collinear variables when clustering. Curse of Dimensionality The more dimensions you use, the more likely you are to fall victim of Bellman's 'curse of dimensionality'. In brief, the greater the number of dimensions, the greater the total volume, and the greater the sparsity of your data within it. (See the link for more detail.) Dimension Reduction --- manually by inspection of pairwise collinearity... You mention that you have already reduced variables from some larger number down to 5 using pairwise collinearity measures. While this will work, it is quite tedious, since in general you will have ${n \choose 2}$ pairs to check. (So for example with 10 variables, you would have ${10 \choose 2} = 45$ different pairs to examine -- a few too many to do manually in my opinion!) Dimension Reduction --- automatically using Principal Components Analysis (PCA)... One way to handle this automatically is to use the PCA (principal components analysis) algorithm. The concept is more or less what you're doing manually -- ranking the variables by how much unique information each variable is contributing. So you provide PCA your $n$-variable dataset as input, and PCA will rank order your variables according to the greatest variance each explains in the data -- essentially picking out the non-collinear variables. Depending on whether you want 2-D or 3-D clusters, you would use the top 2 or 3 variables from PCA. Principal Components in R The PCA algorithm is available (built-in) from R. Actually there are several functions in R that do principal components. I've had success with prcomp(). Standard Reference available free online One of the best references available is the classic: Elements of Statistical Learning, by Trevor Hastie, Robert Tibshirani, and Jerome Friedman The authors have graciously made the entire book available (for free) as a PDF download from their Stanford website. There are excellent chapters on Clustering, Principal Components, and a great section on the curse of dimensionality.
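A minimal sketch of that workflow in R, on simulated data with one deliberately collinear column:
dat <- as.data.frame(matrix(rnorm(500), ncol = 5))
dat$V5 <- dat$V1 + rnorm(100, sd = 0.1)              # make V5 nearly collinear with V1
pc <- prcomp(dat, center = TRUE, scale. = TRUE)      # standardize, then PCA
summary(pc)                                          # variance explained per component
km <- kmeans(pc$x[, 1:2], centers = 3, nstart = 25)  # cluster on the top 2 components
table(km$cluster)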
Colinearity and scaling when using k-means
As CHL has already explained the use of center and scale to obtain standardized variables, I'll address collinearity: There is good reason to reduce collinear variables when clustering. Curse of Dimen
Colinearity and scaling when using k-means As CHL has already explained the use of center and scale to obtain standardized variables, I'll address collinearity: There is good reason to reduce collinear variables when clustering. Curse of Dimensionality The more dimensions you use, the more likely you are to fall victim of Bellman's 'curse of dimensionality'. In brief, the greater the number of dimensions, the greater the total volume, and the greater the sparsity of your data within it. (See the link for more detail.) Dimension Reduction --- manually by inspecting of pairwise collinearity... You mention that you have already reduced variables from some larger number down to 5 using pairwise collinearity measures. While this will work, it is quite tedious, since in general you will have $n\choose 2$ number of pairs to check. (So for example with 10 variables, you would have ${10 \choose 2} = 45$ different pairs to examine -- a few too many to do manually in my opinion! Dimension Reduction --- automatically using Principal Components Analysis (PCA)... One way to handle this automatically is to use the PCA (principle components analysis) algorithm. The concept is more or less what you're doing manually -- ranking the variables by how much unique information each variable is contributing. So you provide PCA your $n$-variable dataset as input, and PCA will rank order your variables according to the greatest variance each explains in the data -- essentially picking out the non-collinear variables. Depending on whether you want 2-D or 3-D clusters, you would use the top 2 or 3 variables from PCA. Principal Components in R The PCA algorithm is available (built-in) from R. Actually there are several functions in R that do principal components. I've had success with prcomp(). Standard Reference available free online One of the best references available is the classic: Elements of Statistical Learning, by Trevor Hastie, Robert Tibshirani, and Jermoe Friedman The authors have graciously made the entire book available (for free) as a PDF download from their Stanford website. There are excellent chapters on Clustering, Principal Comoonents, and a great section on the curse of dimensionality.
Colinearity and scaling when using k-means As CHL has already explained the use of center and scale to obtain standardized variables, I'll address collinearity: There is good reason to reduce collinear variables when clustering. Curse of Dimen
50,234
Gaussian kernel estimator as Nadaraya-Watson estimator?
The conditional mean is defined by: $$E(Y|X)\equiv\int y f(y|x) dy$$ Where $f(Y|X)$ is the conditional density. Using the product rule, you can show: $$f(y|x)=\frac{f(y,x)}{f(x)}$$ Substituting this back into the integral you get $$E(Y|X)\equiv\frac{\int y f(y,x) dy}{f(x)}$$ Which is of the form you seek, if you use the kernel density estimator.
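To spell out the final step: with the usual kernel density estimates $\hat{f}(y,x)=\frac{1}{n}\sum_{i=1}^{n}K_h(x-x_i)K_h(y-y_i)$ and $\hat{f}(x)=\frac{1}{n}\sum_{i=1}^{n}K_h(x-x_i)$, and since a symmetric kernel centred at $y_i$ satisfies $\int y\,K_h(y-y_i)\,dy=y_i$, the numerator becomes $\frac{1}{n}\sum_{i=1}^{n}K_h(x-x_i)\,y_i$, so $$\hat{E}(Y|X=x)=\frac{\sum_{i=1}^{n}K_h(x-x_i)\,y_i}{\sum_{i=1}^{n}K_h(x-x_i)},$$ which is exactly the Nadaraya-Watson estimator (with, e.g., a Gaussian kernel $K_h$).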
Gaussian kernel estimator as Nadaraya-Watson estimator?
The conditional mean is defined by: $$E(Y|X)\equiv\int y f(y|x) dy$$ Where $f(Y|X)$ is the conditional density. Using the product rule, you can show: $$f(y|x)=\frac{f(y,x)}{f(x)}$$ Substituting this
Gaussian kernel estimator as Nadaraya-Watson estimator? The conditional mean is defined by: $$E(Y|X)\equiv\int y f(y|x) dy$$ Where $f(Y|X)$ is the conditional density. Using the product rule, you can show: $$f(y|x)=\frac{f(y,x)}{f(x)}$$ Substituting this back into the integral you get $$E(Y|X)\equiv\frac{\int y f(y,x) dy}{f(x)}$$ Which is of the form you seek, if you use the kernel density estimator.
Gaussian kernel estimator as Nadaraya-Watson estimator? The conditional mean is defined by: $$E(Y|X)\equiv\int y f(y|x) dy$$ Where $f(Y|X)$ is the conditional density. Using the product rule, you can show: $$f(y|x)=\frac{f(y,x)}{f(x)}$$ Substituting this
50,235
Computing confidence intervals for prevalence for several types of infection
So you have a population each of whom can have zero or more conditions. To answer the question: How many hospital patients have A? It seems to me that the best you can do is take your favourite proportion estimator and offer it up with your favourite confidence interval. There are lots of choices, which will make a difference for very high or very low proportions. If you have such a situation, the estimator above may not be optimal. If you are interested in the population of just your hospital then you can, as SheldonCooper points out, dispense with the statistics altogether. I suspect however that you are interested in hospital patients more generally, so your standard errors and intervals might be interpreted relative to this population. In your suggested estimator the identity of the population will determine what 1-F is. Certainly hospital patients don't look like non-hospital patients with respect to the conditions you're counting, but that need not matter. Following Sheldon's second observation, it is probable that the conditions correlate. But as far as I can see this is only useful information if you are asking conditional questions, e.g. the prevalence of A among B sufferers. In probabilistic terms your question is about estimating marginals, and correlation information only tells you about estimating conditionals. If you were interested in these sorts of subgroups, you'd certainly want to model this information. You'd also want it if there were differential measurement errors or sample selection issues, etc. e.g. only getting tested for A if you have a B diagnosis... That might also make certain sample marginals problematic as estimates of population marginals. Thankfully, I don't know much about hospital populations, but I'd be willing to bet that there are some of these issues around. Finally, about reporting: If you in fact want to report confidence regions rather than condition-wise intervals, then again the correlation structure matters, and things get considerably trickier. I seem to remember that Agresti had a paper on simultaneous confidence intervals for multivariate Binomial proportions, which might be helpful for this approach.
Computing confidence intervals for prevalence for several types of infection
So you have a population each of whom can have zero or more conditions. To answer the question: How many hospital patients have A? It seems to me that the best you can do is take your favourite prop
Computing confidence intervals for prevalence for several types of infection So you have a population each of whom can have zero or more conditions. To answer the question: How many hospital patients have A? It seems to me that the best you can do is take your favourite proportion estimator and offer it up with your favourite confidence interval. There are lots of choices, which will make a difference for very high or very low proportions. If you have such a situation, the estimator above may not be optimal. If you are interested in the population of just your hospital then you can, as SheldonCooper points out, dispense with the statistics altogether. I suspect however that you are interested in hospital patients more generally, so your standard errors and intervals might be interpreted relative to this population. In your suggested estimator the identity of the population will determine what 1-F is. Certainly hospital patients don't look like non-hospital patients with respect to the conditions you're counting, but that need not matter. Following Sheldon's second observation, it is probable that the conditions correlate. But as far as I can see this is only useful information if you are asking conditional questions, e.g. the prevalence of A among B sufferers. In probabilistic terms your question is about estimating marginals, and correlation information only tells you about estimating conditionals. If you were interested in these sorts of subgroups, you'd certainly want to model this information. You'd also want it if there were differential measurement errors or sample selection issues, etc. e.g. only getting tested for A if you have a B diagnosis... That might also make certain sample marginals problematic as estimates of population marginals. Thankfully, I don't know much about hospital populations, but I'd be willing to bet that there are some of these issues around. Finally, about reporting: If you in fact want to report confidence regions rather than condition-wise intervals, then again the correlation structure matters, and things get considerably trickier. I seem to remember that Agresti had a paper on simultaneous confidence intervals for multivariate Binomial proportions, which might be helpful for this approach.
Computing confidence intervals for prevalence for several types of infection So you have a population each of whom can have zero or more conditions. To answer the question: How many hospital patients have A? It seems to me that the best you can do is take your favourite prop
50,236
Computing confidence intervals for prevalence for several types of infection
A few thoughts: As people have mentioned, if you have the entire hospital population worth of data, and all the questions you have are restricted to that hospital, you can dispense with a confidence interval entirely. However, assuming that's not the case, and you either have a subsample of the hospital, or want to talk about the hospital as a sample of the general population... You can probably ignore the dependency between infections. They're unlikely to be perfectly uncorrelated with each other, but the harder you look at the correlation between infections and the various and sundry other violations of independent happenings (basically, the risk for you is independent of my disease state), the more simple statistics in ID begin to break down. For something like this, you're probably okay. I'm pretty sure you can use the formula as stated. You're not pooling the results together, and as far as you've said, you're not going to be making any sort of comparisons between groups. If we're assuming they're independent, that's no less valid than independently estimating the prevalence any two other unrelated things in the same population. This isn't true if you want to start talking about joint prevalences or the like, but you seem to just want a table of Condition Prev (95% CI).
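For example, a short sketch in R that produces such a table from counts (the numbers are made up):
n <- 1500                                  # patients sampled
cases <- c(A = 120, B = 45, C = 210)       # patients with each infection
t(sapply(cases, function(k) {
  ci <- binom.test(k, n)$conf.int          # exact (Clopper-Pearson) 95% interval
  c(prevalence = k / n, lower = ci[1], upper = ci[2])
}))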
Computing confidence intervals for prevalence for several types of infection
A few thoughts: As people have mentioned, if you have the entire hospital population worth of data, and all the questions you have are restricted to that hospital, you can dispense with a confidence
Computing confidence intervals for prevalence for several types of infection A few thoughts: As people have mentioned, if you have the entire hospital population worth of data, and all the questions you have are restricted to that hospital, you can dispense with a confidence interval entirely. However, assuming that's not the case, and you either have a subsample of the hospital, or want to talk about the hospital as a sample of the general population... You can probably ignore the dependency between infections. They're unlikely to be perfectly uncorrelated with each other, but the harder you look at the correlation between infections and the various and sundry other violations of independent happenings (basically, the risk for you is independent of my disease state), the more simple statistics in ID begin to break down. For something like this, you're probably okay. I'm pretty sure you can use the formula as stated. You're not pooling the results together, and as far as you've said, you're not going to be making any sort of comparisons between groups. If we're assuming they're independent, that's no less valid than independently estimating the prevalence any two other unrelated things in the same population. This isn't true if you want to start talking about joint prevalences or the like, but you seem to just want a table of Condition Prev (95% CI).
Computing confidence intervals for prevalence for several types of infection A few thoughts: As people have mentioned, if you have the entire hospital population worth of data, and all the questions you have are restricted to that hospital, you can dispense with a confidence
50,237
Which non-parametric test can I use to identify significant interactions of independent variables?
Non-parametric tests are likely to be less powerful than parametric tests and thus require a larger sample size. This is annoying because if you had a large sample size, sample means would be approximately normally distributed by the central limit theorem, and you thus wouldn't need non-parametric tests. Look at generalized linear models, of which least squares and Poisson are special cases. I've never found a text that explains this particularly well; try talking to someone about it. Look at non-parametric methods if you feel like it, but I have a hunch that they won't help you much in this case unless you're using ordinal data or a large set of very bizarrely distributed data.
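For instance, a minimal sketch of testing an interaction in a generalized linear model (Poisson case, simulated data):
d <- data.frame(y = rpois(40, 5), a = gl(2, 20), b = gl(2, 10, 40))
fit0 <- glm(y ~ a + b, family = poisson, data = d)
fit1 <- glm(y ~ a * b, family = poisson, data = d)
anova(fit0, fit1, test = "Chisq")   # likelihood-ratio test of the a:b interaction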
Which non-parametric test can I use to identify significant interactions of independent variables?
Non-parametric tests are likely to be less powerful than parametric tests and thus require a larger sample size. This is annoying because if you had a large sample size, sample means would be approxim
Which non-parametric test can I use to identify significant interactions of independent variables? Non-parametric tests are likely to be less powerful than parametric tests and thus require a larger sample size. This is annoying because if you had a large sample size, sample means would be approximately normally distributed by the central limit theorem, and you thus wouldn't need non-parametric tests. Look at generalized linear models, of which least squares and Poisson are special cases. I've never found a text that explains this particularly well; try talking to someone about it. Look at non-parametric methods if you feel like it, but I have a hunch that they won't help you much in this case unless you're using ordinal data or a large set of very bizarrely distributed data.
Which non-parametric test can I use to identify significant interactions of independent variables? Non-parametric tests are likely to be less powerful than parametric tests and thus require a larger sample size. This is annoying because if you had a large sample size, sample means would be approxim
50,238
Which non-parametric test can I use to identify significant interactions of independent variables?
I had the same questions and did some research. I came across some texts that seem to offer solutions but I have to admit that I did not seriously apply them until now. Feller, A., Holmes, C.C., 2009. Beyond toplines: Heterogeneous treatment effects in randomized experiments. Leys, C., Schumann, S., 2010. A nonparametric method to analyze interactions: The adjusted rank transform test. Journal of Experimental Social Psychology 46 (4), 684–688. Sawilowsky, S.S., 1990. Nonparametric tests of interaction in experimental design. Review of Educational Research 60 (1), 91–126. At least for 1. it seems that you rely on a sufficiently large sample size. They analyze a dataset which has between 38,00 and 190,000 observations per treatment. If you are working with experimental data from a laboratory and are from the behavioral field, this is probably not very helpful. However, I find their analysis of interaction effects, especially their graphical interpretation, very vivid and intuitive. The 2. text discusses one of the approaches that are discussed in 3. It has been a while since I read the latter paper, but if I remember correctly, the author presents some approaches to analyze interactions non-parametrically in a practical way. As probabilityislogic said, people often criticize that non-parametric tests of interactions lack power. However, Sawilowsky (1990) states that "The review shows that these new techniques are robust, powerful, versatile, and easy to compute." On the other hand, the text is quite old ;) Other approaches, of which I only know the name, are Finite Mixture Models and Latent Class Regression Models. One of the two is a special form of the other, but I do not remember which one. Hope this helps.
Which non-parametric test can I use to identify significant interactions of independent variables?
I had the same questions and made some research. I came across some texts that seem to offer solutions but I ahve to admit that I did not seriously apply them until now. Feller, A., Holmes, C.C., 20
Which non-parametric test can I use to identify significant interactions of independent variables? I had the same questions and made some research. I came across some texts that seem to offer solutions but I ahve to admit that I did not seriously apply them until now. Feller, A., Holmes, C.C., 2009. Beyond toplines: Heterogeneous treatment effects in random-ized experiments. Leys, C., Schumann, S., 2010. A nonparametric method to analyze interactions: The adjusted rank transform test. Journal of Experimental Social Psychology 46 (4), 684–688. Sawilowsky, S.S., 1990. Nonparametric tests of interaction in experimental design. Review of Educational Research 60 (1), 91–126. At least for 1. it seems that you rely on a sufficiently large sample size. They analyze a dataset which has between 38,00 and 190,000 observations per treatment. If you are working with experimental data from a laboratory and are from the behavioral field, this is probably not very helpful. However, I find their analysis of interaction effects, especially their graphical interpretation, very vivid and intuitive. The 2. text discusses one of the approaches that are discussed in 3. It has been a while since I read the latter paper, but if I remember correctly, the author presents some approaches to analyze interactions non-parametrically in a practical way. As probabilityislogic said, people often criticize that non-parametric tests of interactions lack power. However, Sawilowsky (1990) states that "The review shows that these new techniques are robust, powerful, versatile, and easy to compute." On the other hand, the text is quite old ;) Other approaches, of which I only know the name, are Finite Mixture Models and Latent Class Regression Models. One of the two is a special form of the other, but I do not remember which one. Hope this helps.
Which non-parametric test can I use to identify significant interactions of independent variables? I had the same questions and made some research. I came across some texts that seem to offer solutions but I ahve to admit that I did not seriously apply them until now. Feller, A., Holmes, C.C., 20
50,239
How to make a combination (aggregation) of quantile forecast?
Short answer. The problem you mention is well studied by Granger C.W.J. with co-authors, and known as the forecasts combination (or pooling) problem. The general idea is to choose the loss function criterion and the parameters (may be time dependent) that minimize the latter. Below I put some references that may be useful (only publicly available, look for the original works in references after the text). K.F.Wallis Combining Density and Interval Forecasts: A Modest Proposal // Oxford bulletin of economics and statistics, 67, supplement (2005) 0305-9049 (provides a general idea of how to combine interval forecasts, though there are no details on how to choose the weights) Allan Timmermann Forecast combinations. (a survey on different aspects of the forecast combinations by one of the co-editors of Handbook of economic forecasting that I would like to study myself) Hoping for a longer answer from the community.
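As a toy illustration of that idea, one can pick the weight for combining two quantile forecasts by minimising the pinball (quantile) loss on past data; everything below is simulated:
pinball <- function(y, q, tau = 0.9) mean(ifelse(y >= q, tau * (y - q), (1 - tau) * (q - y)))
y  <- rlnorm(200)                                # realised values
q1 <- quantile(y, 0.85) + rnorm(200, sd = 0.1)   # forecaster 1's 90% quantiles
q2 <- quantile(y, 0.95) + rnorm(200, sd = 0.1)   # forecaster 2's 90% quantiles
w  <- optimize(function(w) pinball(y, w * q1 + (1 - w) * q2), c(0, 1))$minimum
w                                                # weight given to forecaster 1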
How to make a combination (aggregation) of quantile forecast?
Short answer. The problem you mention is well studied by Granger C.W.J. with co-authors, and known as the forecasts combination (or pooling) problem. The general idea is to choose the loss function cr
How to make a combination (aggregation) of quantile forecast? Short answer. The problem you mention is well studied by Granger C.W.J. with co-authors, and known as the forecasts combination (or pooling) problem. The general idea is to choose the loss function criterion and the parameters (may be time dependent) that minimize the latter. Below I put some references that may be useful (only publicly available, look for the original works in references after the text). K.F.Wallis Combining Density and Interval Forecasts: A Modest Proposal // Oxford bulletin of economics and statistics, 67, supplement (2005) 0305-9049 (provides a general idea of how to combine interval forecasts, though there is no details on how to choose the weights) Allan Timmermann Forecast combinations. (a survey on different aspects of the forecast combinations by one of the co-editors of Handbook of economic forecasting that I would like to study myself) Hoping for the longer answer from the community.
How to make a combination (aggregation) of quantile forecast? Short answer. The problem you mention is well studied by Granger C.W.J. with co-authors, and known as the forecasts combination (or pooling) problem. The general idea is to choose the loss function cr
50,240
Multistage sampling in R
Yeah, the sampling package handles this; you can do cluster sampling, stratified sampling, or a few others: http://cran.r-project.org/web/packages/sampling/sampling.pdf It can then also handle a lot of the special variance estimation techniques you'll have to do for any metric you calculate from the complex design. However, I prefer Lumley's survey package for that.
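A toy sketch of both packages together (the data frame and variable names are stand-ins for the real hospital data):
library(sampling)
library(survey)
hosp <- data.frame(CareType = rep(c("Acute", "Palliative", "Rehab"), times = c(200, 50, 100)),
                   LengthOfStay = rexp(350, rate = 1/5))
hosp <- hosp[order(hosp$CareType), ]     # sampling::strata expects data sorted by the strata variable
sel  <- strata(hosp, stratanames = "CareType", size = c(25, 10, 25), method = "srswor")
smp  <- getdata(hosp, sel)               # selected rows plus inclusion probabilities
des  <- svydesign(ids = ~1, strata = ~CareType, probs = ~Prob, data = smp)
svymean(~LengthOfStay, des)              # design-based estimate and standard error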
Multistage sampling in R
Yeah, the sampling package handles this, you can do cluster sampling or stratified or a few others: http://cran.r-project.org/web/packages/sampling/sampling.pdf It can then also handle a lot of the s
Multistage sampling in R Yeah, the sampling package handles this, you can do cluster sampling or stratified or a few others: http://cran.r-project.org/web/packages/sampling/sampling.pdf It can then also handle a lot of the special variance estimation techniques you'll have to do for any metric you calculate from the complex design. However, I prefer Lumley's survey package for that.
Multistage sampling in R Yeah, the sampling package handles this, you can do cluster sampling or stratified or a few others: http://cran.r-project.org/web/packages/sampling/sampling.pdf It can then also handle a lot of the s
50,241
Multistage sampling in R
I think no extra package is needed for the task, just use the basic sample function, e.g.: Get a sample from the first group (sample row indices, since calling sample() directly on a data frame would sample its columns):
idx <- sample(which(data$"Care Type" == "Acute Care"), size = 25)
smp <- data[idx, ]
Take the chosen IDs out of the orig. dataset (making a backup could be a good idea before that):
data <- data[!(data$pat_id %in% smp$pat_id), ]
Get a sample from the second group in the rest of the dataset and concatenate it to the sample (the care-type value here is a placeholder):
idx <- sample(which(data$"Care Type" == "<second care type>"), size = 25)
smp <- rbind(smp, data[idx, ])
Repeat for each segment:
data <- data[!(data$pat_id %in% smp$pat_id), ]
smp <- rbind(smp, data[sample(which(data$"Care Type" == "?"), size = ?), ])
Sorry, not tested, but I think the point can be seen. And also: I am sure the above code could be improved and minified.
Multistage sampling in R
I think no extra package is needed for the task, just use the basic sample function, e.g.: Get sample from the first group: sample <- sample(data[data$"Care Type" == "Acute Care",], size = 25) Get th
Multistage sampling in R I think no extra package is needed for the task, just use the basic sample function, e.g.: Get sample from the first group: sample <- sample(data[data$"Care Type" == "Acute Care",], size = 25) Get the choosen IDs out of the orig. dataset (making a backup could be a good idea before that): data <- data[setdiff(data$pat_id, sample_pat_id),] Get sample from second group in the rest of the dataset and concatenate to sample: sample <- rbind(sample, sample(data[(data$"Care Type" == "Acute Care"),], size = 25) Repeat for each segment: data <- data[setdiff(data$pat_id, sample_pat_id),] sample <- rbind(sample, sample(data[(data$"Care Type" == "?"),], size = ?) Sorry, not tested, but I think the point can be seen. And also: I am sure the above code could be improved and minified.
Multistage sampling in R I think no extra package is needed for the task, just use the basic sample function, e.g.: Get sample from the first group: sample <- sample(data[data$"Care Type" == "Acute Care",], size = 25) Get th
50,242
Multistage sampling in R
What I would do is provide prob argument with weights for each data point based on number of levels in your variable. Example: df <- data.frame(oks = sample(100), grp = c(rep("trt1", times = 30), rep("trt2", times = 70))) > head(df) oks grp 1 40 trt1 2 29 trt1 3 12 trt1 4 25 trt1 5 19 trt1 6 45 trt1 Obviously: > (df.prob <- table(df$grp)) trt1 trt2 30 70 You pass a vector of probabilities to sample. You can sort your data.frame by your desired variable (and use the adaptation of solution provided here), or you could assign weights to individual rows based on the level of the treatment (not presented here, but shouldn't be too hard to recode). df[sample(x = nrow(df), size = 30, prob = rep(df.prob/nrow(df), df.prob)), ] # sample row positions 1:nrow(df) so the weights line up with the rows; / by nrow(df) to get appropriate weight per treatment This is the approximate ratio you're looking for, right? > table(df[sample(x = nrow(df), size = 30, prob = rep(df.prob/nrow(df), df.prob)), ]$grp) trt1 trt2 12 18
Multistage sampling in R
What I would do is provide prob argument with weights for each data point based on number of levels in your variable. Example: df <- data.frame(oks = sample(100), grp = c(rep("trt1", times = 3
Multistage sampling in R What I would do is provide prob argument with weights for each data point based on number of levels in your variable. Example: df <- data.frame(oks = sample(100), grp = c(rep("trt1", times = 30), rep("trt2", times = 70))) > head(df) oks grp 1 40 trt1 2 29 trt1 3 12 trt1 4 25 trt1 5 19 trt1 6 45 trt1 Obviously: > (df.prob <- table(df$grp)) trt1 trt2 30 70 You pass a vector of probabilities to sample. You can sort your data.frame by your desired variable (and use the adaptation of solution provided here), or you could assign weights to individual rows based on the level of the treatment (not presented here, but shouldn't be too hard to recode). df[sample(x = df$oks, size = 30, prob = rep(df.prob/nrow(df), df.prob)), ] # / by nrow(df) to get appropriate weight per treatment This is the approximate ratio you're looking for, right? > table(df[sample(x = df$oks, size = 30, prob = rep(df.prob/nrow(df), df.prob)), ]$grp) trt1 trt2 12 18
Multistage sampling in R What I would do is provide prob argument with weights for each data point based on number of levels in your variable. Example: df <- data.frame(oks = sample(100), grp = c(rep("trt1", times = 3
50,243
Interaction between ordinal and categorical factor
I'd stick with logistic or probit regression, enter both factors as covariates, but enter the ordinal factor as if it were continuous. To test for interaction, do a likelihood-ratio test comparing models with and without an interaction between the two factors. This test will have a single degree of freedom and therefore retain good power. After using this to decide whether or not you want to include an interaction between the two factors, you can then move on to decide how best to code the 5-level factor in your final model. It could make sense to keep treating it as if it were continuous, or you might wish to code it as four dummy (indicator) variables, to collapse it into fewer levels, or to use some other type of contrast. The choice probably depends on the scientific meaning of the model and its intended use, as well as the fit of the various models.
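A sketch of that test with made-up data ('resp' a binary outcome, 'ord5' the 5-level factor entered as a numeric score, 'grp' a two-level factor):
d <- data.frame(resp = rbinom(200, 1, 0.4),
                ord5 = sample(1:5, 200, replace = TRUE),
                grp  = gl(2, 1, 200))
m0 <- glm(resp ~ ord5 + grp, family = binomial, data = d)
m1 <- glm(resp ~ ord5 * grp, family = binomial, data = d)
anova(m0, m1, test = "Chisq")   # single-degree-of-freedom likelihood-ratio test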
Interaction between ordinal and categorical factor
I'd stick with logistic or probit regression, enter both factors as covariates, but enter the ordinal factor as if it was continuous. To test for interaction, do a likelihood-ratio test comparing mode
Interaction between ordinal and categorical factor I'd stick with logistic or probit regression, enter both factors as covariates, but enter the ordinal factor as if it was continuous. To test for interaction, do a likelihood-ratio test comparing models with and without an interaction between the two factors. This test will have a single degree of freedom and therefore retain good power. After using this to decide whether or not you want to include an interaction between the two factors, you can then move on to decide how best to code the 5-level factor in your final model. It could make sense to keep treating it as if it were continuous, or you might wish to code it as four dummy (indicator) variables, or you choose to collapse it into fewer levels, or use some other type of contrast. The choice probably depends on the scientific meaning of the model and its intended use, as well as the fit of the various models.
Interaction between ordinal and categorical factor I'd stick with logistic or probit regression, enter both factors as covariates, but enter the ordinal factor as if it was continuous. To test for interaction, do a likelihood-ratio test comparing mode
50,244
How to setup a laboratory experiment in Ecological Research under high natural variability
Whether you have a reasonable chance of obtaining (i.e. power to obtain) reliable conclusions depends on how big the effects are that you wish to be able to detect. With such small numbers they'll have to be very large. Clearly having fewer treatments and more replications per treatment will give you at least a bit more power, or equivalently the ability to detect somewhat smaller effects with the same power. To put some rough numbers on that, let's ignore the soil types for simplicity (including them will make things more gloomy) and do some standard power calculations for 2-sample t-tests. If you compare one treatment vs control with 10 in each group (i.e. 20 in total), you'll have 80% power to detect a difference between treatment and control of 1.25 standard deviations (SDs). With two treatments + control, 6 in each group (18 in total), you have 80% power to detect a difference of 1.4 SDs between both treatments combined and control, or 1.6 SDs between either treatment by itself and control (or between the two treatments). It may well be sensible to use a log-transform (or perhaps some other transform) of your data prior to analysis, in which case the SDs are the SDs of the transformed variables. In the social sciences, an effect of around 0.8 SDs or over would often be considered "large", and designing a study to have decent power only to detect a bigger effect than this might be politely described as "optimistic". But remember that the SD here is the SD of the residual, unexplained variation. You can reduce this by either (1) making your experimental units more uniform or (2) explaining more of the variation by other means. The lower the uncontrolled variability the higher the power you'll have to detect effects due to the factors open to experimental manipulation. You say "variability in field measurements can be as high as 25% within one group". But this is a laboratory experiment; is there a reason the variability need be this high in the lab? Can you homogenise your soil before you start the experiment? I guess this may destroy the soil structure though? Can you take baseline measurements before the treatments are applied? Using these to explain some of the innate variability between units by either analysing change since baseline or (better) adding them to the model as covariates (i.e. ANCOVA) may help a lot. Sorry I haven't mentioned G*Power 3 but I've never heard of it and from a quick look at the link you gave it looks considerably more sophisticated, and therefore complicated, than is necessary here.
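This sort of calculation can be reproduced with base R's power.t.test (with sd = 1, so the reported delta is in SD units); the exact figures depend on the assumptions made and will differ a little from the rounded values quoted above:
power.t.test(n = 10, power = 0.8)   # smallest detectable difference with 10 per group
power.t.test(n = 6,  power = 0.8)   # with 6 per group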
How to setup a laboratory experiment in Ecological Research under high natural variability
Whether you have a reasonable chance of obtaining (i.e. power to obtain) reliable conclusions depends on how big the effects are you wish to be able to detect. With such small numbers they'll have to
How to setup a laboratory experiment in Ecological Research under high natural variability Whether you have a reasonable chance of obtaining (i.e. power to obtain) reliable conclusions depends on how big the effects are you wish to be able to detect. With such small numbers they'll have to be very large. Clearly having fewer treatments and more replications per treatment will give you at least a bit more power, or equivalently the aiblity to detect somewhat smaller effects with the same power. To put some rough numbers on that, let's ignore the soil types for simplicity (including them will make things more gloomy) and do some standard power calculations two-sample for 2-sample t-tests. If you compare one treatment vs control with 10 in each group (i.e. 20 in total) you'll have 80% power to detect a difference between treatment and control of 1.25 standard deviations (SDs). With two treatments + control, 6 in each group (18 in total), you have 80% power to detect a difference of 1.4 SDs between both treatments combined and control, or 1.6 SDs between either treatment by itself and control (or between the two treatments). It may well be sensible use a log-transform (or perhaps some other transform) your data prior to analysis, in which case the SDs are the SDs of the transformed variables. In the social sciences, an effect of around 0.8 SDs or over would often be considered "large", and designing a study to detect to have decent power only to detect a bigger effect than this might be politely described as "optimistic". But remember that the SD here is the SD of the residual, unexplained variation. You can reduce this by either (1) making your experimental units more uniform or (2) explaining more of the variation by other means. The lower the uncontrolled variability the higher the power you'll have to detect effects due to the factors open to experimental manipulation. You say "variability in field measurements can be as high as 25% within one group". But this is a laboratory experiment; is there a reason the variability need be this high in the lab? Can you homogenise your soil before you start the experiment? I guess this may destroy the soil structure though? Can you take baseline measurements before the treatments are applied? Using these to explain some of the inate variability between units by either analysing change since baseline or (better) adding them to the model as covariates (.e. ANCOVA) may help a lot. Sorry I haven't mentioned G*Power 3 but i've never heard of it and from a quick look the link you gave it looks considerably more sophisticated, and therefore complicated, than is necessary here.
How to setup a laboratory experiment in Ecological Research under high natural variability Whether you have a reasonable chance of obtaining (i.e. power to obtain) reliable conclusions depends on how big the effects are you wish to be able to detect. With such small numbers they'll have to
50,245
Automatic test measuring dissimilarity between two time series
There are many different distance measures. For starters, there's always the correlation. You can look at the mean square error. In R, you can see the algorithm for time series in Rob Hyndman's ftsa package (see the error function). See Liao (2005) for a nice short survey of time series similarity measures, including Euclidean distance, root mean square distance, Minkowski distance, Pearson's correlation, dynamic time warping distance, Kullback–Leibler distance, symmetric Chernoff information divergence, and cross-correlation. T. Warren Liao, "Clustering of time series data—a survey", Pattern Recognition, 2005
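To make a couple of these concrete, here is a small R sketch (illustrative simulated series only) computing a few of the simpler measures between two equal-length series:
set.seed(1)
idx <- 1:100
a <- sin(idx / 5) + rnorm(100, sd = 0.2)
b <- sin(idx / 5 + 0.5) + rnorm(100, sd = 0.2)
sqrt(sum((a - b)^2))                    # Euclidean distance
mean((a - b)^2)                         # mean squared error
cor(a, b)                               # Pearson correlation (a similarity, not a distance)
ccf(a, b, lag.max = 10, plot = FALSE)   # cross-correlations at several lags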
Automatic test measuring dissimilarity between two time series
There are many different distance measures. For starters, there's always the correlation. You can look at the mean square error. In R, you can see the algorithm for time series in Rob Hyndman's ft
Automatic test measuring dissimilarity between two time series There are many different distance measures. For starters, there's always the correlation. You can look at the mean square error. In R, you can see the algorithm for time series in Rob Hyndman's ftsa package (see the error function). See Liao (2005) for a nice short survey of time series similarity measures, including euclidean distance, root mean square distance, mikowski distance, pearson's correlation, dynamic time warping distance, kullback-liebler disance, symmetric Chernoff information divergence, and cross-correlation. T. Warren Liao, "Clustering of time series data—a survey" Pattern Recognition, 2005
Automatic test measuring dissimilarity between two time series There are many different distance measures. For starters, there's always the correlation. You can look at the mean square error. In R, you can see the algorithm for time series in Rob Hyndman's ft
50,246
Can I use Synthetic Control Method for Comparative Case Studies with survey data?
[Caveat: I have not read the paper so the below may be nonsense for all I know ...] Based on the summary of the R package I would venture to guess that you could use the proposed methodology for the survey data provided the following conditions are met: You have survey data from control groups during pre-intervention periods. These control groups need not be identical to the treatment groups. The data you have is time series data. Provided points 1 and 2 are met, my best intuition as to how the method works is as follows: First, construct a 'hypothetical' (synthetic in their words) control group that behaves as similarly as possible to the treatment group. The hypothetical group is constructed by taking a convex combination of the control group data you have. As an example, suppose that you want to measure student performance on math. Your control groups could be different sections whereas the treatment group is one specific section. You construct the hypothetical control group such that the weighted (with the weights summing to 1 and hence convex) average of the scores of the control group sections is as close as possible to the scores of the treatment group before the intervention (i.e., minimize the MSPE, the Mean Squared Prediction Error). Second, extrapolate the hypothetical group's scores into the post-intervention period using the parameter estimates from step 1. Since the hypothetical group has been constructed to be identical to the treatment group pre-intervention, the post-intervention scores of the hypothetical group provide an appropriate counterfactual against which to compare the treatment group's post-intervention scores and assess the effectiveness of the intervention.
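A toy R sketch of that first step, finding convex weights for the control units that minimize the pre-intervention MSPE (all data simulated and hypothetical; in practice you would likely use the Synth package's dataprep()/synth() functions rather than rolling your own):
set.seed(1)
T0 <- 20                                    # pre-intervention periods
treated  <- cumsum(rnorm(T0, 0.5))          # treated unit, pre-period outcomes
controls <- sapply(1:5, function(i) cumsum(rnorm(T0, runif(1))))  # 5 control units
mspe <- function(par) {                     # convex weights via a softmax of free parameters
  w <- exp(par) / sum(exp(par))
  mean((treated - controls %*% w)^2)
}
opt <- optim(rep(0, 5), mspe)
w <- exp(opt$par) / sum(exp(opt$par))
round(w, 3)                                 # weights defining the synthetic control
synthetic_pre <- controls %*% w             # apply the same weights to post-period data later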
Can I use Synthetic Control Method for Comparative Case Studies with survey data?
[Caveat: I have not read the paper so the below may be nonsense for all I know ...] Based on the summary of the R package I would venture to guess that you could use the proposed methodology for the s
Can I use Synthetic Control Method for Comparative Case Studies with survey data? [Caveat: I have not read the paper so the below may be nonsense for all I know ...] Based on the summary of the R package I would venture to guess that you could use the proposed methodology for the survey data provided the following conditions are met: You have survey data from control groups during pre-intervention periods. These control groups need not be identical to the treatment groups. The data you have is time series data. Provided points 1 and 2 are met my best intuition as to how the method works is as follows: First, construct a 'hypothetical' (synthetic in their words) control group that behaves as identical as possible as the treatment group. The hypothetical group is constructed by taking a convex combination of the control group data you have. As an example, suppose that you want to measure student performance on math. Your control groups could be different sections whereas the treatment group is one specific section. You construct the hypothetical control group such that the weighted (with the weights summing to 1 and hence convex) average of the scores of the control group sections is as close as possible to the scores of the treatment group before the intervention (i.e., use MSPE which is Mean Squared Prediction Error). Second, extrapolate the hypothetical group's scores into the post-intervention period using the parameter estimates from step 1. Since, the hypothetical group has been constructed to be identical to the treatment group pre-intervention, the post-intervention scores of the hypothetical group provides an appropriate counter-factual evidence to the treatment group's post-intervention scores to assess the effectiveness of the intervention.
Can I use Synthetic Control Method for Comparative Case Studies with survey data? [Caveat: I have not read the paper so the below may be nonsense for all I know ...] Based on the summary of the R package I would venture to guess that you could use the proposed methodology for the s
50,247
Dependent variable selection for loglinear segmented regression in time-series analysis of rare events
I think you're right to conclude that there's little hope of finding a 'statistically significant' result from 4 wards over 12 months. Of course, that doesn't mean the control measures don't work — just that your sample size is far too small (and the variability too large) to have much chance of finding evidence that it's worked. I'd guess that larger studies have been done and at least some evidence exists about what control measures work, and you should assume the same measures will work in your hospital too unless you have good reason for thinking it's different. Within your one hospital, rather than looking at the p-value(s) as the 'bottom line', I think you'd be better off looking at this as performance monitoring. A quick bit of googling found a 2004 Report from Key Indicators Joint Working Group of the Hospital Infection Society, which may be one place to start.
Dependent variable selection for loglinear segmented regression in time-series analysis of rare even
I think you're right to conclude that there's little hope of finding a 'statistically significant' result from 4 wards over 12 months. Of course, that doesn't mean the control measures don't work — ju
Dependent variable selection for loglinear segmented regression in time-series analysis of rare events I think you're right to conclude that there's little hope of finding a 'statistically significant' result from 4 wards over 12 months. Of course, that doesn't mean the control measures don't work — just that your sample size is far too small (and the variability too large) to have much chance of finding evidence that it's worked. I'd guess that larger studies have been done and at least some evidence exists about what control measures work, and you should assume the same measures will work in your hospital too unless you have good reason for thinking it's different. Within your one hospital, rather than looking a the p-value(s) as the 'bottom line' i think you'd be better looking at this as performance monitoring. I quick bit of googling found a 2004 Report from Key Indicators Joint Working Group of the Hospital Infection Society, which may be one place to start.
Dependent variable selection for loglinear segmented regression in time-series analysis of rare even I think you're right to conclude that there's little hope of finding a 'statistically significant' result from 4 wards over 12 months. Of course, that doesn't mean the control measures don't work — ju
50,248
How can you approximate the number of trials to success given a particular Pr(Success)?
If I understand your question correctly you want to compute the quantiles for the "number of failures before the first success" given that $p=\frac{1}{104}$. The distribution you should be looking at is the negative binomial distribution. The wiki describes the negative binomial as: In probability theory and statistics, the negative binomial distribution is a discrete probability distribution of the number of successes in a sequence of Bernoulli trials before a specified (non-random) number r of failures occurs. Just inverting the interpretation of successes and failures, with a setting of r=1, would accomplish what you want. The distribution with r=1 is also called the geometric distribution. You could then use this discrete distribution to compute the quantiles. PS: I do not know R.
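Since the answer notes it doesn't cover R, here is a small sketch (my addition) using R's geometric distribution, which counts failures before the first success:
p <- 1 / 104
qgeom(c(0.25, 0.50, 0.75, 0.95), prob = p)       # quantiles of failures before the first success
qgeom(c(0.25, 0.50, 0.75, 0.95), prob = p) + 1   # quantiles of the trial number of the first success
pgeom(103, prob = p)                             # P(first success within the first 104 trials)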
How can you approximate the number of trials to success given a particular Pr(Success)?
If I understand your question correctly you want to compute the quantiles for the "No of failures before the first success" given that $p=\frac{1}{104}$. The distribution you should be looking at is t
How can you approximate the number of trials to success given a particular Pr(Success)? If I understand your question correctly you want to compute the quantiles for the "No of failures before the first success" given that $p=\frac{1}{104}$. The distribution you should be looking at is the negative binomial distribution. The wiki discusses the negative binomial as: In probability theory and statistics, the negative binomial distribution is a discrete probability distribution of the number of successes in a sequence of Bernoulli trials before a specified (non-random) number r of failures occurs. Just invert the interpretation of success and failures with a setting of r=1 would accomplish what you want. The distribution with r=1 is also called the geometric distribution. You could then use the discrete distribution to compute the quantiles. PS: I do not know R.
How can you approximate the number of trials to success given a particular Pr(Success)? If I understand your question correctly you want to compute the quantiles for the "No of failures before the first success" given that $p=\frac{1}{104}$. The distribution you should be looking at is t
50,249
How to limit my input data for Jaccard item-item similarity calculation?
I'm confused: shouldn't you only need the 7900^2 item similarities, for which you use ratings from all users, which is still quite sparse? UPDATE I still think there's a more efficient way to do this, but maybe I'm just being dense. Specifically, consider item A and item B. For item A, generate a U-dimensional vector of 0's and 1's, where U is the number of users in your data set, and there's a 1 in dimension i if and only if user i rated item A. Do the same thing for item B. Then you can easily generate the AB, A and B terms for your equation from these vectors. Importantly, these vectors are very sparse, so they can produce a very small data set if encoded properly. (1) Iterate over the item IDs to generate their cross product: (ItemAID, ItemBID). (2) Map each pair to this n-tuple: (ItemAID, ItemBID, ItemAVector, ItemBVector). (3) Reduce each n-tuple to your similarity measure: (ItemAID, ItemBID, SimilarityMetric). If you set up a cache of the ItemXVectors at the start, this computation should be very fast.
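A toy R sketch of the per-pair computation (simulated 0/1 user-by-item matrix; all names illustrative):
set.seed(1)
U <- 1000; n_items <- 50
M <- matrix(rbinom(U * n_items, 1, 0.05), nrow = U)   # users x items, 1 = "user rated item"
jaccard <- function(a, b) {
  ab <- sum(a & b)                  # users who rated both items (the AB term)
  ab / (sum(a) + sum(b) - ab)       # |A intersect B| / |A union B|
}
jaccard(M[, 1], M[, 2])             # similarity of items 1 and 2
# all pairwise similarities (fine at this toy scale)
S <- outer(1:n_items, 1:n_items, Vectorize(function(i, j) jaccard(M[, i], M[, j])))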
How to limit my input data for Jaccard item-item similarity calculation?
I'm confused: shouldn't you only need the 7900^2 item similarities, for which you use ratings from all users, which is still quite sparse? UPDATE I still think there's a more efficient way to do this,
How to limit my input data for Jaccard item-item similarity calculation? I'm confused: shouldn't you only need the 7900^2 item similarities, for which you use ratings from all users, which is still quite sparse? UPDATE I still think there's a more efficient way to do this, but maybe I'm just being dense. Specifically, consider item A and item B. For item A, generate a U-dimensional vector of 0's and 1's, where U is the number of users in your data set, and there's a 1 in dimension i if and only if user i rated item A. Do the same thing for item B. Then you can easily generate the AB, A and B terms for your equation from these vectors. Importantly, these vectors are very sparse, so they can produce a very small data set if encoded properly. Iterate over the item ID's to generate their cross product: (ItemAID, ItemBID) Map this pair to this n-tuple: (ItemAID, ItemBID, ItemAVector, ItemBVector) Reduce this n-tuple to your similarity measure: (ItemAID,ItemBID,SimilarityMetric) If you set up a cache of the ItemXVector's at the start, this computation should be very fast.
How to limit my input data for Jaccard item-item similarity calculation? I'm confused: shouldn't you only need the 7900^2 item similarities, for which you use ratings from all users, which is still quite sparse? UPDATE I still think there's a more efficient way to do this,
50,250
How to limit my input data for Jaccard item-item similarity calculation?
I've solved a similar problem with MinHash, which is specifically designed to approximate Jaccard distance. The idea is simple: using MinHash probabilistic features, you group your data into smaller groups (with the same hash(es)) and then evaluate pairwise distances inside each group (a kind of block structure of the matrix). The final answer is not exact, but you can control how close it is to exact by changing the depth and the number of hashes.
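A toy R illustration of the MinHash idea (not production code; permutation hashes, no banding step): the share of matching signatures estimates the Jaccard similarity, and equal signatures can be used to form candidate groups.
set.seed(1)
U <- 1000; n_items <- 20; K <- 100                    # users, items, number of hash functions
M <- matrix(rbinom(U * n_items, 1, 0.05), nrow = U)   # 1 = "user rated item"
perms <- replicate(K, sample(U))                      # K random permutations of user IDs
minhash <- function(item) apply(perms, 2, function(p) min(p[M[, item] == 1]))
sig <- sapply(1:n_items, minhash)                     # K x n_items signature matrix
mean(sig[, 1] == sig[, 2])                            # estimated Jaccard similarity of items 1 and 2
ab <- sum(M[, 1] & M[, 2])
ab / (sum(M[, 1]) + sum(M[, 2]) - ab)                 # exact Jaccard, for comparison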
How to limit my input data for Jaccard item-item similarity calculation?
I've solved similar problem with MinHash which is specifically designed to approximate Jaccard distance. Idea is simple using MinHash probabilistic features you group your data into smaller groups (wi
How to limit my input data for Jaccard item-item similarity calculation? I've solved similar problem with MinHash which is specifically designed to approximate Jaccard distance. Idea is simple using MinHash probabilistic features you group your data into smaller groups (with same hash(s)) and then evalaute pairwise distance inside group (kind of block structure of matrix). The final answer is not exact but you can control how close it to exact by changing depth and amount of hashes.
How to limit my input data for Jaccard item-item similarity calculation? I've solved similar problem with MinHash which is specifically designed to approximate Jaccard distance. Idea is simple using MinHash probabilistic features you group your data into smaller groups (wi
50,251
Birthday paradox for non-uniform probabilities
Approach 1 Let's assume that $d$ is large and the distribution of birthdays on day $i$ can be modelled/approximated as independent Poisson variables. Each day has a frequency of $\lambda_i = n \cdot p_i$ birthdays, and the probability of no double birthday on day $i$ is $$P(X_i \leq 1) = e^{-\lambda_i}(1+ \lambda_i)$$ The probability of no double birthday on all days is $$\begin{array}{rcl}\prod_{i=1}^d e^{-\lambda_i}(1+ \lambda_i) &=& e^{-n} \prod_{i=1}^d (1+ p_i n) \\ & \approx & \left(1-n+0.5n^2\right) \left( 1 + n \sum_{i=1}^d p_i + n^2 \sum_{i < j} p_i p_j \right) \\ & = & \left(1-n+0.5n^2\right) \left( 1 + n + 0.5 n^2 \left[ 1- \sum_{i=1}^d p_i^2 \right]\right) \\ & \approx & 1 - 0.5 n^2 \sum_{ i = 1}^d p_i^2 \\ \end{array}$$ and setting that equal to $0.5$ leads to $$n^2 \sum_{ i = 1}^d p_i^2 = 1$$ or $$n = \frac{1}{ \sqrt{\sum_{ i = 1}^d p_i^2 }}$$ Approach 2 We can convert this into a waiting-time problem and consider adding birthdays until there is a double one. The probability of at least one double birthday among $n$ birthdays is then 1 minus the probability that we have to wait more than $n$ birthdays until we get a double birthday. If the days have equal probabilities then the probability of 'hitting a double birthday' increases linearly as more and more birthdays are added, $$P(\text{hit with birthday $k+1$} \mid \text{no hit yet}) = k/d$$ and the probability of no hit among the first $k+1$ birthdays is $$P(\text{no hit among the first $k+1$}) = \prod_{i=1}^k (1 - i/d)$$ so that $$\log P(\text{no hit among the first $k+1$}) = \sum_{i=1}^k \log(1 - i/d) \approx \int_0^{k} \log(1-x/d) dx = (k - d)\log(1-k/d) - k \approx -\frac{k^2}{2d}$$ In your case $d=100$ you will get the $0.5$ probability for $k \approx 11.77$, which seems close to the asymptote in your graph. In the case of equal probabilities of a birthday on each day, this integral $\int_0^{k} \log(1-x/d) dx$ gives the log of the probability of no double birthday. With unequal probabilities, the path $ \log(1-x/d)$ will not be fixed and will be stochastic instead. That may cause discrepancies. Possibly this problem can be solved as a random walk with drift, where at each step there is a probability of ending the walk, based on the position of the walk.
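A quick R simulation (my addition) to sanity-check the Approach 1 approximation $n \approx 1/\sqrt{\sum_i p_i^2}$ under non-uniform day probabilities; the approximation is rough because higher-order terms were dropped.
set.seed(1)
d <- 100
p <- rgamma(d, 2); p <- p / sum(p)            # some non-uniform day probabilities
n_approx <- ceiling(1 / sqrt(sum(p^2)))       # Approach 1 approximation
collide <- function(n) any(duplicated(sample(d, n, replace = TRUE, prob = p)))
mean(replicate(1e4, collide(n_approx)))       # empirical P(shared birthday); compare with 0.5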
Birthday paradox for non-uniform probabilities
Approach 1 Let's assume that $d$ is large and the distribution of birthdays on day $i$ can be modelled/approximated as independent Poisson variables. Each day has a frequency of $\lambda_i = n \cdot p
Birthday paradox for non-uniform probabilities Approach 1 Let's assume that $d$ is large and the distribution of birthdays on day $i$ can be modelled/approximated as independent Poisson variables. Each day has a frequency of $\lambda_i = n \cdot p_i$ birthdays and the probability of no double birthday on day $i$. $$P(X_i \leq 1) = e^{-\lambda_i}(1+ \lambda_i)$$ The probability of no double birthday on all days is $$\begin{array}{rcl}\prod_{i=1}^d e^{-\lambda_i}(1+ \lambda_i) &=& e^{-n} \prod_{i=1}^d (1+ p_i n) \\ & \approx & \left(1-n+0.5n^2\right) \left( 1 + n \sum_{i=1}^d p_i + n^2 \left[ \sum_{\forall i,j} p_i p_j \right]\right) \\ & \approx & \left(1-n+0.5n^2\right) \left( 1 + n + n^2 0.5 \left[ 1- \sum_{\forall i} p_i^2 \right]\right) \\ & \approx & 1 + 0.5 n^2 \sum_{ i = 1}^n p_i^2 \\ \end{array}$$ and to put that equal to $0.5$ leads to $$n^2 \sum_{ i = 1}^n p_i^2 = 1$$ or $$n = \frac{1}{ \sqrt{\sum_{ i = 1}^n p_i^2 }}$$ Approach 2 We can convert this in a waiting time problem and consider adding birthdays untill there is a double one. Then the probability to have at least a double birthday among $n$ birthdays is equivalent to 1 minus the probability that we have to wait at least $n$ birthdays untill we have a double birthday. If the days have equal probabilities then the probability to 'hit a double birthday' is linearly increasing as more and more birthdays are being added $$P(\text{hit on day $k+1$| no hit yet}) = k/d$$ and the probability of no hits yet is $$P(\text{ no hit yet on day $k+1$}) = 1 - \prod_{i=1}^k (1 - i/d)$$ or $$\log\left(1-P(\text{ no hit yet on day $k+1$})\right) = \sum_{i=1}^k \log(1 - i/d) \approx \int_0^{k} \log(1-x/d) dx = (k - d)\log(1-k/d) - k \approx -\frac{k^2}{2d}$$ In your case $d=100$ you will get the $0.5$ probability for $k \approx 11.77$, which seems close to the asymptote in your graph. In the case of equal probabilities of a birthday on a day, we consider this integral $\int_0^{k} \log(1-x/d) dx$ as the probability of at least a double birthday. With unequal probabilities, the path $ \log(1-x/d)$ will not be fixed and will be stochastic instead. That may cause discrepancies. Possibly this problem can be solved as a random walk with drift, and each step there is a probability of ending the walk, based on the position of the walk.
Birthday paradox for non-uniform probabilities Approach 1 Let's assume that $d$ is large and the distribution of birthdays on day $i$ can be modelled/approximated as independent Poisson variables. Each day has a frequency of $\lambda_i = n \cdot p
50,252
Who created the "soup analogy" for sampling
This saying is credited to George Gallup. It dates from before 1941, though I've not been able to find a primary source. It seems likely that he used the analogy multiple times. For example, the Ottawa Citizen writes: When a cook wants to taste the soup to see how it is coming he doesn't have to drink the whole boilerful, nor does he take a spoonful from the top then a bit from the middle and some from the bottom. He stirs the whole cauldron thoroughly, then he stirs it some more, then he tastes it. This doesn't claim to be an exact quote, but seems to be indicative of how Gallup made the analogy. It is presented in the context of George Gallup, but not as a quote. Given the early date of this article it is possible that it was, in fact, Gregory Clark who came up with the idea. But given the range of other sources pointing to Gallup, one can surmise that Gallup had used the analogy either in his interview with Clark, or that it was in the background reading on Gallup that Clark did before the interview. As an aside - this article is dated Nov 27th 1941, 10 days before the USA joined the second world war. Look to the top right of the page for a short story on how "Pearl Harbour would be in grave danger of sabotage, if the US became involved in a war in the Pacific". It is one column tucked away on page 18. The lead story that day was "Siege of Tobruk Broken".
Who created the "soup analogy" for sampling
This saying is credit to George Gallup. It dates from before 1941, though I've not been able to find a primary source. It seems likely that he used the analogy multiple times. For example the Ottawa
Who created the "soup analogy" for sampling This saying is credit to George Gallup. It dates from before 1941, though I've not been able to find a primary source. It seems likely that he used the analogy multiple times. For example the Ottawa Citizen writes: When a cook want to taste the soup to see how it is coming he doesn't have to drink the whole boilerful, nor does he take a spoonful from the top then a bit from the middle and some from the bottom. He stirs the whole cauldron thouroughly, then he stirs it some more, then he tastes it. This doesn't claim to be an exact quote, but seems to be indicative of how Gallup made the analogy. It is presented in the context of George Gallup, but not as a quote. Given the early date of this article it is possible that was, in fact, Gregory Clark who came up with the idea. But given the range of other sources pointing to Gallup, one can surmise that Gallup had used the analogy either in his interview with Clarke, or it was in the background reading on Gallup that Clarke did before the interview. As an aside - this article is dated Nov 27th 1941, 10 days before the USA joined the second world war. Look to the top right of the page for a short story on how "Pearl Harbour would be in grave danger of sabotage, if the US become involve in a war in the Pacific". It is one column tucked away on page 18. The lead story that day "Seige of Tobruk Broken".
Who created the "soup analogy" for sampling This saying is credit to George Gallup. It dates from before 1941, though I've not been able to find a primary source. It seems likely that he used the analogy multiple times. For example the Ottawa
50,253
How to formalize the following intutive reasoning (Bayes' rule)?
The issue here is that your prior does not fully specify the joint distribution of the relevant events at issue. If you let $\mathscr{E}_1,\mathscr{E}_2,\mathscr{E}_3,\mathscr{E}_4$ denote the individual events that each of the four respective compartments contains the laptop (at most one of which can be true), then all you have specified is the prior probability: $$\pi \equiv \mathbb{P}(\mathscr{E}_1 \cup \mathscr{E}_2 \cup \mathscr{E}_3 \cup \mathscr{E}_4).$$ This is not a full specification of prior probabilities for each of the possible outcomes. However, suppose you are further willing to specify the prior probability of finding the laptop in a particular compartment if it is in any of the compartments. We will denote these probabilities as: $$\phi_i \equiv \mathbb{P}(\mathscr{E}_i | \mathscr{E}_1 \cup \mathscr{E}_2 \cup \mathscr{E}_3 \cup \mathscr{E}_4).$$ Now, if you observe that compartments 1-3 are empty then your posterior probability that the laptop is in the remaining compartment is: $$\begin{align} \mathbb{P}(\mathscr{E}_4 | \bar{\mathscr{E}}_1 \cap \bar{\mathscr{E}}_2 \cap \bar{\mathscr{E}}_3) &= 1 - \mathbb{P}(\bar{\mathscr{E}}_4 | \bar{\mathscr{E}}_1 \cap \bar{\mathscr{E}}_2 \cap \bar{\mathscr{E}}_3) \\[14pt] &= 1 - \frac{\mathbb{P}(\bar{\mathscr{E}}_1 \cap \bar{\mathscr{E}}_2 \cap \bar{\mathscr{E}}_3 \cap \bar{\mathscr{E}}_4)}{\mathbb{P}(\bar{\mathscr{E}}_1 \cap \bar{\mathscr{E}}_2 \cap \bar{\mathscr{E}}_3)} \\[6pt] &= 1 - \frac{1 - \mathbb{P}(\mathscr{E}_1 \cup \mathscr{E}_2 \cup \mathscr{E}_3 \cup \mathscr{E}_4)}{1 - \mathbb{P}(\mathscr{E}_1 \cup \mathscr{E}_2 \cup \mathscr{E}_3)} \\[6pt] &= 1 - \frac{1 - (\phi_1 + \phi_2 + \phi_3 + \phi_4) \pi}{1 - (\phi_1 + \phi_2 + \phi_3) \pi} \\[6pt] &= \frac{\phi_4 \pi}{1 - (\phi_1 + \phi_2 + \phi_3) \pi}. \\[6pt] \end{align}$$ In your particular specification of the problem you have $\pi = 0.8$ which means that the resulting posterior probability that the laptop is in the remaining compartment can be anywhere from zero up to 80%. (The latter result is obtained by taking $\phi_1 = \phi_2 = \phi_3 = 0$ and $\phi_4 = 1$.) Suppose alternatively that you are of the view that if the laptop is in one of the compartments then it is equally likely to be in any of them. This is reflected by choosing prior probabilities $\phi_1 = \phi_2 = \phi_3 = \phi_4 = \tfrac{1}{4}$ which then leads to the posterior probability 50%.
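A one-line R check of the final formula, with $\pi = 0.8$ as in the question:
posterior <- function(prior, phi) (phi[4] * prior) / (1 - sum(phi[1:3]) * prior)
posterior(0.8, rep(1/4, 4))          # 0.5  (equally likely compartments)
posterior(0.8, c(0, 0, 0, 1))        # 0.8  (upper extreme)
posterior(0.8, c(1/3, 1/3, 1/3, 0))  # 0    (lower extreme)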
How to formalize the following intutive reasoning (Bayes' rule)?
The issue here is that your prior does not fully specify the joint distribution of the relevant events at issue. If you let $\mathscr{E}_1,\mathscr{E}_2,\mathscr{E}_3,\mathscr{E}_4$ denote the indivi
How to formalize the following intutive reasoning (Bayes' rule)? The issue here is that your prior does not fully specify the joint distribution of the relevant events at issue. If you let $\mathscr{E}_1,\mathscr{E}_2,\mathscr{E}_3,\mathscr{E}_4$ denote the individual events that each of the four respective compartments contain the laptop (only one of which can be true at most), then all you have specified in the prior probability: $$\pi \equiv \mathbb{P}(\mathscr{E}_1 \cup \mathscr{E}_2 \cup \mathscr{E}_3 \cup \mathscr{E}_4).$$ This is not a full specification of prior probabilities for each of the possible outcomes. However, suppose you are further willing to specify the prior probability of finding the laptop in a particular compartment if it is in any of the compartments. We will denote these probabilities as: $$\phi_i \equiv \mathbb{P}(\mathscr{E}_i | \mathscr{E}_1 \cup \mathscr{E}_2 \cup \mathscr{E}_3 \cup \mathscr{E}_4).$$ Now, if you observe that compartments 1-3 are empty then your posterior probability that the laptop is in the remaining compartment is: $$\begin{align} \mathbb{P}(\mathscr{E}_4 | \bar{\mathscr{E}}_1 \cap \bar{\mathscr{E}}_2 \cap \bar{\mathscr{E}}_3) &= 1 - \mathbb{P}(\bar{\mathscr{E}}_4 | \bar{\mathscr{E}}_1 \cap \bar{\mathscr{E}}_2 \cap \bar{\mathscr{E}}_3) \\[14pt] &= 1 - \frac{\mathbb{P}(\bar{\mathscr{E}}_1 \cap \bar{\mathscr{E}}_2 \cap \bar{\mathscr{E}}_3 \cap \bar{\mathscr{E}}_4)}{\mathbb{P}(\bar{\mathscr{E}}_1 \cap \bar{\mathscr{E}}_2 \cap \bar{\mathscr{E}}_3)} \\[6pt] &= 1 - \frac{1 - \mathbb{P}(\mathscr{E}_1 \cup \mathscr{E}_2 \cup \mathscr{E}_3 \cup \mathscr{E}_4)}{1 - \mathbb{P}(\mathscr{E}_1 \cup \mathscr{E}_2 \cup \mathscr{E}_3)} \\[6pt] &= 1 - \frac{1 - (\phi_1 + \phi_2 + \phi_3 + \phi_4) \pi}{1 - (\phi_1 + \phi_2 + \phi_3) \pi} \\[6pt] &= \frac{\phi_4 \pi}{1 - (\phi_1 + \phi_2 + \phi_3) \pi}. \\[6pt] \end{align}$$ In your particular specification of the problem you have $\pi = 0.8$ which means that the resulting posterior probability that the laptop is in the remaining compartment can be anywhere from zero up to 80%. (The latter result is obtained by taking $\phi_1 = \phi_2 = \phi_3 = 0$ and $\phi_4 = 1$.) Suppose alternatively that you are of the view that if the laptop is in one of the compartments then it is equally likely to be in any of them. This is reflected by choosing prior probabilities $\phi_1 = \phi_2 = \phi_3 = \phi_4 = \tfrac{1}{4}$ which then leads to the posterior probability 50%.
How to formalize the following intutive reasoning (Bayes' rule)? The issue here is that your prior does not fully specify the joint distribution of the relevant events at issue. If you let $\mathscr{E}_1,\mathscr{E}_2,\mathscr{E}_3,\mathscr{E}_4$ denote the indivi
50,254
Interpretation of causal effect from instrumental variables
I think in an A/B test like this, where you have an encouragement design with one-sided non-compliance, you can make some progress under reasonable assumptions. Using the notation from here, where C stands for compliers and not clickers, the LATE identified by IV is $$\Delta_{IV} =\frac{E(Y_1 \vert C) \cdot Pr(C)-E(Y_0 \vert C) \cdot Pr(C)}{Pr(C)}=E(Y_1 \vert C)-E(Y_0 \vert C)$$ Putting this into relative terms means doing this: $$\%\Delta_{IV}=\frac{E(Y_1 \vert C)-E(Y_0 \vert C)}{E(Y_0 \vert C)}.$$ The issue is that you don't know the denominator. Assuming treatment is randomized, in the control group: $$\require{cancel} E(Y_0) = \cancel{E(Y_0 \vert AT)\cdot Pr(AT)}+E(Y_0 \vert C)\cdot Pr(C)+ \cancel{E(Y_0 \vert DF) \cdot Pr(DF)}+E(Y_0 \vert NT) \cdot Pr(NT)$$ Here always-takers go away since control users cannot click. Ditto for defiers. This is different from the typical labor economics experiment, where people can take up job training somewhere else even if they are in the control group. So the LATE = ATT. You also know that your treatment group is compliers, who all click, and never-takers, who don't. This allows you to separate the two groups cleanly. The same logic applies to the control group, since the types are fixed. The outcome for the never-takers should be the same in treatment and control as long as the video is the only channel by which the treatment can change the outcome. This rules out behavior like control users getting pissed about being denied the video and reducing purchases. But if you are willing to make these reasonable assumptions, you can back out the share of never-takers in the treatment group (96%) and the mean untreated outcome for them (0.42). You can also get the share of compliers in the treated group (4%), which should be the same as in the control. You can then calculate the mean of the untreated outcome for never-takers and compliers together in control (0.41). That should be enough to pin down the mean of the untreated outcome for compliers in control (.08), which should be the same in treatment. That should give you a relative lift of $\frac{0.26}{.08} \approx 3.15X$. This is pretty large, but not statistically significant. Your first stage is strong, so this is probably not a weak instrument artifact. Issues of statistical significance aside, this result implies that you have very low take-up, but an enormous effect for those who do it. You may want to explore making take-up easier (product changes making the video more prominent, like screen takeovers, subsidies for watching, etc.). You can also try to fit a model for $\Pr(Complier \vert X)$. Maybe all the compliers are new users, so the strategy above is limited by the inflow of new users. Standard errors are a bit trickier, but you can bootstrap the IV regression plus the complier arithmetic jointly.
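A rough R sketch of the complier arithmetic described above, with made-up placeholder numbers (not the actual figures, which come from the question's unrounded data; note the back-out is quite sensitive to rounding of the inputs):
p_c    <- 0.05        # share of compliers (clickers) in the treatment arm
p_nt   <- 1 - p_c     # share of never-takers
y_nt   <- 0.40        # mean outcome of treatment-arm non-clickers, i.e. E(Y0 | NT)
y_ctrl <- 0.39        # mean outcome in the control arm, i.e. E(Y0)
late   <- 0.25        # IV estimate of E(Y1 | C) - E(Y0 | C)
y0_c <- (y_ctrl - p_nt * y_nt) / p_c   # untreated mean for compliers
lift <- late / y0_c                    # relative lift among compliers
c(y0_complier = y0_c, relative_lift = lift)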
Interpretation of causal effect from instrumental variables
I think in an A/B test like this, where you have an encouragement design with one-sided non-compliance, you can make some progress under reasonable assumptions. Using the notation from here, where C s
Interpretation of causal effect from instrumental variables I think in an A/B test like this, where you have an encouragement design with one-sided non-compliance, you can make some progress under reasonable assumptions. Using the notation from here, where C stands for compliers and not clickers, the LATE identified by IV is $$\Delta_{IV} =\frac{E(Y_1 \vert C) \cdot Pr(C)−E(Y_0 \vert C) \cdot Pr(C)}{Pr(C)}=E(Y_1 \vert C)−E(Y_0 \vert C)$$ Putting this into relative terms means doing this: $$\%\Delta_{IV}=\frac{E(Y_1 \vert C)−E(Y_0 \vert C)}{E(Y_0 \vert C)}.$$ The issue is that you don't know the denominator. Assuming treatment is randomized, in the control group: $$\require{cancel} E(Y_0) = \cancel{E(Y_0 \vert AT)\cdot Pr(AT)}+E(Y_0 \vert C)\cdot Pr(C)+ \cancel{E(Y_0 \vert DF) \cdot Pr(DF)}+E(Y_0 \vert NT) \cdot Pr(NT)$$ Here always-takers go away since control users cannot click. Ditto for defiers. This is different from the typical labor economics experiment, where people can take up job training somewhere else even if they are in the control group. So the LATE = ATT. You also know that your treatment group is compliers, who all click, and never-takers, who don't. This allows you to separate the two groups cleanly. The same logic applies to the control group, since the types are fixed. The outcome for the never-takers should be the same in treatment and control as long as the video is the only channel by which the treatment can change the outcome. This rules out behavior like control users getting pissed about being denied the video and reducing purchases. But if you are willing to make these reasonable assumptions, you can back out the share of never-takers in the treatment group (96%) and the mean untreated outcome for them (0.42). You can also get the share of compliers in the treated group (4%), which should be the same as in the control. You can then calculate the mean of the untreated outcome for never-takers and compliers together in control (0.41). That should be enough to pin down the mean of the untreated outcome in control (.08), which should be the same in treatment. That should give you a relative lift of $\frac{0.26}{.08} \approx 3.15X$. This is pretty large, but not stag sig. Your first stage is strong, so this is probably not a weak instrument artifact. Issues of statistical significance aside, this result implies that you have very low take-up, but an enormous effect for those who do it. You may want to explore making take-up easier (product changes making the video more prominent, like screen takeovers, subsidies for watching, etc.). You can also try to fit a model for $\Pr(Complier \vert X)$. Maybe all the compliers are new users, so the strategy above is limited by the inflow of new users. Standard errors are a bit trickier but can bootstrap the IV regression plus complier arithmetic jointly.
Interpretation of causal effect from instrumental variables I think in an A/B test like this, where you have an encouragement design with one-sided non-compliance, you can make some progress under reasonable assumptions. Using the notation from here, where C s
50,255
Why use the particular test statistic for hypothesis testing linear regression coefficients
Why use that particular test statistic and not another? The t-test coincides with the likelihood-ratio test and therefore shares its good properties as a powerful test. Is it uniformly most powerful (UMP) among its peers? Yes, if you consider a point alternative hypothesis (think of the Neyman–Pearson lemma), but not necessarily for composite hypotheses. See for example the difference in power of one-sided versus two-sided tests: none of these tests is dominant everywhere. Image from the question: https://stats.stackexchange.com/a/548241/164061 However, for a one-sided composite hypothesis, e.g. $H_0: \beta = 0$ versus $H_a: \beta > 0$, the t-test is UMP among unbiased tests. You can argue this by considering that, for every simple alternative hypothesis inside the region of the one-sided composite hypothesis, the t-test is the same test and is UMP.
Why use the particular test statistic for hypothesis testing linear regression coefficients
Why use that particular test statistic and not another? The t-test coincidences with the likelihood-ratio test and therefore has as good properties in being a powerful test. Is it uniformly most pow
Why use the particular test statistic for hypothesis testing linear regression coefficients Why use that particular test statistic and not another? The t-test coincidences with the likelihood-ratio test and therefore has as good properties in being a powerful test. Is it uniformly most powerful (UMP) among its peers? Yes if you consider an alternative point hypothesis (think of Neyman's and Pearson's theorems), but not necessarily for composite hypotheses. See for example the difference in power of one-sided versus two-sided tests. None of these tests are dominant everywhere Image from the question: https://stats.stackexchange.com/a/548241/164061 However, for a one-sided composite hypothesis, e.g. $H_0: \beta = 0$ versus $H_a: \beta > 0$ the t-test is UMP among unbiased tests. You can argue this by considering that of all hypothesis tests with a simple alternative hypotheses inside the region of the one-sided composite hypothesis, the t-test is the same and UMP.
Why use the particular test statistic for hypothesis testing linear regression coefficients Why use that particular test statistic and not another? The t-test coincidences with the likelihood-ratio test and therefore has as good properties in being a powerful test. Is it uniformly most pow
50,256
Show that $\min_{a \in \mathbb{R}} E \left[ \max \left( (1-a) V, a Z \right) \right]$ is minimized by $a$ such that $0<a<1$
It is not possible to limit the solution to the open interval $a \in (0,1),$ because when for instance $(X,Y)$ is standard Normal, the global minimum is attained on the entire closed interval $[0,1]$ and, as you can readily compute, when $X$ is a nontrivial mixture of two Normals the global minimum is attained only at $a=0$ or $a=1.$ I will show that all global minima are attained for $0 \le a \le 1.$ The key idea is that when $Z$ is any random variable with finite expectation and distribution function $F_Z,$ $$E[Z] = \int_{-\infty}^0 -F_Z(z)\,\mathrm dz + \int_0^\infty 1 - F_Z(z)\,\mathrm dz.$$ This solution makes (far) weaker assumptions about the variables $X$ and $Y$ than assumed in the question. I will highlight the assumptions needed as we go along. For any bivariate random variable $(X,Y)$ and real numbers $a,b,t,$ define $$g_{X,Y}(a,b) = E[\max(aX, bY)].$$ When $a \gt 0,$ $$E[\max(aX, bY)] = E\left[a\max\left(X, \frac{b}{a}Y\right)\right] = a E\left[\max\left(X, \frac{b}{a}Y\right)\right] = a\,g_{X,Y}\left(1,\frac{b}{a}\right).$$ When $X$ and $Y$ are independent with marginal distribution functions $F_X$ and $F_Y,$ and $a$ and $b$ are positive, $$\begin{aligned} \Pr(\max(aX, bY) \le t) &= \Pr(aX\le t,\ bY\le t) \\&= \Pr\left(X\le \frac{t}{a}\right)\Pr\left(Y\le\frac{t}{b}\right) \\&= F_X\left(\frac{t}{a}\right)F_Y\left(\frac{t}{b}\right). \end{aligned}$$ Thus $$\begin{aligned} g_{X,Y}(a,b) &= a\,g_{X,Y}\left(1,\frac{b}{a}\right) \\&= a\left[\int_{-\infty}^0 -F_X\left(t\right)F_Y\left(\frac{at}{b}\right)\mathrm dt + \int_0^\infty \left(1 - F_X\left(t\right)F_Y\left(\frac{at}{b}\right)\right)\mathrm dt\right]. \end{aligned}$$ Provided $g_{X,Y}(a_0,b_0)$ is defined and finite for some $a_0\gt 0$ and $b_0\gt 0,$ $g_{X,Y}(a,b)$ is defined and finite for every $a\gt 0$ and $b \gt 0$ because $\max\left(aX, bY\right) \le \max(a/a_0,b/b_0) \max(a_0X,b_0Y)$ implies $$g_{X,Y}(a,b)\le \max\left(\frac{a}{a_0}, \frac{b}{b_0}\right)\,g_{X,Y}(a_0,b_0) \lt \infty.$$ Comparable relations hold in the other three quadrants (where $a\lt 0$ and $b\gt 0,$ $a\lt 0$ and $b \lt 0$, or $a\gt 0$ and $b\lt 0$). Assume now that $F_Y$ is everywhere differentiable with derivative $f_y.$ $g$ is differentiable with derivatives $Dg = (D_1 g, D_2 g)$ given by differentiating under the integral signs. After doing so, substitute $t = by/a$ to obtain $$\begin{aligned} a\, D_1 g_{X,Y}(a,b) &= g_{X,Y}(a,b) - b\int F_X\left(\frac{by}{a}\right) y f_Y(y)\,\mathrm dy;\\ b\, D_2 g_{X,Y}(a,b) &= b\int F_X\left(\frac{by}{a}\right) y f_Y(y)\,\mathrm dy. \end{aligned}$$ Consequently, for $a\gt 0$ and $b \gt 0,$ $$a D_1 g_{X,Y}(a,b) + b D_2 g_{X,Y}(a,b) = g_{X,Y}(a,b).\tag{*}$$ Because $$g_{X,Y}(a,b) = g_{-X,Y}(-a,b) = g_{X,-Y}(a,-b) = g_{-X,-Y}(-a,-b),$$ relationships analogous to $(*)$ hold in all four quadrants. 
It follows that the function $a\to g_{X,Y}(1-a, a)$ is increasing for $a \gt 1$ and decreasing for $a \lt 0.$ Here is an illustration where $Y$ has a standard Normal distribution and $X$ is a mixture of Uniform$(-3,-1)$ and Uniform$(0, 1)$ distributions (with zero mean): This contour plot for the same $(X,Y)$ is characteristic of the situation: The bold line plots the locus of $(1-a,a).$ When this line exits the first quadrant into the second (upper left) or fourth (lower right) quadrants, its component in the direction of the $(a,b)$ vector field described by $(*)$ is positive: thus, $g$ increases along the rays parameterized by $(a\mid a \gt 1)$ and $(-a\mid a \gt 0).$ The following conclusions are immediate from the foregoing assumptions (namely, that $(X,Y)$ is independent, $Y$ has an absolutely continuous distribution, and there exist points in each of the four $(a,b)$ quadrants where $g$ is finite): $g$ is continuous everywhere and differentiable on the set $\{(a,b)\mid a\ne 0, b \ne 0.\}$ $g$ increases in all directions away from the origin. Therefore the restriction of $g$ to any line attains at least one global minimum, and that minimum occurs in whatever quadrant in which that line is bounded. Because the line parameterized by $(1-a,a)$ is bounded in the first quadrant and its intersection with that quadrant corresponds to the interval $a\in [0,1],$ we are done.
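A quick Monte-Carlo sketch in R of $a \mapsto E[\max((1-a)X, aY)]$ for the example above ($Y$ standard normal, $X$ the zero-mean uniform mixture; the mixture weights 0.2/0.8 are my assumption, chosen so the mean is zero), just to visualise where the minimum falls:
set.seed(1)
n <- 2e5
mix <- rbinom(n, 1, 0.2)
X <- ifelse(mix == 1, runif(n, -3, -1), runif(n, 0, 1))   # zero-mean mixture
Y <- rnorm(n)
g <- function(a) mean(pmax((1 - a) * X, a * Y))
a_grid <- seq(-0.5, 1.5, by = 0.05)
g_vals <- sapply(a_grid, g)
a_grid[which.min(g_vals)]      # the minimiser falls in [0, 1]
# plot(a_grid, g_vals, type = "l")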
Show that $\min_{a \in \mathbb{R}} E \left[ \max \left( (1-a) V, a Z \right) \right]$ is minimized b
It is not possible to limit the solution to the open interval $a \in (0,1),$ because when for instance $(X,Y)$ is standard Normal, the global minimum is attained on the entire closed interval $[0,1]$
Show that $\min_{a \in \mathbb{R}} E \left[ \max \left( (1-a) V, a Z \right) \right]$ is minimized by $a$ such that $0<a<1$ It is not possible to limit the solution to the open interval $a \in (0,1),$ because when for instance $(X,Y)$ is standard Normal, the global minimum is attained on the entire closed interval $[0,1]$ and, as you can readily compute, when $X$ is a nontrivial mixture of two Normals the global minimum is attained only at $a=0$ or $a=1.$ I will show that all global minima are attained for $0 \le a \le 1.$ The key idea is that when $Z$ is any random variable with finite expectation and distribution function $F_Z,$ $$E[Z] = \int_{-\infty}^0 -F_Z(z)\,\mathrm dz + \int_0^\infty 1 - F_Z(z)\,\mathrm dz.$$ This solution makes (far) weaker assumptions about the variables $X$ and $Y$ than assumed in the question. I will highlight the assumptions needed as we go along. For any bivariate random variable $(X,Y)$ and real numbers $a,b,t,$ define $$g_{X,Y}(a,b) = E[\max(aX, bY)].$$ When $a \gt 0,$ $$E[\max(aX, bY)] = E\left[a\max\left(X, \frac{b}{a}Y\right)\right] = a E\left[\max\left(X, \frac{b}{a}Y\right)\right] = a\,g_{X,Y}\left(1,\frac{b}{a}\right).$$ When $X$ and $Y$ are independent with marginal distribution functions $F_X$ and $F_Y,$ and $a$ and $b$ are positive, $$\begin{aligned} \Pr(\max(aX, bY) \le t) &= \Pr(aX\le t,\ bY\le t) \\&= \Pr\left(X\le \frac{t}{a}\right)\Pr\left(Y\le\frac{t}{b}\right) \\&= F_X\left(\frac{t}{a}\right)F_Y\left(\frac{t}{b}\right). \end{aligned}$$ Thus $$\begin{aligned} g_{X,Y}(a,b) &= a\,g_{X,Y}\left(1,\frac{b}{a}\right) \\&= a\left[\int_{-\infty}^0 -F_X\left(t\right)F_Y\left(\frac{at}{b}\right)\mathrm dt + \int_0^\infty \left(1 - F_X\left(t\right)F_Y\left(\frac{at}{b}\right)\right)\mathrm dt\right]. \end{aligned}$$ Provided $g_{X,Y}(a_0,b_0)$ is defined and finite for some $a_0\gt 0$ and $b_0\gt 0,$ $g_{X,Y}(a,b)$ is defined and finite for every $a\gt 0$ and $b \gt 0$ because $\max\left(aX, bY\right) \le \max(a/a_0,b/b_0) \max(a_0X,b_0Y)$ implies $$g_{X,Y}(a,b)\le \max\left(\frac{a}{a_0}, \frac{b}{b_0}\right)\,g_{X,Y}(a_0,b_0) \lt \infty.$$ Comparable relations hold in the other three quadrants (where $a\lt 0$ and $b\gt 0,$ $a\lt 0$ and $b \lt 0$, or $a\gt 0$ and $b\lt 0$). Assume now that $F_Y$ is everywhere differentiable with derivative $f_y.$ $g$ is differentiable with derivatives $Dg = (D_1 g, D_2 g)$ given by differentiating under the integral signs. After doing so, substitute $t = by/a$ to obtain $$\begin{aligned} a\, D_1 g_{X,Y}(a,b) &= g_{X,Y}(a,b) - b\int F_X\left(\frac{by}{a}\right) y f_Y(y)\,\mathrm dy;\\ b\, D_2 g_{X,Y}(a,b) &= b\int F_X\left(\frac{by}{a}\right) y f_Y(y)\,\mathrm dy. \end{aligned}$$ Consequently, for $a\gt 0$ and $b \gt 0,$ $$a D_1 g_{X,Y}(a,b) + b D_2 g_{X,Y}(a,b) = g_{X,Y}(a,b).\tag{*}$$ Because $$g_{X,Y}(a,b) = g_{-X,Y}(-a,b) = g_{X,-Y}(a,-b) = g_{-X,-Y}(-a,-b),$$ relationships analogous to $(*)$ hold in all four quadrants. 
It follows that the function $a\to g_{X,Y}(1-a, a)$ is increasing for $a \gt 1$ and decreasing for $a \lt 0.$ Here is an illustration where $Y$ has a standard Normal distribution and $X$ is a mixture of Uniform$(-3,-1)$ and Uniform$(0, 1)$ distributions (with zero mean): This contour plot for the same $(X,Y)$ is characteristic of the situation: The bold line plots the locus of $(1-a,a).$ When this line exits the first quadrant into the second (upper left) or fourth (lower right) quadrants, its component in the direction of the $(a,b)$ vector field described by $(*)$ is positive: thus, $g$ increases along the rays parameterized by $(a\mid a \gt 1)$ and $(-a\mid a \gt 0).$ The following conclusions are immediate from the foregoing assumptions (namely, that $(X,Y)$ is independent, $Y$ has an absolutely continuous distribution, and there exist points in each of the four $(a,b)$ quadrants where $g$ is finite): $g$ is continuous everywhere and differentiable on the set $\{(a,b)\mid a\ne 0, b \ne 0.\}$ $g$ increases in all directions away from the origin. Therefore the restriction of $g$ to any line attains at least one global minimum, and that minimum occurs in whatever quadrant in which that line is bounded. Because the line parameterized by $(1-a,a)$ is bounded in the first quadrant and its intersection with that quadrant corresponds to the interval $a\in [0,1],$ we are done.
Show that $\min_{a \in \mathbb{R}} E \left[ \max \left( (1-a) V, a Z \right) \right]$ is minimized b It is not possible to limit the solution to the open interval $a \in (0,1),$ because when for instance $(X,Y)$ is standard Normal, the global minimum is attained on the entire closed interval $[0,1]$
50,257
How do I model a paired experiment with pre-post data?
The model proposed is not incorrect but it can be improved. I would suggest including the matching variables in the regression model too. There are two main reasons: 1. we might have poor balance on the other covariates, even if the exact matching variables are perfectly balanced, and 2. some of the covariates might have substantial effects on the outcome too. The inclusion of these covariates, if anything, will allow us to reduce the standard error of the final $\beta$ estimate. Yes, we will lose some degrees of freedom, but with ~20K pairs that should not be a huge concern. If you haven't seen it already I would suggest a quick read of Stuart (2010) "Matching Methods for Causal Inference: A Review and a Look Forward", Section 5. "ANALYSIS OF THE OUTCOME". The exceptional vignette of the MatchIt R package, "Estimating Effects After Matching", written by Greifer, should give you some further pointers too. That said, be aware of the "Table 2 Fallacy": interpreting the $\beta$ coefficients of these matching variables is problematic because these additional variables are still subject to confounding; "The Table 2 Fallacy: Presenting and Interpreting Confounder and Modifier Coefficients" (2013) by Westreich & Greenland covers this in detail.
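A hedged R sketch of what this could look like (all variable names and the toy data are hypothetical): the outcome model includes the treatment, the pre-period outcome and the matching covariates, with cluster-robust standard errors by matched pair via the sandwich/lmtest packages.
library(sandwich); library(lmtest)
set.seed(1)
n_pairs <- 200
matched <- data.frame(
  pair_id = rep(1:n_pairs, each = 2),
  treat   = rep(c(1, 0), n_pairs),
  pre     = rnorm(2 * n_pairs),
  x1      = rnorm(2 * n_pairs),
  x2      = rbinom(2 * n_pairs, 1, 0.5)
)
matched$post <- 0.3 * matched$treat + 0.6 * matched$pre + 0.2 * matched$x1 + rnorm(2 * n_pairs)
fit <- lm(post ~ treat + pre + x1 + x2, data = matched)   # covariates included, per Stuart (2010)
coeftest(fit, vcov = vcovCL(fit, cluster = ~ pair_id))    # cluster-robust SEs by matched pair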
How do I model a paired experiment with pre-post data?
The model proposed is not incorrect but it can be improved. I would suggest including the matching variables in the regression model too. The main reasons are two: 1. we might have a poor balance on t
How do I model a paired experiment with pre-post data? The model proposed is not incorrect but it can be improved. I would suggest including the matching variables in the regression model too. The main reasons are two: 1. we might have a poor balance on the other covariates, even if the exact matching variables are perfectly balanced and 2. some of the covariates might have substantial effects on the outcome too. The inclusion of these covariates, if anything, will allow us to reduce the standard error of the final estimate $\beta$ estimate. Yes, we will lose some degrees of freedom but with ~20K pairs that's should not be a huge concern. If you haven't seen it already I would suggest a quick read of Stuart (2010) "Matching Methods for Causal Inference: A Review and a Look Forward", Section 5. "ANALYSIS OF THE OUTCOME". The exceptional vignette of the MatchIt R package "Estimating Effects After Matching" written by Greifer should be you some greater pointer too. That said, be aware of the "Table 2 Fallacy" as interpreting the $\beta$ coefficients of these matching variables is problematic because these additional variables are still subject to confounding; "The Table 2 Fallacy: Presenting and Interpreting Confounder and Modifier Coefficients" (2013) by Westreich & Greenland cover this in detail.
How do I model a paired experiment with pre-post data? The model proposed is not incorrect but it can be improved. I would suggest including the matching variables in the regression model too. The main reasons are two: 1. we might have a poor balance on t
50,258
Distribution of positive and negative values in a Brownian bridge
The distribution is uniform. A more well known related relationship is Lévy's arcsine law: the distribution of the fraction of time that a random walk is positive follows an arcsine distribution (or Beta(1/2, 1/2)). On the Mathematics Stack Exchange the same question was asked and answered: Distribution of time spent above 0 by a Brownian Bridge. We should be able to derive the uniform distribution for the Brownian bridge by using the arcsine laws for the Wiener process. Schematically it looks like below. The Wiener process can be viewed as the combination of a scaled Brownian bridge at times below $t$, and a final piece that is entirely below or above zero at times above $t$. Then the fraction of time that the Wiener process is above zero, $f_W$, is equal to $$f_W = t \cdot f_B + (1-t) \cdot X$$ where $f_B$ is the fraction of time that the Brownian bridge is above zero, $t$ is the time of the last zero of the Wiener process, and $X$ is a Bernoulli variable (the last part is fifty-fifty either all positive or all negative). With the arcsine laws we know that $f_W, t \sim B(0.5,0.5)$; with this we should be able to derive $f_B$ (I still have to work that part out, but this seems to be the strategy to get from the arcsine laws for the Brownian motion to the laws for the Brownian bridge).
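A quick R simulation (my addition) supporting the stated result: the fraction of time a discretised Brownian bridge spends above zero looks uniform on (0, 1).
set.seed(1)
n_steps <- 1000
frac_pos <- replicate(5000, {
  w <- cumsum(rnorm(n_steps)) / sqrt(n_steps)   # approximate Wiener path on [0, 1]
  s <- (1:n_steps) / n_steps
  b <- w - s * w[n_steps]                       # Brownian bridge: B(t) = W(t) - t W(1)
  mean(b > 0)                                   # fraction of time above zero
})
hist(frac_pos, breaks = 20, freq = FALSE)       # roughly flat on (0, 1)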
Distribution of positive and negative values in a Brownian bridge
The distribution is uniform. A more well known related relationship is Lévy's arcsine law: the distribution of time that a random walk is positive follows an arcsine distribution (or Beta 1/2,1/2). On
Distribution of positive and negative values in a Brownian bridge The distribution is uniform. A more well known related relationship is Lévy's arcsine law: the distribution of time that a random walk is positive follows an arcsine distribution (or Beta 1/2,1/2). On mathematics the same question was asked and answered Distribution of time spent above 0 by a Brownian Bridge. We should be able to derive the uniform distribution of the Brownian bridge by using the arcsine laws for the Wiener process. Schematically it looks like below. The Wiener process can be viewed as the combination of a scaled Brownian bridge at times below $t$, and a final piece that is entirely below or above zero at times above $t$. Then the fraction of time that the Wiener process is above zero $f_W$ is equal to $$f_W = t \cdot f_B + (1-t) \cdot X$$ where $f_B$ is the fraction of time that the Brownian bridge is above zero, $t$ is the last time since the random walk hits zero, and $X$ is a Bernoulli variable (the last part is fifty-fifty either all positive or all negative). With the arcsine laws we know that $f_W,t \sim B(0.5,0.5)$ with this we should be able to derive $f_B$ (I still have to work that part out, but this seems to be the strategy to get from the arcsine laws for the Brownian motion to the laws for the Brownian bridge).
Distribution of positive and negative values in a Brownian bridge The distribution is uniform. A more well known related relationship is Lévy's arcsine law: the distribution of time that a random walk is positive follows an arcsine distribution (or Beta 1/2,1/2). On
50,259
How can I test whether one empirical CDF is to the left or right of another?
Maybe you're interested in whether sample y stochastically dominates sample x. If so, you might want to look directly at ECDF plots, and do some formal tests. Here are summaries and ECDF plots of two samples. summary(x); length(x); sd(x) Min. 1st Qu. Median Mean 3rd Qu. Max. 5.067 14.628 21.012 21.807 28.297 53.943 [1] 30 # sample size [1] 10.56207 # sample SD summary(y); length(y); sd(y) Min. 1st Qu. Median Mean 3rd Qu. Max. 12.81 25.30 29.27 29.25 32.56 45.47 [1] 30 [1] 8.098928 plot(ecdf(x), col="blue", main="ECDFs", xlab="values") lines(ecdf(y), col="brown") Because the ECDF of y (brown) plots to the right of the ECDF of x (blue), and therefore below, it seems the values of y are generally larger than values of x A two-sample Kolmogorov-Smirnov test confirms this with a P-value below 5%. The test statistic $D$ is the maximum vertical distance between the two ECDF plots. ks.test(x,y) Two-sample Kolmogorov-Smirnov test data: x and y D = 0.43333, p-value = 0.006548 alternative hypothesis: two-sided When two samples are not of the same shape (including the same variability), a two-sample Wilcoxon Rank Sum test, is said to be a test of stochastic dominance (rather than of different medians). boxplot(x, y, horizontal=T, col=c("skyblue2", "wheat")) wilcox.test(x,y) Wilcoxon rank sum test data: x and y W = 236, p-value = 0.001292 alternative hypothesis: true location shift is not equal to 0 Notes: (1) Technically speaking, there are several different types of 'stochastic dominance' with somewhat different definitions. You may be interested in googling that. Perhaps start here. (2) The fictitious samples used in the above discussion were sampled in R as follows: set.seed(2022) x = rgamma(30, 4, 1/5) y = rgamma(30, 5, 1/5) + 7
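If you want an explicitly directional ('to the left of / to the right of') test rather than the two-sided KS test above, ks.test also accepts a one-sided alternative; check ?ks.test for the exact direction convention (as documented, alternative = "greater" refers to the CDF of x lying above that of y, i.e. x tending to be smaller). Using the x and y simulated in note (2):
ks.test(x, y, alternative = "greater")  # H1: CDF of x lies above (to the left of) that of y
ks.test(x, y, alternative = "less")     # H1: CDF of x lies below (to the right of) that of y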
How can I test whether one empirical CDF is to the left or right of another?
Maybe you're interested in whether sample y stochastically dominates sample x. If so, you might want to look directly at ECDF plots, and do some formal tests. Here are summaries and ECDF plots of two
How can I test whether one empirical CDF is to the left or right of another? Maybe you're interested in whether sample y stochastically dominates sample x. If so, you might want to look directly at ECDF plots, and do some formal tests. Here are summaries and ECDF plots of two samples. summary(x); length(x); sd(x) Min. 1st Qu. Median Mean 3rd Qu. Max. 5.067 14.628 21.012 21.807 28.297 53.943 [1] 30 # sample size [1] 10.56207 # sample SD summary(y); length(y); sd(y) Min. 1st Qu. Median Mean 3rd Qu. Max. 12.81 25.30 29.27 29.25 32.56 45.47 [1] 30 [1] 8.098928 plot(ecdf(x), col="blue", main="ECDFs", xlab="values") lines(ecdf(y), col="brown") Because the ECDF of y (brown) plots to the right of the ECDF of x (blue), and therefore below, it seems the values of y are generally larger than values of x A two-sample Kolmogorov-Smirnov test confirms this with a P-value below 5%. The test statistic $D$ is the maximum vertical distance between the two ECDF plots. ks.test(x,y) Two-sample Kolmogorov-Smirnov test data: x and y D = 0.43333, p-value = 0.006548 alternative hypothesis: two-sided When two samples are not of the same shape (including the same variability), a two-sample Wilcoxon Rank Sum test, is said to be a test of stochastic dominance (rather than of different medians). boxplot(x, y, horizontal=T, col=c("skyblue2", "wheat")) wilcox.test(x,y) Wilcoxon rank sum test data: x and y W = 236, p-value = 0.001292 alternative hypothesis: true location shift is not equal to 0 Notes: (1) Technically speaking, there are several different types of 'stochastic dominance' with somewhat different definitions. You may be interested in googling that. Perhaps start here. (2) The fictitious samples used in the above discussion were sampled in R as follows: set.seed(2022) x = rgamma(30, 4, 1/5) y = rgamma(30, 5, 1/5) + 7
How can I test whether one empirical CDF is to the left or right of another? Maybe you're interested in whether sample y stochastically dominates sample x. If so, you might want to look directly at ECDF plots, and do some formal tests. Here are summaries and ECDF plots of two
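As a complement (my own sketch, not part of the original answer), one can also estimate P(Y > X) directly from all pairs of observations; this proportion is closely related to the rank-sum statistic reported above and gives a direct sense of how often y exceeds x. Reusing the simulated samples from note (2):

set.seed(2022)
x <- rgamma(30, 4, 1/5)
y <- rgamma(30, 5, 1/5) + 7

# proportion of pairs (x_i, y_j) with y_j > x_i: an estimate of P(Y > X)
p.hat <- mean(outer(y, x, ">"))
p.hat   # values well above 0.5 suggest y tends to exceed x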
50,260
Does bias eventually increase with model complexity?
I was wondering the same thing. What you have to realize, though, is that the bias is defined via the expectation of the prediction for a given x over all possible training sets (D): \begin{equation} D=\left\{\left(x_{1}, y_{1}\right) \ldots,\left(x_{n}, y_{n}\right)\right\} \end{equation} \begin{equation} \operatorname{Bias}_{D}[\hat{f}(x ; D)]=\mathrm{E}_{D}[\hat{f}(x ; D)]-f(x) \end{equation} This means that the expectation ranges over different choices of the training set. In your example (figure 2.10), a specific training set is given. Indeed, the prediction of the flexible model (green line) is way off the real function. Now imagine that you create multiple training sets, and create multiple of these green lines. The average of these green lines would almost perfectly align with the black line (real model), even better than the yellow line would. The expectation of the prediction over all possible training sets for the more flexible model is closer to the real model and the bias is therefore lower.
Does bias eventually increase with model complexity?
I was wondering the same thing. What you have to realize though, is that the bias is defined for the expectation of the prediction for a given x over all possible training sets (D): \begin{equation} D
Does bias eventually increase with model complexity? I was wondering the same thing. What you have to realize though, is that the bias is defined for the expectation of the prediction for a given x over all possible training sets (D): \begin{equation} D=\left\{\left(x_{1}, y_{1}\right) \ldots,\left(x_{n}, y_{n}\right)\right\} \end{equation} \begin{equation} \operatorname{Bias}_{D}[\hat{f}(x ; D)]=\mathrm{E}_{D}[\hat{f}(x ; D)]-f(x) \end{equation} This means that the expectation ranges over different choices of the training set. In your example (figure 2.10), a specific training set is given. Indeed, the prediction of the flexible model (green line) is way off the real function. Now imagine that you would create multiple training sets, and create multiple of these green lines. The average of these green lines would almost perfectly allign the black line (real model), even better than the yellow line would. The expectation of the prediction of all possible training sets for the more flexibel model is closer to the real model and the bias is therefore lower.
Does bias eventually increase with model complexity? I was wondering the same thing. What you have to realize though, is that the bias is defined for the expectation of the prediction for a given x over all possible training sets (D): \begin{equation} D
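The point that bias is an average over many training sets (not a property of one fit) is easy to check numerically. Below is a rough R sketch of my own (the true function, noise level and models are arbitrary choices): many training sets are drawn from a known function, a flexible and a rigid model are fit to each, and their predictions at a fixed point are averaged before comparing with the truth.

set.seed(1)
f <- function(x) sin(2 * x)          # the "true" function
x0 <- 1.3                            # point at which we evaluate bias
n.train <- 50
n.rep <- 500

pred.flex <- pred.rigid <- numeric(n.rep)
for (r in 1:n.rep) {
  x <- runif(n.train, 0, 3)
  y <- f(x) + rnorm(n.train, sd = 0.5)
  # flexible fit: smoothing spline; rigid fit: straight line
  pred.flex[r]  <- predict(smooth.spline(x, y, df = 15), x0)$y
  pred.rigid[r] <- predict(lm(y ~ x), newdata = data.frame(x = x0))
}

# Bias = average prediction over training sets minus the truth
mean(pred.flex)  - f(x0)   # typically close to 0
mean(pred.rigid) - f(x0)   # typically far from 0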
50,261
Does bias eventually increase with model complexity?
It depends, is all I can say. As a rule of thumb, bias decreases in a model as you add more parameters, but there can be some weird exceptions.
Does bias eventually increase with model complexity?
It depends, is all I can say. As a rule of thumb, bias decreases in a model as you add more parameters, but there can be some weird exceptions.
Does bias eventually increase with model complexity? It depends, is all I can say. As a rule of thumb, bias decreases in a model as you add more parameters, but there can be some weird exceptions.
Does bias eventually increase with model complexity? It depends, is all I can say. As a rule of thumb, bias decreases in a model as you add more parameters, but there can be some weird exceptions.
50,262
Why use the EM Algorithm and not just marginalise the complete likelihood?
So I've discussed this with a colleague. Consider the marginalisation $$p(\mathbf{X} \mid \boldsymbol{\theta}) = \int p(\mathbf{X} \mid \mathbf{Z}, \boldsymbol{\theta})p(\mathbf{Z} \mid \boldsymbol{\theta}) d\mathbf{Z}.$$ This can be rewritten as the expectation $$\mathbb{E}_{\mathbf{Z} \mid \boldsymbol{\theta}}\left[ p(\mathbf{X} \mid \mathbf{Z}, \boldsymbol{\theta}) \right].$$ If this can be calculated exactly, we're fine. However, if the expectation is intractable, we need to compute a numerical approximation. Hence to maximise w.r.t. $\boldsymbol{\theta}$ we will need to use some iterative procedure like gradient-ascent. Suppose we have the current estimate $\boldsymbol{\hat{\theta}^{(t)}}$ for $\boldsymbol{\theta}$. If we now try and approximate the expectation by sampling from $\mathbf{Z}$, we get $$\frac{1}{n}\sum_{i=1}^n p(\mathbf{X} \mid \mathbf{Z}_i, \boldsymbol{\theta})$$ but the $\mathbf{Z}_i$ were sampled from the distribution $\mathbf{Z} \mid \boldsymbol{\hat{\theta}^{(t)}}$, so this actually approximates the expectation $$\mathbb{E}_{\mathbf{Z} \mid \boldsymbol{\hat{\theta}^{(t)}}}\left[ p(\mathbf{X} \mid \mathbf{Z}, \boldsymbol{\theta}) \right] \approx \frac{1}{n}\sum_{i=1}^n p(\mathbf{X} \mid \mathbf{Z}_i, \boldsymbol{\theta})$$ which is not the same as the original expectation we wanted. Hence if we maximised this new expectation w.r.t. $\boldsymbol{\theta}$ we won't be doing Maximum Likelihood Estimation. The EM algorithm therefore takes a different approach.
Why use the EM Algorithm and not just marginalise the complete likelihood?
So I've discussed this with a colleague. Consider the marginalisation $$p(\mathbf{X} \mid \boldsymbol{\theta}) = \int p(\mathbf{X} \mid \mathbf{Z}, \boldsymbol{\theta})p(\mathbf{Z} \mid \boldsymbol{\t
Why use the EM Algorithm and not just marginalise the complete likelihood? So I've discussed this with a colleague. Consider the marginalisation $$p(\mathbf{X} \mid \boldsymbol{\theta}) = \int p(\mathbf{X} \mid \mathbf{Z}, \boldsymbol{\theta})p(\mathbf{Z} \mid \boldsymbol{\theta}) d\mathbf{Z}.$$ This can be rewritten as the expectation $$\mathbb{E}_{\mathbf{Z} \mid \boldsymbol{\theta}}\left[ p(\mathbf{X} \mid \mathbf{Z}, \boldsymbol{\theta}) \right].$$ If this can be calculated exactly, we're fine. However, if the expectation is intractable, we need to compute a numerical approximation. Hence to maximise w.r.t. $\boldsymbol{\theta}$ we will need to use some iterative procedure like gradient-ascent. Suppose we have the current estimate $\boldsymbol{\hat{\theta}^{(t)}}$ for $\boldsymbol{\theta}$. If we now try and approximate the expectation by sampling from $\mathbf{Z}$, we get $$\frac{1}{n}\sum_{i=1}^n p(\mathbf{X} \mid \mathbf{Z}_i, \boldsymbol{\theta})$$ but the $\mathbf{Z}_i$ were sampled from the distribution $\mathbf{Z} \mid \boldsymbol{\hat{\theta}^{(t)}}$, so this actually approximates the expectation $$\mathbb{E}_{\mathbf{Z} \mid \boldsymbol{\hat{\theta}^{(t)}}}\left[ p(\mathbf{X} \mid \mathbf{Z}, \boldsymbol{\theta}) \right] \approx \frac{1}{n}\sum_{i=1}^n p(\mathbf{X} \mid \mathbf{Z}_i, \boldsymbol{\theta})$$ which is not the same as the original expectation we wanted. Hence if we maximised this new expectation w.r.t. $\boldsymbol{\theta}$ we won't be doing Maximum Likelihood Estimation. The EM algorithm therefore takes a different approach.
Why use the EM Algorithm and not just marginalise the complete likelihood? So I've discussed this with a colleague. Consider the marginalisation $$p(\mathbf{X} \mid \boldsymbol{\theta}) = \int p(\mathbf{X} \mid \mathbf{Z}, \boldsymbol{\theta})p(\mathbf{Z} \mid \boldsymbol{\t
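To make the point concrete, here is a small numerical illustration of my own (not from the answer), using a two-component Gaussian mixture in which $\theta$ is the mixing weight and the component means are known. Sampling $\mathbf{Z}$ from the current estimate $\hat{\theta}^{(t)}$ and averaging $p(x \mid z)$ approximates the wrong expectation whenever $\theta \neq \hat{\theta}^{(t)}$:

set.seed(1)
x <- 0.5                 # a single observation
mu <- c(-2, 2)           # known component means
theta     <- 0.7         # mixing weight at which we want the marginal likelihood
theta.hat <- 0.3         # current estimate used to sample Z

# Exact marginal likelihood p(x | theta) = E_{Z|theta}[ p(x | Z) ]
exact <- theta * dnorm(x, mu[1]) + (1 - theta) * dnorm(x, mu[2])

# Naive Monte Carlo, but with Z drawn from theta.hat instead of theta
z <- sample(1:2, 1e5, replace = TRUE, prob = c(theta.hat, 1 - theta.hat))
naive <- mean(dnorm(x, mu[z]))

c(exact = exact, naive = naive)   # they disagree: the sampler targets the wrong expectation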
50,263
Why is the transformer in deep learning called a transformer?
It is called a transformer because it transforms the input representation: the attention mechanism applies a softmax transformation, and the feed-forward block that follows applies a nonlinear transformation. In short, to put it in very simple words, it uses a sequence of different transformations (activation functions among them) to transform the input from its initial representation into its final representation.
Why transformer in deep learning is called transformer?
Transformer, becuase it uses the attention mechanism with softmax transformation after that using the feedforward with nonlinear transformation. In short it uses different transformations(activation
Why transformer in deep learning is called transformer? Transformer, becuase it uses the attention mechanism with softmax transformation after that using the feedforward with nonlinear transformation. In short it uses different transformations(activation functions) to transform the input from intial representation into final representation if we would explain that in very simple words.
Why transformer in deep learning is called transformer? Transformer, becuase it uses the attention mechanism with softmax transformation after that using the feedforward with nonlinear transformation. In short it uses different transformations(activation
50,264
Parameter uncertainty in least squares optimization: rescaling Hessian
One can possibly look at the approximation for the covariance as a "rescaled inverse Hessian", but that phrasing kind of hides the simple deduction. In principle it is the truncated Taylor series expansion of the term that is minimized. So, if one stops after the second-order term, the expression in the simple 1D case around the (minimum) value $a$ is: $$f(x) = f(a) + f^\prime(a)(x-a) + \frac{1}{2}f^{\prime\prime}(a)(x-a)^2$$ If you change to the vectorized version, the derivatives are replaced by their respective vectorized counterparts. At the minimum the slope is zero, which means the first derivative -- the Jacobian -- is zero. See e.g. https://en.wikipedia.org/wiki/Taylor_expansions_for_the_moments_of_functions_of_random_variables So now you can identify the remaining quadratic part with the expression of the approximation at the minimum. The actual scaling factor of the variance is a bit more involved, which is why the number of degrees of freedom is a good choice. The logic behind this is that the term should neither be under- nor overestimated. Sometimes this is described as "unbiased", but in my understanding this implies a dependency on the value itself. So, in my understanding it is more like a correction that can be verified by using analytical examples.
Parameter uncertainity in least squares optimization: rescaling Hessian
One can possibly look at the approximation for the covariance as a "rescaled inverse Hessian", but it kind of hides the simple deduction. In principle it's the stopped series expansion of the term tha
Parameter uncertainity in least squares optimization: rescaling Hessian One can possibly look at the approximation for the covariance as a "rescaled inverse Hessian", but it kind of hides the simple deduction. In principle it's the stopped series expansion of the term that is minimized. So, if one stops at the second term, the expression is in the simple 1D case around the (minimum) value of a: $$f(x) = f(a) + f^\prime(a)(x-a) + \frac{1}{2}f^{\prime\prime}(a)(x-a)^2$$ If you change to the vectorized version, the derivatives are replaced by their respective vectorized parts. At the minimum the slope is zero, which means the first derivative -- the Jacobian -- is zero. E.g. also https://en.wikipedia.org/wiki/Taylor_expansions_for_the_moments_of_functions_of_random_variables So, now you can identify the remaining part with the expression of the approximation at the minimum. The actual scaling factor of the variance is a bit more complicated, why the number of degrees of freedom is a good choice. The logic behind this is that the term should neither be under-, nor overestimated. Sometimes this is described as "unbiased", but in my understanding this implies a dependency on the value itself. So, in my understanding it's more like a correction that can be verified by using analytical examples.
Parameter uncertainity in least squares optimization: rescaling Hessian One can possibly look at the approximation for the covariance as a "rescaled inverse Hessian", but it kind of hides the simple deduction. In principle it's the stopped series expansion of the term tha
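For the linear special case this rescaling can be checked directly against R's built-in estimate (my own illustration, under the usual homoscedastic assumptions): the "Hessian" of the half sum of squared residuals is $X^\top X$, and the covariance of the coefficients is its inverse rescaled by the residual variance estimated with the degrees of freedom.

set.seed(1)
n <- 50
x <- runif(n)
y <- 1 + 2 * x + rnorm(n, sd = 0.3)

X   <- cbind(1, x)                 # design matrix
fit <- lm(y ~ x)

rss    <- sum(residuals(fit)^2)
sigma2 <- rss / (n - ncol(X))      # residual variance, scaled by the degrees of freedom

manual <- sigma2 * solve(t(X) %*% X)          # rescaled inverse "Hessian"
all.equal(unname(manual), unname(vcov(fit)))  # TRUE up to numerical error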
50,265
Probability that a device turns off in n seconds
The question concerns a sample space $\Omega$ of sequences of observations of the light made at times $1, 2, 3, \ldots.$ For it to be answerable, we have to suppose that the switch can be flipped no more than once in any interval $(n-1,n]$ (for otherwise the observations do not determine when the switch is flipped). $\Omega$ therefore can be identified with the set of all binary sequences $$\Omega = \{\omega\mid \omega:\mathbb{N}\to \{0,1\}\}$$ where $\omega(0)=1$ indicates the light is on at time $0$ and generally at any time $n,$ $\omega(n)=1$ if and only if the light is on at time $n.$ Let $\mathcal{P}_i = \{\omega\mid \omega(i)=0\}$ be the set where the light is off at time $i.$ The problem supposes every $\mathcal{P}_i$ is an event for $i=0,1,2,\ldots$ and the associated probabilities of these events are $$\Pr(\mathcal{P}_i)=p_i.$$ Any answer therefore boils down to representing the set $$\mathcal{E}_n = \text{The light was first turned off in the interval }(n-1,n]$$ in terms of the events $\mathcal{P}_i.$ We can try to figure this out recursively. Begin with $n=1:$ $\mathcal{E}_1$ is the event the light is not on at time $1.$ It is identical to $\mathcal{P}_1,$ $$\mathcal{E}_1 = \mathcal{P}_1.$$ When $n=2,$ $\mathcal{E}_2$ is the event "the light is not on at time $2$ but the light was still on at time $1.$" In set notation, using overbars to denote complements (with respect to $\Omega$), $$\mathcal{E}_2 = \mathcal{P}_2\cap \bar{\mathcal{P}_1}.$$ It consists of all sequences of the form $110\ldots\,.$ Because it is difficult to see how the chance of this intersection is determined by any of the specified probabilities $p_i,$ let's look for a counterexample. Evidently, we can focus on the first three times $0,1,2.$ Consider, then, the family of probability functions $\mathbb{P}_\alpha$ given by this table: $$\begin{array}{} \omega & \mathbb{P}_\alpha \\ \hline 111\ldots & \alpha(1-p_1)\\ 110\ldots & (1-\alpha)(1-p_1)\\ 101\ldots & p_1 - p_2 + (1-\alpha)(1-p_1)\\ 100\ldots & p_2 - (1-\alpha)(1-p_1) \end{array}$$ The left hand column indicates the four events corresponding to the state of the light at times $0,1,$ and $2$ while the right hand column gives their probabilities. For this to be a valid probability function, none of the chances can be negative. This forces $\alpha$ to lie between $0$ and $1$ (to make the first two chances non-negative) and $$\frac{p_2-p_1}{1-p_1}\le \alpha \le \frac{p_2}{1-p_1}$$ (to make the last two chances non-negative). For instance, when $p_1=p_2=1/2,$ we must have $0\le \alpha \le 1.$ This demonstrates such probability families exist. Now since $\mathcal{P}_1 = \{100\ldots, 101\ldots\}$ and $\mathcal{P}_2 = \{100\ldots, 110\ldots\},$ the axioms of probability give $$\mathbb{P}_\alpha(\mathcal{P}_1) = \mathbb{P}_\alpha(100\ldots) + \mathbb{P}_\alpha(101\ldots) = p_1$$ and $$\mathbb{P}_\alpha(\mathcal{P}_2) = \mathbb{P}_\alpha(100\ldots) + \mathbb{P}_\alpha(110\ldots) = p_2,$$ showing that these probability functions satisfy all the requirements of the problem. However, $$\mathbb{P}_\alpha(\mathcal{E}_2) = \mathbb{P}_\alpha(\mathcal{P}_2 \cap \bar{\mathcal{P}_1}) = \mathbb{P}_\alpha(110\ldots) = (1-\alpha)(1-p_1).$$ This looks like it can vary with $\alpha.$ Indeed, taking the example $p_1=p_2=1/2,$ these probabilities are $$(1-\alpha)(1-1/2) = (1-\alpha)/2,$$ which (as we saw above) can be any value from $(1-0)/2=1/2$ down to $(1-1)/2=0.$ This proves the question does not generally have a unique answer. 
In fact, the restrictions on $\alpha$ only imply $$p_2 - p_1 \le \Pr(\mathcal{E}_2) \le p_2.$$ Similar inequalities must apply to the chances of all the other events $\mathcal{E}_3,$ $\mathcal{E}_4,$ etc.
Probability that a device turns off in n seconds
The question concerns a sample space $\Omega$ of sequences of observations of the light made at times $1, 2, 3, \ldots.$ For it to be answerable, we have to suppose that the switch can be flipped no m
Probability that a device turns off in n seconds The question concerns a sample space $\Omega$ of sequences of observations of the light made at times $1, 2, 3, \ldots.$ For it to be answerable, we have to suppose that the switch can be flipped no more than once in any interval $(n-1,n]$ (for otherwise the observations do not determine when the switch is flipped). $\Omega$ therefore can be identified with the set of all binary sequences $$\Omega = \{\omega\mid \omega:\mathbb{N}\to \{0,1\}\}$$ where $\omega(0)=1$ indicates the light is on at time $0$ and generally at any time $n,$ $\omega(n)=1$ if and only if the light is on at time $n.$ Let $\mathcal{P}_i = \{\omega\mid \omega(i)=0\}$ be the set where the light is off at time $i.$ The problem supposes every $\mathcal{P}_i$ is an event for $i=0,1,2,\ldots$ and the associated probabilities of these events are $$\Pr(\mathcal{P}_i)=p_i.$$ Any answer therefore boils down to representing the set $$\mathcal{E}_n = \text{The light was first turned off in the interval }(n-1,n]$$ in terms of the events $\mathcal{P}_i.$ We can try to figure this out recursively. Begin with $n=1:$ $\mathcal{E}_1$ is the event the light is not on at time $1.$ It is identical to $\mathcal{P}_1,$ $$\mathcal{E}_1 = \mathcal{P}_1.$$ When $n=2,$ $\mathcal{E}_2$ is the event "the light is not on at time $2$ but the light was still on at time $1.$" In set notation, using overbars to denote complements (with respect to $\Omega$), $$\mathcal{E}_2 = \mathcal{P}_2\cap \bar{\mathcal{P}_1}.$$ It consists of all sequences of the form $110\ldots\,.$ Because it is difficult to see how the chance of this intersection is determined by any of the specified probabilities $p_i,$ let's look for a counterexample. Evidently, we can focus on the first three times $0,1,2.$ Consider, then, the family of probability functions $\mathbb{P}_\alpha$ given by this table: $$\begin{array}{} \omega & \mathbb{P}_\alpha \\ \hline 111\ldots & \alpha(1-p_1)\\ 110\ldots & (1-\alpha)(1-p_1)\\ 101\ldots & p_1 - p_2 + (1-\alpha)(1-p_1)\\ 100\ldots & p_2 - (1-\alpha)(1-p_1) \end{array}$$ The left hand column indicates the four events corresponding to the state of the light at times $0,1,$ and $2$ while the right hand column gives their probabilities. For this to be a valid probability function, none of the chances can be negative. This forces $\alpha$ to lie between $0$ and $1$ (to make the first two chances non-negative) and $$\frac{p_2-p_1}{1-p_1}\le \alpha \le \frac{p_2}{1-p_1}$$ (to make the last two chances non-negative). For instance, when $p_1=p_2=1/2,$ we must have $0\le \alpha \le 1.$ This demonstrates such probability families exist. Now since $\mathcal{P}_1 = \{100\ldots, 101\ldots\}$ and $\mathcal{P}_2 = \{100\ldots, 110\ldots\},$ the axioms of probability give $$\mathbb{P}_\alpha(\mathcal{P}_1) = \mathbb{P}_\alpha(100\ldots) + \mathbb{P}_\alpha(101\ldots) = p_1$$ and $$\mathbb{P}_\alpha(\mathcal{P}_2) = \mathbb{P}_\alpha(100\ldots) + \mathbb{P}_\alpha(110\ldots) = p_2,$$ showing that these probability functions satisfy all the requirements of the problem. However, $$\mathbb{P}_\alpha(\mathcal{E}_2) = \mathbb{P}_\alpha(\mathcal{P}_2 \cap \bar{\mathcal{P}_1}) = \mathbb{P}_\alpha(110\ldots) = (1-\alpha)(1-p_1).$$ This looks like it can vary with $\alpha.$ Indeed, taking the example $p_1=p_2=1/2,$ these probabilities are $$(1-\alpha)(1-1/2) = (1-\alpha)/2,$$ which (as we saw above) can be any value from $(1-0)/2=1/2$ down to $(1-1)/2=0.$ This proves the question does not generally have a unique answer. 
In fact, the restrictions on $\alpha$ only imply $$p_2 - p_1 \le \Pr(\mathcal{E}_2) \le p_2.$$ Similar inequalities must apply to the chances of all the other events $\mathcal{E}_3,$ $\mathcal{E}_4,$ etc.
Probability that a device turns off in n seconds The question concerns a sample space $\Omega$ of sequences of observations of the light made at times $1, 2, 3, \ldots.$ For it to be answerable, we have to suppose that the switch can be flipped no m
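The counterexample can be verified numerically (my own check, not part of the answer): for $p_1 = p_2 = 1/2$ the four probabilities in the table are non-negative and sum to one, the marginals $\Pr(\mathcal{P}_1)$ and $\Pr(\mathcal{P}_2)$ come out right for every $\alpha$, and yet $\Pr(\mathcal{E}_2)$ changes with $\alpha$.

check <- function(alpha, p1 = 0.5, p2 = 0.5) {
  # probabilities of the outcomes 111, 110, 101, 100 from the table
  pr <- c(`111` = alpha * (1 - p1),
          `110` = (1 - alpha) * (1 - p1),
          `101` = p1 - p2 + (1 - alpha) * (1 - p1),
          `100` = p2 - (1 - alpha) * (1 - p1))
  c(valid = all(pr >= 0) && abs(sum(pr) - 1) < 1e-12,
    P1 = unname(pr["100"] + pr["101"]),   # light off at time 1
    P2 = unname(pr["100"] + pr["110"]),   # light off at time 2
    E2 = unname(pr["110"]))               # first turned off in (1, 2]
}
t(sapply(c(0, 0.25, 0.5, 0.75, 1), check))
# P1 and P2 equal 0.5 in every row, but E2 varies from 0.5 down to 0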
50,266
Probability that a device turns off in n seconds
To answer this question and confirm the answer that is proposed in the OP, I would first propose to slightly rephrase it as: "A device is initially on and experiences a sequence of events $1, 2, \ldots$ which can turn it off. Each binary trigger event occurs with an independent probability given by a Bernoulli trial parameterized respectively by $p_1, p_2, \ldots$. What is the probability that the device is turned off for the first time at the $n^\text{th}$ event?" First, if $n=1$, the answer is simply $p_1$; that's easy, and that special case can be set aside. For $n>1$, the final answer follows by noting that this happens at event $n$ if and only if (1) the switch was not triggered off at events $1, 2, \ldots, n-1$ and (2) the switch is triggered off at event $n$. For the first part, note that in general at a given event $k<n$, given that the switch is still on, the probability that the switch is not triggered off at this event $k$ is equal to $1-p_k$. In particular, the probability that the switch was not triggered off at event $1$ is $1-p_1$. Given that the switch was not triggered off at event $1$, the probability that it is not triggered at event $2$ is $1-p_2$, and so on. For the second part, the probability that the switch is triggered off at event $n$ is simply $p_n$. Since all draws are independent, probabilities are multiplied and the probability that the switch is triggered off for the first time at event $n$ is equal to: $$ (1 - p_1) \times (1 - p_2) \times \ldots \times (1 - p_{n-1}) \times p_n $$ which confirms the intuition in the OP. What may be confusing at first in the OP is the fact that the problem is set up as a temporal sequence, so that we may need to consider the probability space of all "trajectories" of trigger events. Yet the problem is simpler: the OP asks for the probability that event $n$ is triggered for the first time, and thus that all previous events are not triggered, irrespective of the particular sequence (as explored in other answers).
Probability that a device turns off in n seconds
To answer this question and confirm the answer that is proposed in the OP, I would first propose to slightly rephrase it as : "A device is initially on and experiences a sequence of events $1, 2, \ld
Probability that a device turns off in n seconds To answer this question and confirm the answer that is proposed in the OP, I would first propose to slightly rephrase it as : "A device is initially on and experiences a sequence of events $1, 2, \ldots$ which can turn it off. Each binary trigger event occurs with an independent probability given by a Bernouilli trial parameterized by respectively $p_1, p_2, \ldots$. What is the probability that the device is turned off for the first time at the $n^\text{th}$ event?" First, if $n=1$, the answer is simply $p_1$, that's easy and that special case can be put to side. For $n>1$, the final answer follows by stating that this happens at event $n$ if and only if (1) the switch was not triggered off at events $1, 2, \ldots, n-1$ and (2) that the switch is triggered off at event $n$. For the first part, note that in general at a given event $k<n$, given that the switch is still on, the probability that the switch is not triggered off at this event $k$ is equal to $1-p_k$. In particular, the probability that the switch was not triggered off at event $1$ is $1-p_1$. Given that the switch was not triggered off at event $1$, the probability that it is not triggered at event $2$ is $1-p_2$, and so on. For the second part, the probability that the switch is triggered off at event $n$ is simply $p_n$. Since all draws are independent, probabilities are multiplied and the probability that the switch is triggered off for the first time at event $n$ is equal to: $$ (1 - p_1) \times (1 - p_2) \times \ldots \times (1 - p_{n-1}) \times p_n $$ which confirms the intuition in the OP. What may be confusing at first in the OP is the fact that the problem is set up as a temporal sequence and that we may need to consider the probability space of all "trajectories" of trigger events. Yet the problem is simpler as one asks in the OP the probability that event $n$ is triggered for the first time, and thus that all previous events are not triggered, and thus irrespective of a sequence (as would be explored in other answers).
Probability that a device turns off in n seconds To answer this question and confirm the answer that is proposed in the OP, I would first propose to slightly rephrase it as : "A device is initially on and experiences a sequence of events $1, 2, \ld
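A short simulation of my own (arbitrary trigger probabilities) confirms the product formula: draw independent Bernoulli triggers with probabilities $p_1, \dots, p_K$, record the first event that fires, and compare the empirical frequencies with $(1-p_1)\cdots(1-p_{n-1})\,p_n$.

set.seed(1)
p <- c(0.1, 0.3, 0.2, 0.4)                 # trigger probabilities p_1..p_4
K <- length(p)

first.off <- function(p) {
  hit <- which(runif(length(p)) < p)       # events whose trigger fires
  if (length(hit)) min(hit) else NA        # first time the device turns off (NA: never)
}
sim <- replicate(1e5, first.off(p))

empirical <- table(factor(sim, levels = 1:K)) / length(sim)
theory    <- sapply(1:K, function(n) prod(1 - p[seq_len(n - 1)]) * p[n])
round(rbind(empirical, theory), 3)         # the two rows should agree closely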
50,267
Distribution of $\langle x^2,a\rangle$ where $x$ is a random direction?
Let's start with $m(a)=E[\langle x^2,a\rangle]$ and $s(a)=E[\langle x^2,a\rangle^2]$, so that the variance will be $s(a)-m(a)^2$. We will calculate these by expressing $x$ in $n$-dimensional spherical coordinates, so \begin{align} & & x_1&=\cos(\phi_1)\\ 0\ \le\ &\phi_1\le\pi & x_2&=\sin(\phi_1)\cos(\phi_2)\\ 0\ \le\ &\phi_2\le\pi & x_3&=\sin(\phi_1)\sin(\phi_2)\cos(\phi_3)\\ &\vdots & \vdots\\ 0\ \le\ & \phi_{n-1}\le\pi & x_{n-1}&=\sin(\phi_1)\cdots\sin(\phi_{n-2})\cos(\phi_{n-1})\\ 0\ \le\ &\phi_n\le2\pi & x_n&=\sin(\phi_1)\cdots\sin(\phi_{n-2})\sin(\phi_{n-1})\\ \end{align} and the element of $n-1$-dimensional surface area is $$dS = \sin^{n-2}(\phi_1)\sin^{n-3}(\phi_2)\cdots \sin(\phi_{n-2})\ d\phi_1\, d\phi_2 \cdots d\phi_n$$ We begin with some special cases, letting $b_i$ be the vector that agrees with $a$ on the $i^{th}$ coordinate and is zero elsewhere. First, \begin{align} m(b_1)&=\frac {\int\cdots\int a_1\cos^2(\phi_1)\, dS} {\int\cdots\int dS}\\ &=\frac{\int a_1\cos^2(\phi_1)\sin^{n-2}(\phi_1)\,d\phi_1}{\int \sin^{n-2}(\phi_1)\,d\phi_1}\\ &=a_1\left(1-\frac{\int \sin^n(\phi_1)\,d\phi_1} {\int \sin^{n-2}(\phi_1)\,d\phi_1}\right)\\ &=a_1\left(1-\frac{n-1}{n}\right)\\ &=a_1/n \end{align} where the integrals over higher $\phi_i$ cancel, and the final ratio uses integration by parts. Since $m$ must be symmetric in all the coordinates, we get \begin{align} m(b_i)&=b_i/n\\ m(a)=m\left(\sum b_i\right)&=\sum m(b_i)=\frac{1}{n}\sum a_i \end{align} Similarly, \begin{align} s(b_i)&=3a_i^2/(n^2+2n)\\ s(b_i+b_j)-s(b_i)-s(b_j)&=2a_ia_j/(n^2+2n) \end{align} Now by the polynomial identity $$\left(\sum x_i\right)^2=\sum_i x_i^2+\sum_{i<j} \left((x_i+x_j)^2 - x_i^2-x_j^2\right)$$ we also have \begin{align} s(a) &=s\left(\sum b_i\right)\\ &=\sum_i s(b_i)+\sum_{i<j}\left(s(b_i+b_j)-s(b_i)-s(b_j)\right)\\ &=\sum_i \frac{3a_i^2}{n^2+2n}+\sum_{i<j}\frac{2a_ia_j}{n^2+2n} \end{align} So finally, the variance is \begin{align} v&=s(a)-m(a)^2\\ &=\sum_{i}a_i^2\left(\frac{3}{n^2+2n}-\frac{1}{n^2}\right) + \sum_{i<j} a_ia_j\left(\frac{2}{n^2+2n}-\frac{2}{n^2}\right)\\ &=\frac{2}{n^2(n+2)}\left((n-1)\sum_{i}a_i^2 - 2\sum_{i<j} a_ia_j\right)\\ &=\frac{2}{n^2(n+2)}\sum_{i<j}(a_i-a_j)^2 \end{align}
Distribution of $\langle x^2,a\rangle$ where $x$ is is a random direction?
Let's start with $m(a)=E[\langle x^2,a\rangle]$ and $s(a)=E[\langle x^2,a\rangle^2]$, so that the variance will be $s(a)-m(a)^2$. We will calculate these by expressing $x$ in $n$-dimensional spherical
Distribution of $\langle x^2,a\rangle$ where $x$ is is a random direction? Let's start with $m(a)=E[\langle x^2,a\rangle]$ and $s(a)=E[\langle x^2,a\rangle^2]$, so that the variance will be $s(a)-m(a)^2$. We will calculate these by expressing $x$ in $n$-dimensional spherical coordinates, so \begin{align} & & x_1&=\cos(\phi_1)\\ 0\ \le\ &\phi_1\le\pi & x_2&=\sin(\phi_1)\cos(\phi_2)\\ 0\ \le\ &\phi_2\le\pi & x_3&=\sin(\phi_1)\sin(\phi_2)\cos(\phi_3)\\ &\vdots & \vdots\\ 0\ \le\ & \phi_{n-1}\le\pi & x_{n-1}&=\sin(\phi_1)\cdots\sin(\phi_{n-2})\cos(\phi_{n-1})\\ 0\ \le\ &\phi_n\le2\pi & x_n&=\sin(\phi_1)\cdots\sin(\phi_{n-2})\sin(\phi_{n-1})\\ \end{align} and the element of $n-1$-dimensional surface area is $$dS = \sin^{n-2}(\phi_1)\sin^{n-3}(\phi_2)\cdots \sin(\phi_{n-2})\ d\phi_1\, d\phi_2 \cdots d\phi_n$$ We begin with some special cases, letting $b_i$ be the vector that agrees with $a$ on the $i^{th}$ coordinate and is zero elsewhere. First, \begin{align} m(b_1)&=\frac {\int\cdots\int a_1\cos^2(\phi_1)\, dS} {\int\cdots\int dS}\\ &=\frac{\int a_1\cos^2(\phi_1)\sin^{n-2}(\phi_1)\,d\phi_1}{\int \sin^{n-2}(\phi_1)\,d\phi_1}\\ &=a_1\left(1-\frac{\int \sin^n(\phi_1)\,d\phi_1} {\int \sin^{n-2}(\phi_1)\,d\phi_1}\right)\\ &=a_1\left(1-\frac{n-1}{n}\right)\\ &=a_1/n \end{align} where the integrals over higher $\phi_i$ cancel, and the final ratio uses integration by parts. Since $m$ must be symmetric in all the coordinates, we get \begin{align} m(b_i)&=b_i/n\\ m(a)=m\left(\sum b_i\right)&=\sum m(b_i)=\frac{1}{n}\sum a_i \end{align} Similarly, \begin{align} s(b_i)&=3a_i^2/(n^2+2n)\\ s(b_i+b_j)-s(b_i)-s(b_j)&=2a_ia_j/(n^2+2n) \end{align} Now by the polynomial identity $$\left(\sum x_i\right)^2=\sum_i x_i^2+\sum_{i<j} \left((x_i+x_j)^2 - x_i^2-x_j^2\right)$$ we also have \begin{align} s(a) &=s\left(\sum b_i\right)\\ &=\sum_i s(b_i)+\sum_{i<j}\left(s(b_i+b_j)-s(b_i)-s(b_j)\right)\\ &=\sum_i \frac{3a_i^2}{n^2+2n}+\sum_{i<j}\frac{2a_ia_j}{n^2+2n} \end{align} So finally, the variance is \begin{align} v&=s(a)-m(a)^2\\ &=\sum_{i}a_i^2\left(\frac{3}{n^2+2n}-\frac{1}{n^2}\right) + \sum_{i<j} a_ia_j\left(\frac{2}{n^2+2n}-\frac{2}{n^2}\right)\\ &=\frac{2}{n^2(n+2)}\left((n-1)\sum_{i}a_i^2 - 2\sum_{i<j} a_ia_j\right)\\ &=\frac{2}{n^2(n+2)}\sum_{i<j}(a_i-a_j)^2 \end{align}
Distribution of $\langle x^2,a\rangle$ where $x$ is is a random direction? Let's start with $m(a)=E[\langle x^2,a\rangle]$ and $s(a)=E[\langle x^2,a\rangle^2]$, so that the variance will be $s(a)-m(a)^2$. We will calculate these by expressing $x$ in $n$-dimensional spherical
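The closed forms for the mean and variance can be checked by Monte Carlo (a quick sketch of my own, with an arbitrary vector $a$): draw uniform random directions by normalising Gaussian vectors, form $\langle x^2, a\rangle$, and compare the sample moments with the formulas above.

set.seed(1)
n <- 5
a <- c(2, -1, 0.5, 3, 1)
n.rep <- 2e5

g <- matrix(rnorm(n.rep * n), ncol = n)
x <- g / sqrt(rowSums(g^2))        # uniform random directions on the sphere
s <- as.vector(x^2 %*% a)          # <x^2, a> for each draw

# Theoretical mean and variance from the derivation above
m.theory <- sum(a) / n
sum.sq.diff <- sum(outer(a, a, function(u, v) (u - v)^2)) / 2   # sum over i < j of (a_i - a_j)^2
v.theory <- 2 / (n^2 * (n + 2)) * sum.sq.diff

c(mean.mc = mean(s), mean.theory = m.theory)
c(var.mc  = var(s),  var.theory  = v.theory)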
50,268
Internal validation steps
Expanding on the comment from gung - Reinstate Monica: You have correctly grasped the major point. With this bootstrap approach you are validating your model-building process. Under the bootstrap principle, resampling from your data set is akin to taking data sets from the underlying population. Thus, if repeating your modeling process on multiple bootstrap samples from the data fits the full data set well, it's reasonable to assume that your modeling process applied to the full data set will reasonably represent the situation in the underlying population. The instability of variable selection you note is particularly an issue in LASSO. Hastie et al illustrate bootstrap evaluation of LASSO modeling in Section 6.2 of Statistical Learning with Sparsity, with cross-validated choice of penalty and consequent variable selection performed on each resample. They show graphical displays of the distributions of regression-coefficient estimates among the models and of the frequency of omission of each predictor. If your modeling process involves predictor selection, you might want to generate similar displays. This is one reason why Steps 2 and 6 in Harrell's list are so helpful, quoted here in part: Formulate good hypotheses that lead to specification of relevant candidate predictors and possible interactions. If the number of terms fitted or tested in the modeling process ... is too large in comparison with the number of outcomes in the sample, use data reduction (ignoring Y) until the number of remaining free variables needing regression coefficients is tolerable. ... Alternatively, use penalized estimation with the entire set of variables. If you start with a well-specified list of predictors based on your understanding of the subject matter, with either a number of predictors appropriate to the size of your data set or penalization like ridge regression that doesn't perform variable selection, there will be no data-driven variable selection to examine in the bootstrapping. That simplifies the validation and calibration via bootstrapping.
Internal validation steps
Expanding on the comment from gung - Reinstate Monica: You have correctly grasped the major point. With this bootstrap approach you are validating your model-building process. Under the bootstrap prin
Internal validation steps Expanding on the comment from gung - Reinstate Monica: You have correctly grasped the major point. With this bootstrap approach you are validating your model-building process. Under the bootstrap principle, resampling from your data set is akin to taking data sets from the underlying population. Thus, if repeating your modeling process on multiple bootstrap samples from the data fits the full data set well, it's reasonable to assume that your modeling process applied to the full data set will reasonably represent the situation in the underlying population. The instability of variable selection you note is particularly an issue in LASSO. Hastie et al illustrate bootstrap evaluation of LASSO modeling in Section 6.2 of Statistical Learning with Sparsity, with cross-validated choice of penalty and consequent variable selection performed on each resample. They show graphical displays of the distributions of regression-coefficient estimates among the models and of the frequency of omission of each predictor. If your modeling process involves predictor selection, you might want to generate similar displays. This is one reason why Steps 2 and 6 in Harrell's list are so helpful, quoted here in part: Formulate good hypotheses that lead to specification of relevant candidate predictors and possible interactions. If the number of terms fitted or tested in the modeling process ... is too large in comparison with the number of outcomes in the sample, use data reduction (ignoring Y) until the number of remaining free variables needing regression coefficients is tolerable. ... Alternatively, use penalized estimation with the entire set of variables. If you start with a well-specified list of predictors based on your understanding of the subject matter, with either a number of predictors appropriate to the size of your data set or penalization like ridge regression that doesn't perform variable selection, there will be no data-driven variable selection to examine in the bootstrapping. That simplifies the validation and calibration via bootstrapping.
Internal validation steps Expanding on the comment from gung - Reinstate Monica: You have correctly grasped the major point. With this bootstrap approach you are validating your model-building process. Under the bootstrap prin
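As a rough sketch of what such a display could be based on (my own code, assuming the glmnet package and a toy data-generating process): repeat the cross-validated LASSO on bootstrap resamples and tabulate how often each predictor is selected.

library(glmnet)

set.seed(1)
n <- 100; p <- 10
X <- matrix(rnorm(n * p), n, p)
y <- X[, 1] - 0.5 * X[, 2] + rnorm(n)     # only the first two predictors matter

B <- 100
selected <- matrix(0, B, p)
for (b in 1:B) {
  i  <- sample(n, replace = TRUE)                       # bootstrap resample
  cv <- cv.glmnet(X[i, ], y[i])                         # CV choice of the penalty on this resample
  beta <- as.matrix(coef(cv, s = "lambda.min"))[-1, 1]  # drop the intercept
  selected[b, ] <- as.numeric(beta != 0)
}
colMeans(selected)   # selection frequency of each predictor across resamples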
50,269
Internal validation steps
Normally, validation (the validation sets) happens inside the cross-validation function used for hyperparameter tuning, so you do not really care about one specific validation set; you usually just end up with a train and a test split. Without access to your text, what I believe the quoted part means is this: when we model the outcome with different estimators (say, different regressions) and apply different strategies for preparing the data, we must make sure that every technique the data undergoes is applied to every CV subsample in the same way, AND that the subsamples are the same for all estimators, so that performance is really comparable. This is also the reason why we normally do not do this manually. In other words: make sure you do it for 'everyone' in the same way; no 'one' should be treated unequally. Does this help, or am I misinterpreting your question?
Internal validation steps
Normally, validation sets or validation is done inside of the cross validation function for hyperparameter testing. So, you actually don't care about a specific validation set. You normally end up wit
Internal validation steps Normally, validation sets or validation is done inside of the cross validation function for hyperparameter testing. So, you actually don't care about a specific validation set. You normally end up with train, test. What I believe is or could be the meaning of the part you quoted, without having access to the text of yours, is, that when we model the outcome of different estimators, may it be different regressions and when applying different strategies to deal with the data for regression, that we make sure all techniques our data undergoes is applied to every subsample of the cv in the same way AND that the subsamples for all estimators are even so that we can really compare performance. This is the reason why we normally not do this by manually. So in other words what you do, make sure you do it for 'everyone' in the same way. No 'one' should be treated unequally. Does this help you or do I misinterpret your question?
Internal validation steps Normally, validation sets or validation is done inside of the cross validation function for hyperparameter testing. So, you actually don't care about a specific validation set. You normally end up wit
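To illustrate the point about treating every CV subsample the same way, here is a minimal base-R sketch of my own (not from the answer): any preprocessing (here, centring and scaling) is estimated on the training part of each fold only, and the same fold assignment would be reused for every estimator being compared.

set.seed(1)
n <- 100
x <- rnorm(n); y <- 2 * x + rnorm(n)
folds <- sample(rep(1:5, length.out = n))   # one fold assignment, reused for all models

cv.mse <- numeric(5)
for (k in 1:5) {
  tr <- folds != k; te <- !tr
  # preprocessing fitted on the training fold only, then applied to the test fold
  m <- mean(x[tr]); s <- sd(x[tr])
  x.tr <- (x[tr] - m) / s
  x.te <- (x[te] - m) / s
  fit  <- lm(y[tr] ~ x.tr)
  pred <- coef(fit)[1] + coef(fit)[2] * x.te
  cv.mse[k] <- mean((y[te] - pred)^2)
}
mean(cv.mse)   # cross-validated error with fold-internal preprocessing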
50,270
Linear mixed models for non-independent data with an unbalanced design?
You say that the factor Year is nested in Plant. If Year is nested within Plant, then the model should be lmer(Productivity~Temperature +(1|Plant/Year),data = data) or equivalently: lmer(Productivity~Temperature +(1|Plant) + (1|Plant:Year),data = data) So, just to clarify, this means that each Year belongs to one and only one Plant. So year 1 could belong to plant 1, and year 2 could also belong to plant 1, which means that for each year, one and only one plant was measured. For year 3, for example, this could belong to plant 2 (but not plant 1). The nested structure looks like Plant1 Plant2 Plant3 / \ / \ / \ Year1 Year2 Year3 Year4 Year5 Year6 Edit: It appears from the comments that the design is partially crossed (partially nested). This might look something like Plant1 Plant2 Plant3 /\ / \ \ / \ / \ / \ X \ / \ / \ / \ \ Year1 Year2 Year3 Year4 In that case, the appropriate random structure is: lmer(Productivity~Temperature + (1|Plant) + (1|Year), data = data) More detail about nested and crossed random effects is here: Crossed vs nested random effects: how do they differ and how are they specified correctly in lme4?
Linear Mixed Models non-independent data with unbalanced design and non-independent data?
You say that the factor Year nested in Plant If Year is nested within Plant. In that case, the moel should be lmer(Productivity~Temperature +(1|Plant/Year),data = data) or eqivalently: lmer(Product
Linear Mixed Models non-independent data with unbalanced design and non-independent data? You say that the factor Year nested in Plant If Year is nested within Plant. In that case, the moel should be lmer(Productivity~Temperature +(1|Plant/Year),data = data) or eqivalently: lmer(Productivity~Temperature +(1|Plant) + (1|Plant:Year),data = data) So, just to clarify, this means that each Year belongs to one and only one Plant. So year 1 could belong to plant 1, and year 2 could also belong to plant 1, which means that for each year, one and only 1 plant was measured. For year 3, for example, this could belong to plant 2 (but not plant 1). The nested structure looks like Plant1 Plant2 Plant3 / \ / \ / \ Year1 Year2 Year3 Year4 Year5 Year6 Edit: It appears from the comments that the design is partially crossed (partially nested). This might look something like Plant1 Plant2 Plant3 /\ / \ \ / \ / \ / \ X \ / \ / \ / \ \ Year1 Year2 Year3 Year4 In that case, the appropriate random structure is: lmer(Productivity~Temperature + (1|Plant) + (1|Year), data = data) More detail about nested and crossed random effects is here: Crossed vs nested random effects: how do they differ and how are they specified correctly in lme4?
Linear Mixed Models non-independent data with unbalanced design and non-independent data? You say that the factor Year nested in Plant If Year is nested within Plant. In that case, the moel should be lmer(Productivity~Temperature +(1|Plant/Year),data = data) or eqivalently: lmer(Product
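A minimal simulated example of the partially crossed case (my own sketch, assuming the lme4 package; variable names follow the answer):

library(lme4)

set.seed(1)
d <- expand.grid(Plant = factor(1:6), Year = factor(1:4), rep = 1:3)
d <- d[sample(nrow(d), 50), ]                 # drop rows: not every Plant is seen in every Year
d$Temperature  <- rnorm(nrow(d), 15, 3)
d$Productivity <- 2 + 0.3 * d$Temperature +
  rnorm(6, sd = 1)[d$Plant] +                 # plant-level random intercepts
  rnorm(4, sd = 1)[d$Year]  +                 # year-level random intercepts
  rnorm(nrow(d), sd = 0.5)                    # residual noise

fit <- lmer(Productivity ~ Temperature + (1 | Plant) + (1 | Year), data = d)
summary(fit)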
50,271
Intuitive understanding of least squares slope formula
Define "obvious"! What's obvious to you doesn't need be obvious to someone else. So I can only offer my perspective. First, I hope it's obvious that the slope $\beta$ of the line doesn't change if we shift the data around; only the intercept changes. So, to simplify the formulas, we can, without loss of generality, assume that our data are centred around the origin: $\bar x = \bar y = 0$. In that case the covariance-based formula simplifies to \begin{align} \beta &= \frac{Cov(X, Y)}{Var(X)} \\ &= \frac{\sum_i (x_i - \bar x) (y_i - \bar y)}{\sum_i (x_i - \bar x)^2} \\ &= \frac{\sum_i x_i y_i}{\sum_i x_i^2} \\ \end{align} This is, incidentally, the same formula we obtain by minimising least squares. From the above assumption of the centred data it follows that the intercept is zero, so it suffices to solve: $$ \min_{\beta} \frac{1}{2} \sum_i (y_i - \beta x_i)^2 $$ We find the minimum by setting the first derivative to zero: $$ \sum_i (y_i - \beta x_i)x_i = 0 $$ Solving for $\beta$ produces again: $$ \beta = \frac{\sum_i x_i y_i}{\sum_i x_i^2} $$ Now, agreed, this is again mathematical formalism. Another, more "intuitive" way of looking at it is to observe that each $x_i y_i$ is the area of a rectangle with sides $x_i$ and $y_i$. Equally, each $x_i^2$ is the area of a square with sides $x_i$. So the sum in the numerator above can be interpreted as the average rectangle area, and in the denominator as the average square (both scaled by the number of points, $N$, but these cancel out, so we can ignore them). The slope $\beta$ is then the ratio of these two areas. Now, if we construct the rectangle and the square to have the same base, $\tilde x$: $$ \tilde x = \sqrt{\sum_i x_i^2} $$ then the other side of the "average rectangle" is given by $$ \tilde y = \frac{\sum_i x_i y_i}{\tilde x} $$ It is then straightforward to see that $$ \beta = \frac{\tilde y}{\tilde x} $$ or, graphically,
Intuitive understanding of least squares slope formula
Define "obvious"! What's obvious to you doesn't need be obvious to someone else. So I can only offer my perspective. First, I hope it's obvious that the slope $\beta$ of the line doesn't change if we
Intuitive understanding of least squares slope formula Define "obvious"! What's obvious to you doesn't need be obvious to someone else. So I can only offer my perspective. First, I hope it's obvious that the slope $\beta$ of the line doesn't change if we shift the data around; only the intercept changes. So, to simplify the formulas, we can, without loss of generality, assume that our data are centred around the origin: $\bar x = \bar y = 0$. In that case the covariance-based formula simplifies to \begin{align} \beta &= \frac{Cov(X, Y)}{Var(X)} \\ &= \frac{\sum_i (x_i - \bar x) (y_i - \bar y)}{\sum_i (x_i - \bar x)^2} \\ &= \frac{\sum_i x_i y_i}{\sum_i x_i^2} \\ \end{align} This is, incidentally, the same formula we obtain by minimising least squares. From the above assumption of the centred data it follows that the intercept is zero, so it suffices to solve: $$ \min_{\beta} \frac{1}{2} \sum_i (y_i - \beta x_i)^2 $$ We find the minimum by setting the first derivative to zero: $$ \sum_i (y_i - \beta x_i)x_i = 0 $$ Solving for $\beta$ produces again: $$ \beta = \frac{\sum_i x_i y_i}{\sum_i x_i^2} $$ Now, agreed, this is again mathematical formalism. Another, more "intuitive" way of looking at it is to observe that each $x_i y_i$ is the area of a rectangle with sides $x_i$ and $y_i$. Equally, each $x_i^2$ is the area of a square with sides $x_i$. So the sum in the numerator above can be interpreted as the average rectangle area, and in the denominator as the average square (both scaled by the number of points, $N$, but these cancel out, so we can ignore them). The slope $\beta$ is then the ratio of these two areas. Now, if we construct the rectangle and the square to have the same base, $\tilde x$: $$ \tilde x = \sqrt{\sum_i x_i^2} $$ then the other side of the "average rectangle" is given by $$ \tilde y = \frac{\sum_i x_i y_i}{\tilde x} $$ It is then straightforward to see that $$ \beta = \frac{\tilde y}{\tilde x} $$ or, graphically,
Intuitive understanding of least squares slope formula Define "obvious"! What's obvious to you doesn't need be obvious to someone else. So I can only offer my perspective. First, I hope it's obvious that the slope $\beta$ of the line doesn't change if we
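A quick numerical check of my own that the different expressions agree: after centring, the covariance/variance ratio, the $\sum_i x_i y_i / \sum_i x_i^2$ form, and the ratio $\tilde y / \tilde x$ all give the same slope as lm().

set.seed(1)
x <- rnorm(20); y <- 1 + 2 * x + rnorm(20)
xc <- x - mean(x); yc <- y - mean(y)      # centre the data

b1 <- cov(x, y) / var(x)                  # covariance-based formula
b2 <- sum(xc * yc) / sum(xc^2)            # centred sum-of-products form
x.tilde <- sqrt(sum(xc^2))                # side of the "average square"
y.tilde <- sum(xc * yc) / x.tilde         # other side of the "average rectangle"
b3 <- y.tilde / x.tilde
c(b1, b2, b3, coef(lm(y ~ x))[2])         # all four agree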
50,272
Intuitive understanding of least squares slope formula
Answer 1 (simple): The simple answer is that $\beta$ is the sensitivity of the response $y$ with respect to the regressor $X$ (assuming the relationship is true). Its estimator $\hat \beta$, which is what you quoted, is the result (the slope) of "forcing" (i.e. fitting) a line (consisting of an intercept and slope) to go through all the points, under specific conditions (i.e. minimizing squared distance). If $\beta=1$ then $y$ moves at the same rate than $X$, i.e. if $X$ moves up/down 1 unit then $y$ also moves up/down 1 unit on average. More/less than 1 unit if $\beta \gt 1 / \beta<1$. Since the formula for $\hat \beta$ is only a special case with 1 single regressor and it also is the result of an optimization problem we could simply look at the formula as such, i.e. it's "just coincidence" that the covariance and variance come out. Answer 2 (advanced): But I think you are looking for something deeper than the above. In financial mathematics (more specifically: portfolio theory) there's a very interesting interpretation to precisely this formula (for the OLS estimator of the slope with 1 single regressor), which - even if maybe over-the-top - could be generalized beyond this field. It interprets the covariance as being the systemic risk and the variance as the non-systemic (or idiosyncratic) risk. Let me explain: In financial mathematics the CAPM (capital asset pricing model) is a formula for calculating the expected return of a financial instrument (e.g. share). According to financial theory (see Markowitz portfolio theory) the average return of an individual stock can be explained by the average return of the market. Without going into too much detail, the individual excess stock return (the response $y=r_i-r_f$) is the excess return of the market (the regressor $X$) times the beta factor: $$r_i = r_f + \beta \cdot\underbrace{(r_M-rf)}_{\textrm{excess return market}}$$ where $r_f$ is the risk-free rate (can be ignored at this point). The beta factor $\beta$ is defined as $$ \beta = \frac{Cov(r_i,r_M)}{Var(r_M)} $$ and it is interpreted as follows: if the investor that is interested in calculating the return $r_i$ for stock $i$ is well-diversified, then they won't care about the individual risk of the stock, i.e. risk inherent to that specific company beyond the current state of the economy, here defined as the variance, since that can be diversified away. Instead, what they will be interested in is only the systematic risk of the stock, i.e. the risk that the company has due to the underlying state of the economy. To make it clear what is meant by "diversifying away": imagine we have 1 single stock, represented by a single random variable $X_1$. Obviously, the "risk" we have will be the whole variance $\sigma_1^2$ (assuming we regard both up and down swings as risk). But what if we add another stock, represented by the random variable $X_2$. Then the variance of our portfolio $0.5*X_1+0.5*X_2$ (i.e. sum of equal proportions of all random variables) will now be $0.5^2\sigma_1^2+0.5^2\sigma_2^2+0.5^2\sigma_{1,2}$ where $\sigma_{1,2}$ is the covariance between them. If we assume we keep adding random variables $X_3, X_4, \dots X_\infty$ with the same variance and covariances to our portfolio, something interesting happens: we will converge towards the covariance (assuming the covariance is smaller than the variance). Here is a simple simulation to show what I mean: The now infinite (or very large) portfolio is the market (i.e. all stocks weighted equally and summed up). 
Obviously, the variance is now as small as can be and is equal to the individual covariance. Hence, if we add another stock to it, given that we already have this whole portfolio, the variance of our portfolio will only grow by the additional covariance of the stock with the market (i.e. with all other stocks), not by its individual variance. Hence, we are only interested in the covariance. So, the beta-factor is the ratio of the newly added risk to the portfolio (covariance of the stock with the market) to the risk of the portfolio (variance of the market). Interestingly enough, in textbooks the beta-factor is initially defined as the formula above and only thereafter is it mentioned that it can be estimated via regression, so pretty much the inverse of the logic of the first answer. (An interesting additional note: if the assumption of the fully diversified investor is not made (as mentioned above), the formula for the beta-factor changes to $\beta = Var(r_i)/Var(r_M)$, i.e. to include the idiosyncratic risk of the stock as well. This is called the total beta.) Conclusion (TL;DR): Now you could try and generalize this: $\hat \beta$ is the ratio of the variance that $X$ adds to $y$, if we interpret $y$ as a portfolio of very many $X$s, to the variance of $y$, i.e. it is the proportion/factor of the "systemic" risk that $X$ brings to $y$ relative to the "systemic" risk as a whole (where the "system" is $y$). (Mind you, take this with a pinch of salt, I haven't given it much more thought. I personally stick with Answer 1 since it's generalizable beyond 1 regressor, when the formula for the OLSE changes.)
Intuitive understanding of least squares slope formula
Answer 1 (simple): The simple answer is that $\beta$ is the sensitivity of the response $y$ with respect to the regressor $X$ (assuming the relationship is true). Its estimator $\hat \beta$, which is
Intuitive understanding of least squares slope formula Answer 1 (simple): The simple answer is that $\beta$ is the sensitivity of the response $y$ with respect to the regressor $X$ (assuming the relationship is true). Its estimator $\hat \beta$, which is what you quoted, is the result (the slope) of "forcing" (i.e. fitting) a line (consisting of an intercept and slope) to go through all the points, under specific conditions (i.e. minimizing squared distance). If $\beta=1$ then $y$ moves at the same rate than $X$, i.e. if $X$ moves up/down 1 unit then $y$ also moves up/down 1 unit on average. More/less than 1 unit if $\beta \gt 1 / \beta<1$. Since the formula for $\hat \beta$ is only a special case with 1 single regressor and it also is the result of an optimization problem we could simply look at the formula as such, i.e. it's "just coincidence" that the covariance and variance come out. Answer 2 (advanced): But I think you are looking for something deeper than the above. In financial mathematics (more specifically: portfolio theory) there's a very interesting interpretation to precisely this formula (for the OLS estimator of the slope with 1 single regressor), which - even if maybe over-the-top - could be generalized beyond this field. It interprets the covariance as being the systemic risk and the variance as the non-systemic (or idiosyncratic) risk. Let me explain: In financial mathematics the CAPM (capital asset pricing model) is a formula for calculating the expected return of a financial instrument (e.g. share). According to financial theory (see Markowitz portfolio theory) the average return of an individual stock can be explained by the average return of the market. Without going into too much detail, the individual excess stock return (the response $y=r_i-r_f$) is the excess return of the market (the regressor $X$) times the beta factor: $$r_i = r_f + \beta \cdot\underbrace{(r_M-rf)}_{\textrm{excess return market}}$$ where $r_f$ is the risk-free rate (can be ignored at this point). The beta factor $\beta$ is defined as $$ \beta = \frac{Cov(r_i,r_M)}{Var(r_M)} $$ and it is interpreted as follows: if the investor that is interested in calculating the return $r_i$ for stock $i$ is well-diversified, then they won't care about the individual risk of the stock, i.e. risk inherent to that specific company beyond the current state of the economy, here defined as the variance, since that can be diversified away. Instead, what they will be interested in is only the systematic risk of the stock, i.e. the risk that the company has due to the underlying state of the economy. To make it clear what is meant by "diversifying away": imagine we have 1 single stock, represented by a single random variable $X_1$. Obviously, the "risk" we have will be the whole variance $\sigma_1^2$ (assuming we regard both up and down swings as risk). But what if we add another stock, represented by the random variable $X_2$. Then the variance of our portfolio $0.5*X_1+0.5*X_2$ (i.e. sum of equal proportions of all random variables) will now be $0.5^2\sigma_1^2+0.5^2\sigma_2^2+0.5^2\sigma_{1,2}$ where $\sigma_{1,2}$ is the covariance between them. If we assume we keep adding random variables $X_3, X_4, \dots X_\infty$ with the same variance and covariances to our portfolio, something interesting happens: we will converge towards the covariance (assuming the covariance is smaller than the variance). Here is a simple simulation to show what I mean: The now infinite (or very large) portfolio is the market (i.e. 
all stocks weighted equally and summed up). Obviously, the variance is now as small as can be and is equal to the individual covariance. Hence, if we add another stock to it, given that we already have this whole portfolio, the variance of our portfolio will only grow by the additional covariance of the stock with the market (i.e. with all other stocks), not by its individual variance. Hence, we are only interested in the covariance. So, the beta-factor is the ratio of the newly added risk to the portfolio (cov of stock to market) to the risk of the portfolio (var of market). Interestingly enough in textbooks the beta-factor is initially defined as the formula above and only thereafter it is mentioned that it can be estimated via regression, so pretty much the inverse of the logic of the first answer. (An interesting additional info: if the assumption of the fully diversified investor is not made (as mentioned above), the formula for the beta-factor changes to $\beta = Var(r_i)/Var(r_M)$, i.e. to include the idiosyncratic risk of the stock as well. This is called the total beta.) Conclusion (TL;DR): Now you could try and generalize this: $\hat \beta$ is the ratio of the variance that $X$ adds to $y$, if we interpret $y$ as a portfolio of very many $X$s, to the the variance of $y$, i.e. it is the proportion/factor of the "systemic" risk that $X$ brings to $y$ to the "systemic" risk as whole (where the "system" is $y$). (Mind you, take this with a pinch of salt, I haven't given it much more thought. I personally stick with Answer 1 since it's generalizabe beyond 1 regressor, when the formula for the OLSE changes.)
Intuitive understanding of least squares slope formula Answer 1 (simple): The simple answer is that $\beta$ is the sensitivity of the response $y$ with respect to the regressor $X$ (assuming the relationship is true). Its estimator $\hat \beta$, which is
50,273
Regression with variance as outcome
For your example, for some parameter values $c,d$ the variance will be negative ... for that reason, a log link function is often used for the variance in such models. But such models (and many others) can be fitted with extensions of generalized linear models (glm's), which also introduce link functions and linear predictors for the variance (and maybe even for other parameters). One such family of models is known as gamlss; see the gamlss website for information. For some examples, see Compare shape and scale parameters between Weibull distributions and Are there better approaches than the weighted mean?
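A minimal sketch of the kind of model described above, using the gamlss package (assumed installed) with a normal (NO) response: the mean gets one linear predictor and log(sigma) gets another, which keeps the fitted variance positive. Data and coefficient values are made up.
library(gamlss)
set.seed(1)
dat <- data.frame(x = runif(200))
dat$y <- rnorm(200, mean = 1 + 2 * dat$x, sd = exp(-1 + 1.5 * dat$x))  # variance grows with x
fit <- gamlss(y ~ x,                 # linear predictor for the mean
              sigma.formula = ~ x,   # linear predictor for sigma (log link by default for NO)
              family = NO, data = dat)
summary(fit)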
Regression with variance as outcome
For your example, for some parameter values $c,d$ the variance will be negative ... for that reason, in such models often is used a log link function for the variance. But such models (and many others
Regression with variance as outcome For your example, for some parameter values $c,d$ the variance will be negative ... for that reason, a log link function is often used for the variance in such models. But such models (and many others) can be fitted with extensions of generalized linear models (glm's), which also introduce link functions and linear predictors for the variance (and maybe even for other parameters). One such family of models is known as gamlss; see the gamlss website for information. For some examples, see Compare shape and scale parameters between Weibull distributions and Are there better approaches than the weighted mean?
Regression with variance as outcome For your example, for some parameter values $c,d$ the variance will be negative ... for that reason, in such models often is used a log link function for the variance. But such models (and many others
50,274
Identifying the correlation between a slope and a level
A slightly different characterisation of the problem Instead of these separate variations/errors in $\alpha$ and $\beta$ you could describe the variance of $Y_i$ directly. A common way (which you see a lot on this site) is to describe a linear function like $$y_i = a+bx_i + \epsilon \quad \text{where} \quad \epsilon \sim N(0,\sigma^2)$$ or $$y_i|x_i \sim N(a+bx_i,\sigma^2)$$ The above is with normal distributed errors. But you can use other distributions too. In general you could describe the mean and variance for $Y_i$. Conditional on $X_i$ it is often like (the case for homogeneous errors, independent of $x_i$) $$\begin{array}{rcl} \text{E}[y_i|x_i] &=& \alpha + \beta x_i \\ \text{Var}[y_i|x_i] &=& \sigma^2 \end{array}$$ (In the case of general linear models a description where $\text{Var}[y_i|x_i]$ is a function of $\text{E}[y_i|x_i]$ is also useful) Your case is very similar but now the variance of the error is not a constant $\sigma$ and it depends on $x_i$. $$\begin{array}{rcl} \text{E}[y_i|x_i] &=& \alpha + \beta x_i \\ \text{Var}[y_i|x_i] &=& \sigma_{\alpha\alpha} + 2 x_i \sigma_{\alpha\beta} + {x_i}^2 \sigma_{\beta\beta} \end{array}$$ where we use $\sigma_{ij}$ to indicate the variance or covariance. In the case $\alpha = 5, \beta = 3, \sigma_{\alpha\alpha} = \sigma_{\beta\beta} = 0.1, \sigma_{\alpha\beta} = 0.05$ and $X \sim Unif(-1,1)$ it will look like: It is a linear relationship with heteroscedasticity. We can estimate the variance and covariance of $\alpha$ and $\beta$ based on this heteroscedastic dependency of the variance of the error (which we might approximate with the residuals). Method of moments The method that you used is the method of moments. You expressed the expectation of $\tilde X_i Y_i^2$ for the population in terms of coefficients. Then you replace in the expressions the expectation for the population by the average for the sample to obtain estimates of the coefficients. (In your particular execution there is a small mistake by assuming that the expectation of $X_i^2\tilde X_i$ is zero. This is only true when $X_i$ is distributed symmetrical around zero) Least squares method A simpler approach might be to model the expectation of the square of the errors as a linear function of terms of $X_i$ and estimate it with the least squares method applied to the residuals. 
(It is simpler because it is straightforward and it will help to generalize the problem) The errors are distributed as: $$E(\epsilon_i^2) = \text{Var}[y_i|x_i] = \sigma_{\alpha\alpha} + 2 x_i \sigma_{\alpha\beta} + {x_i}^2 \sigma_{\beta\beta}$$ library(MASS) fit <- function(cMu, cSigma, n) { ### generate data coef <- mvrnorm(n,cMu,cSigma) X <- runif(n,-1,1) Y <- coef[,1]+coef[,2]*X ### model means mod <- lm(Y ~ X) res <- mod$residuals ### model covariance tabel modr <- lm(res^2 ~ 1 + I(2*X) + I(X^2)) ### using glm as a slight improvement to lm ### as the variance is not homogeneous but scales with mu ### (note that res^2 follows a chi-square distribution ### for which we have var = 2*mu) modr <- glm(res^2 ~ 1 + I(2*X) + I(X^2), family = quasi(link = "identity", variance = "mu"), start = coef(modr)) ### fitcov fitcov <- mean(X*Y^2)/(2*mean(X^2)) - prod(coef(mod)) ### return result ret <- c(coef(modr),fitcov) names(ret) <- c("alpha", "cov", "beta", "fitcov") return(ret) } ### settings set.seed(1) n <- 10^4 cSigma <- matrix(c(0.1,0.05, 0.05,0.1), 2, byrow = 1) cMu <- c(5,3) ### generate data and perform fitting fit(cMu,cSigma, 10^5) Maximum Likelihood I guess that you might also maximize the likelihood function (or a quasi-likelihood function if you do not see a particular distribution and stick to a formulation with only known conditional mean and variance). But I can not find a closed solution for this. It can be done computationally. I leave this as a separate problem as writing a function that solves it might make this answer too cluttered. In addition, I am not sure whether it will be much faster or more accurate than solving it with the method of moments or fitting the square of the residuals. Generalising Your problem with two equations can be solved in the same way. Now we have two sets of residuals $r_{1i}$ and $r_{2i}$ whose expectation of the products depend on the covariance of the $\alpha_1$, $\alpha_2$, $\beta_1$ and $\beta_2$. $$\begin{array}{rcl} \text{E}[r_{1i}r_{2i}|x_i] &=& \sigma_{\alpha_1\alpha_2} + x_i (\sigma_{\alpha_1\beta_2} + \sigma_{\alpha_2\beta_1}) + {x_i}^2 \sigma_{\beta_1\beta_2} \end{array}$$ You have indeed the term $(\sigma_{\alpha_1\beta_2} + \sigma_{\alpha_2\beta_1})$ whose terms can not be separated with this single equation. The dependency of $r_{1i}r_{2i}$ or $y_{1i}y_{2i}$ on $x_i$ is dependent on the sum but not the independent terms. 
If you would measure the $y_{1i}$ and ${y_{2i}}$ based on the same correlated $\alpha_1$, $\alpha_2$, $\beta_1$ and $\beta_2$, but with different $x_i$ (say $x_{1i}$ and $x_{2i}$) then you could separate the variables $$\begin{array}{rcl} \text{E}[r_{1i}r_{2i}|x_i] &=& \sigma_{\alpha_1\alpha_2} + x_{2i} \sigma_{\alpha_1\beta_2} + x_{1i} \sigma_{\alpha_2\beta_1} + x_{1i}x_{2i} \sigma_{\beta_1\beta_2} \end{array}$$ For what it is worth, here's a code that would compute the covariances (based on the linear fit of the residual term): fit2 <- function(cMu, cSigma, n) { ### generate data coef <- mvrnorm(n,cMu,cSigma) X <- runif(n,-1,1) Y1 <- coef[,1]+coef[,2]*X Y2 <- coef[,3]+coef[,4]*X ### model means mod1 <- lm(Y1 ~ X) res1 <- mod1$residuals mod2 <- lm(Y2 ~ X) res2 <- mod2$residuals ### model covariance tabel modr <- lm(I(res1*res2) ~ 1 + I(X) + I(X^2)) ### return result ret <- c(coef(modr)) names(ret) <- c("alpha-alpha", "alpha-beta", "beta-beta") return(ret) } ### settings set.seed(1) n <- 10^4 # a1, b1 , a2, b2 cSigma <- matrix(c(0.10,0.05,0.10,0.10, 0.05,0.10,0.10,0.10, 0.10,0.10,0.40,0.20, 0.10,0.10,0.20,0.40), 4, byrow = 1) # a1 , b1 , a2 , b2 cMu <- c( 5, 3, 5, 3) ### generate data and perform fitting fit2(cMu,cSigma, n)
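If you run fit2() above, a small add-on check (my own, not part of the answer's code): by the displayed formula for $\text{E}[r_{1i}r_{2i}|x_i]$, the three returned coefficients estimate $\sigma_{\alpha_1\alpha_2}$, $\sigma_{\alpha_1\beta_2}+\sigma_{\alpha_2\beta_1}$ and $\sigma_{\beta_1\beta_2}$, which correspond to these entries of cSigma (rows/columns ordered a1, b1, a2, b2):
### true values implied by cSigma, for comparison with the output of fit2(cMu, cSigma, n)
truth <- c("alpha-alpha" = cSigma[1, 3],
           "alpha-beta"  = cSigma[1, 4] + cSigma[2, 3],
           "beta-beta"   = cSigma[2, 4])
truth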
Identifying the correlation between a slope and a level
A slightly different characterisation of the problem Instead of these separate variations/errors in $\alpha$ and $\beta$ you could describe the variance of $Y_i$ directly. A common way (which you see
Identifying the correlation between a slope and a level A slightly different characterisation of the problem Instead of these separate variations/errors in $\alpha$ and $\beta$ you could describe the variance of $Y_i$ directly. A common way (which you see a lot on this site) is to describe a linear function like $$y_i = a+bx_i + \epsilon \quad \text{where} \quad \epsilon \sim N(0,\sigma^2)$$ or $$y_i|x_i \sim N(a+bx_i,\sigma^2)$$ The above is with normal distributed errors. But you can use other distributions too. In general you could describe the mean and variance for $Y_i$. Conditional on $X_i$ it is often like (the case for homogeneous errors, independent of $x_i$) $$\begin{array}{rcl} \text{E}[y_i|x_i] &=& \alpha + \beta x_i \\ \text{Var}[y_i|x_i] &=& \sigma^2 \end{array}$$ (In the case of general linear models a description where $\text{Var}[y_i|x_i]$ is a function of $\text{E}[y_i|x_i]$ is also useful) Your case is very similar but now the variance of the error is not a constant $\sigma$ and it depends on $x_i$. $$\begin{array}{rcl} \text{E}[y_i|x_i] &=& \alpha + \beta x_i \\ \text{Var}[y_i|x_i] &=& \sigma_{\alpha\alpha} + 2 x_i \sigma_{\alpha\beta} + {x_i}^2 \sigma_{\beta\beta} \end{array}$$ where we use $\sigma_{ij}$ to indicate the variance or covariance. In the case $\alpha = 5, \beta = 3, \sigma_{\alpha\alpha} = \sigma_{\beta\beta} = 0.1, \sigma_{\alpha\beta} = 0.05$ and $X \sim Unif(-1,1)$ it will look like: It is a linear relationship with heteroscedasticity. We can estimate the variance and covariance of $\alpha$ and $\beta$ based on this heteroscedastic dependency of the variance of the error (which we might approximate with the residuals). Method of moments The method that you used is the method of moments. You expressed the expectation of $\tilde X_i Y_i^2$ for the population in terms of coefficients. Then you replace in the expressions the expectation for the population by the average for the sample to obtain estimates of the coefficients. (In your particular execution there is a small mistake by assuming that the expectation of $X_i^2\tilde X_i$ is zero. This is only true when $X_i$ is distributed symmetrical around zero) Least squares method A simpler approach might be to model the expectation of the square of the errors as a linear function of terms of $X_i$ and estimate it with the least squares method applied to the residuals. 
(It is simpler because it is straightforward and it will help to generalize the problem) The errors are distributed as: $$E(\epsilon_i^2) = \text{Var}[y_i|x_i] = \sigma_{\alpha\alpha} + 2 x_i \sigma_{\alpha\beta} + {x_i}^2 \sigma_{\beta\beta}$$ library(MASS) fit <- function(cMu, cSigma, n) { ### generate data coef <- mvrnorm(n,cMu,cSigma) X <- runif(n,-1,1) Y <- coef[,1]+coef[,2]*X ### model means mod <- lm(Y ~ X) res <- mod$residuals ### model covariance tabel modr <- lm(res^2 ~ 1 + I(2*X) + I(X^2)) ### using glm as a slight improvement to lm ### as the variance is not homogeneous but scales with mu ### (note that res^2 follows a chi-square distribution ### for which we have var = 2*mu) modr <- glm(res^2 ~ 1 + I(2*X) + I(X^2), family = quasi(link = "identity", variance = "mu"), start = coef(modr)) ### fitcov fitcov <- mean(X*Y^2)/(2*mean(X^2)) - prod(coef(mod)) ### return result ret <- c(coef(modr),fitcov) names(ret) <- c("alpha", "cov", "beta", "fitcov") return(ret) } ### settings set.seed(1) n <- 10^4 cSigma <- matrix(c(0.1,0.05, 0.05,0.1), 2, byrow = 1) cMu <- c(5,3) ### generate data and perform fitting fit(cMu,cSigma, 10^5) Maximum Likelihood I guess that you might also maximize the likelihood function (or a quasi-likelihood function if you do not see a particular distribution and stick to a formulation with only known conditional mean and variance). But I can not find a closed solution for this. It can be done computationally. I leave this as a separate problem as writing a function that solves it might make this answer too cluttered. In addition, I am not sure whether it will be much faster or more accurate than solving it with the method of moments or fitting the square of the residuals. Generalising Your problem with two equations can be solved in the same way. Now we have two sets of residuals $r_{1i}$ and $r_{2i}$ whose expectation of the products depend on the covariance of the $\alpha_1$, $\alpha_2$, $\beta_1$ and $\beta_2$. $$\begin{array}{rcl} \text{E}[r_{1i}r_{2i}|x_i] &=& \sigma_{\alpha_1\alpha_2} + x_i (\sigma_{\alpha_1\beta_2} + \sigma_{\alpha_2\beta_1}) + {x_i}^2 \sigma_{\beta_1\beta_2} \end{array}$$ You have indeed the term $(\sigma_{\alpha_1\beta_2} + \sigma_{\alpha_2\beta_1})$ whose terms can not be separated with this single equation. The dependency of $r_{1i}r_{2i}$ or $y_{1i}y_{2i}$ on $x_i$ is dependent on the sum but not the independent terms. 
If you would measure the $y_{1i}$ and ${y_{2i}}$ based on the same correlated $\alpha_1$, $\alpha_2$, $\beta_1$ and $\beta_2$, but with different $x_i$ (say $x_{1i}$ and $x_{2i}$) then you could separate the variables $$\begin{array}{rcl} \text{E}[r_{1i}r_{2i}|x_i] &=& \sigma_{\alpha_1\alpha_2} + x_{2i} \sigma_{\alpha_1\beta_2} + x_{1i} \sigma_{\alpha_2\beta_1} + x_{1i}x_{2i} \sigma_{\beta_1\beta_2} \end{array}$$ For what it is worth, here's a code that would compute the covariances (based on the linear fit of the residual term): fit2 <- function(cMu, cSigma, n) { ### generate data coef <- mvrnorm(n,cMu,cSigma) X <- runif(n,-1,1) Y1 <- coef[,1]+coef[,2]*X Y2 <- coef[,3]+coef[,4]*X ### model means mod1 <- lm(Y1 ~ X) res1 <- mod1$residuals mod2 <- lm(Y2 ~ X) res2 <- mod2$residuals ### model covariance tabel modr <- lm(I(res1*res2) ~ 1 + I(X) + I(X^2)) ### return result ret <- c(coef(modr)) names(ret) <- c("alpha-alpha", "alpha-beta", "beta-beta") return(ret) } ### settings set.seed(1) n <- 10^4 # a1, b1 , a2, b2 cSigma <- matrix(c(0.10,0.05,0.10,0.10, 0.05,0.10,0.10,0.10, 0.10,0.10,0.40,0.20, 0.10,0.10,0.20,0.40), 4, byrow = 1) # a1 , b1 , a2 , b2 cMu <- c( 5, 3, 5, 3) ### generate data and perform fitting fit2(cMu,cSigma, n)
Identifying the correlation between a slope and a level A slightly different characterisation of the problem Instead of these separate variations/errors in $\alpha$ and $\beta$ you could describe the variance of $Y_i$ directly. A common way (which you see
50,275
Identifying the correlation between a slope and a level
Let me not answer the question exactly as I posed it, but to answer a very related question (and in fact, the question I am interested in in the first place). Suppose we have potential outcomes $Y_1(X), Y_0(X)$, and suppose that for each individual has a default level of $X$, say $X_d$, in the absence of experimental intervention. Suppose that the experimenter can shock $X$ from its default level by some randomized quantity $\varepsilon$ so that locally, the experimenter can (at least in principle) use an experiment to measure any quantity taking the form $$\frac{d g(Y_1(X_d + \varepsilon), Y_2(X_d + \varepsilon))}{d\varepsilon}\bigg |_{\varepsilon = 0}$$ for some pre-specified function $g(Y_1, Y_2)$. Translating my original question into this framework, the claim that $\mathbb E[\alpha_i\beta_i]$ (and hence $\mathbb C\mathrm{ov}(\alpha_i, \beta_i)$) is identified follows from the observation that taking $g(Y_1, Y_2) = \frac12 Y_1^2$ gives $$\frac{d g(Y_1(X_d + \varepsilon), Y_2(X_d + \varepsilon))}{d\varepsilon}\bigg |_{\varepsilon = 0} = \underbrace{Y_1}_{"\alpha_i"}\cdot \underbrace{\frac{d Y_1}{d\varepsilon}\bigg|_{\varepsilon = 0}}_{"\beta_i"}$$ where I am being a bit loose with notation on the RHS above. Similarly, when we take $g(Y_1, Y_2) = Y_1 Y_2$, we have $$\frac{d g(Y_1(X_d + \varepsilon), Y_2(X_d + \varepsilon))}{d\varepsilon}\bigg |_{\varepsilon = 0} = \underbrace{Y_1 \frac{d Y_2}{d X}}_{"\alpha_{i,1}\cdot \beta_{i,2}"} + \underbrace{Y_2 \frac{d Y_1}{d X}}_{"\alpha_{i,2}\cdot \beta_{i,1}"}$$ The question now, is whether the individual terms on the RHS above can be separately identified (instead of just identifying their sum) using some function $g$. Since we just showed that their sum can be identified, this is equivalent to asking if there exists some $g$ such that $$\frac{d g(Y_1(X_d + \varepsilon), Y_2(X_d + \varepsilon))}{d\varepsilon}\bigg |_{\varepsilon = 0} = \frac{\partial g}{\partial Y_2}\frac{d Y_2}{dX} + \frac{\partial g}{\partial Y_1}\frac{d Y_1}{d X} = Y_1 \frac{d Y_2}{d X} - Y_2 \frac{d Y_1}{d X}$$ at all $Y_1, Y_2$. But this requires $g$ to satisfy the following system of PDE $$\frac{\partial g}{\partial Y_1} = -Y_2,\quad \frac{\partial g}{\partial Y_2} = Y_1$$ Such a $g$ cannot exist on any neighborhood. To see why, fix some point $(a,b)$, and consider $g(a,b)$ compared to $g(a+\delta, b + \delta)$ for any $\delta > 0$. WLOG, we can normalize $g(a,b) = 0$. Using the PDE system above, we can try to evaluate $g(a+\delta,b+\delta)$ two different ways. First, we could first integrate along the first dimension and then integrate along the second dimension to get $g(a+\delta,b+\delta) = -\delta b + \delta (a + \delta)$. Second, we could first integrate along the second dimension and then integrate along the first to get $g(a+\delta,b+\delta) = - \delta(b + \delta) + \delta a$. Setting these two expresssings for $g(a+\delta,b+\delta)$ and simplifying, we arrive at the contradiction $\delta = - \delta$. I am not sure that this completely rules out any way of identifying the cross-equation correlations separately, but it certainly suggests that no treatment-effect based approach on its own will work.
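The non-existence of such a $g$ can also be seen in one line from the equality of mixed partial derivatives (Clairaut's theorem), assuming $g$ is twice continuously differentiable: $$\frac{\partial^2 g}{\partial Y_2\,\partial Y_1} = \frac{\partial}{\partial Y_2}(-Y_2) = -1 \neq 1 = \frac{\partial}{\partial Y_1}(Y_1) = \frac{\partial^2 g}{\partial Y_1\,\partial Y_2},$$ which is the same contradiction as the path-integration argument above, so no such $g$ exists on any open set.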
Identifying the correlation between a slope and a level
Let me not answer the question exactly as I posed it, but to answer a very related question (and in fact, the question I am interested in in the first place). Suppose we have potential outcomes $Y_1(X
Identifying the correlation between a slope and a level Let me not answer the question exactly as I posed it, but to answer a very related question (and in fact, the question I am interested in in the first place). Suppose we have potential outcomes $Y_1(X), Y_0(X)$, and suppose that for each individual has a default level of $X$, say $X_d$, in the absence of experimental intervention. Suppose that the experimenter can shock $X$ from its default level by some randomized quantity $\varepsilon$ so that locally, the experimenter can (at least in principle) use an experiment to measure any quantity taking the form $$\frac{d g(Y_1(X_d + \varepsilon), Y_2(X_d + \varepsilon))}{d\varepsilon}\bigg |_{\varepsilon = 0}$$ for some pre-specified function $g(Y_1, Y_2)$. Translating my original question into this framework, the claim that $\mathbb E[\alpha_i\beta_i]$ (and hence $\mathbb C\mathrm{ov}(\alpha_i, \beta_i)$) is identified follows from the observation that taking $g(Y_1, Y_2) = \frac12 Y_1^2$ gives $$\frac{d g(Y_1(X_d + \varepsilon), Y_2(X_d + \varepsilon))}{d\varepsilon}\bigg |_{\varepsilon = 0} = \underbrace{Y_1}_{"\alpha_i"}\cdot \underbrace{\frac{d Y_1}{d\varepsilon}\bigg|_{\varepsilon = 0}}_{"\beta_i"}$$ where I am being a bit loose with notation on the RHS above. Similarly, when we take $g(Y_1, Y_2) = Y_1 Y_2$, we have $$\frac{d g(Y_1(X_d + \varepsilon), Y_2(X_d + \varepsilon))}{d\varepsilon}\bigg |_{\varepsilon = 0} = \underbrace{Y_1 \frac{d Y_2}{d X}}_{"\alpha_{i,1}\cdot \beta_{i,2}"} + \underbrace{Y_2 \frac{d Y_1}{d X}}_{"\alpha_{i,2}\cdot \beta_{i,1}"}$$ The question now, is whether the individual terms on the RHS above can be separately identified (instead of just identifying their sum) using some function $g$. Since we just showed that their sum can be identified, this is equivalent to asking if there exists some $g$ such that $$\frac{d g(Y_1(X_d + \varepsilon), Y_2(X_d + \varepsilon))}{d\varepsilon}\bigg |_{\varepsilon = 0} = \frac{\partial g}{\partial Y_2}\frac{d Y_2}{dX} + \frac{\partial g}{\partial Y_1}\frac{d Y_1}{d X} = Y_1 \frac{d Y_2}{d X} - Y_2 \frac{d Y_1}{d X}$$ at all $Y_1, Y_2$. But this requires $g$ to satisfy the following system of PDE $$\frac{\partial g}{\partial Y_1} = -Y_2,\quad \frac{\partial g}{\partial Y_2} = Y_1$$ Such a $g$ cannot exist on any neighborhood. To see why, fix some point $(a,b)$, and consider $g(a,b)$ compared to $g(a+\delta, b + \delta)$ for any $\delta > 0$. WLOG, we can normalize $g(a,b) = 0$. Using the PDE system above, we can try to evaluate $g(a+\delta,b+\delta)$ two different ways. First, we could first integrate along the first dimension and then integrate along the second dimension to get $g(a+\delta,b+\delta) = -\delta b + \delta (a + \delta)$. Second, we could first integrate along the second dimension and then integrate along the first to get $g(a+\delta,b+\delta) = - \delta(b + \delta) + \delta a$. Setting these two expresssings for $g(a+\delta,b+\delta)$ and simplifying, we arrive at the contradiction $\delta = - \delta$. I am not sure that this completely rules out any way of identifying the cross-equation correlations separately, but it certainly suggests that no treatment-effect based approach on its own will work.
Identifying the correlation between a slope and a level Let me not answer the question exactly as I posed it, but to answer a very related question (and in fact, the question I am interested in in the first place). Suppose we have potential outcomes $Y_1(X
50,276
Why are we checking the difference between q(z|x), and p(z|x) in variational encoders?
The first thing to appreciate about VAEs is that they are not just some magical deep generative model but that they are a special case of the Auto-Encoding Variational Bayes algorithm for doing variational Bayesian inference in generative models. What that means is we consider a setup with a dataset $\mathcal{D} =\{x_i\}_{i=1}^n$ where we assume the generative process for an instance $x_i$ is done in two steps: First sample a latent variable $z_i$ from some prior $p(z)$ Second sample $x_i$ from the distribution $p(x|z)$ Note that we don't observe the values of $z_i$ in our dataset so we're interested in working out for each example in our data what's the posterior distribution over its corresponding latent variable given we've seen all the data - i.e. what is $p(z|x)$? So in a sense then this is the main objective in VAEs - to uncover $p(z|x)$ and not exactly to model p(x), that is just a byproduct of the algorithm. To answer your first question then, we are interested in the $D_{KL}[q(z|x)||(p(z|x)]$ because what we're doing is accepting that $p(z|x)$ is intractable and we won't be able to calculate it, but we're trying to approximate it with $q(z|x)$ (which here is using a neural network) and we want them to be as similar as possible, hence why we're minimising that divergence. For your second question on why this should perform well you just need to take a look at the form of the ELBO given here: $$ \mathcal{L}(\theta,\phi) = \underbrace{\mathbb{E}_{q_\phi}[\log p_\theta(x|z)]}_{\text{Reconstruction Loss}} - \underbrace{D_{KL}[q_\phi(z|x)||p(z)]}_{\text{KL Regulariser}} $$ The first term is the expected likelihood of the data given the latent variables, it should make sense that is a sensible thing to maximise as this makes the data more likely under the model while the second term maintains the `auto-encoder' part and keeps the approximate distribution close to the prior. For your final question I would just say that I don't think it's particularly useful to think of the two network as ``mirroring'' each other. Rather they both model a different probability distribution and it so happens that we can train them jointly and effectively through this auto encoding algorithm. Comparing $q(z|x)$ and $p(z|x)$ then is measuring how good the encoder is at approximating the true posterior distribution over the latent variables, and so doesn't really say anything about the decoder. Also you repeatedly mention $q(x|z)$ but this is just not something that is part of the model since we kind of assume the structure of $p_\theta(x|z)$ and just learn that directly (it's the decoder) so we don't worry about some $q$ variational distribution.
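For completeness, the standard identity tying the two parts of this answer together (textbook VAE material, nothing specific to this question): $$\log p_\theta(x) = \underbrace{\mathbb{E}_{q_\phi(z|x)}\left[\log p_\theta(x|z)\right] - D_{KL}\left[q_\phi(z|x)\,\|\,p(z)\right]}_{\mathcal{L}(\theta,\phi)} + D_{KL}\left[q_\phi(z|x)\,\|\,p_\theta(z|x)\right].$$ Since $\log p_\theta(x)$ does not depend on $\phi$, maximising the ELBO $\mathcal{L}(\theta,\phi)$ over $\phi$ is exactly the same as minimising $D_{KL}[q_\phi(z|x)\,\|\,p_\theta(z|x)]$, which is why the tractable ELBO is a sensible surrogate for the intractable posterior-matching objective.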
Why are we checking the difference between q(z|x), and p(z|x) in variational encoders?
The first thing to appreciate about VAEs is that they are not just some magical deep generative model but that they are a special case of the Auto-Encoding Variational Bayes algorithm for doing variat
Why are we checking the difference between q(z|x), and p(z|x) in variational encoders? The first thing to appreciate about VAEs is that they are not just some magical deep generative model but that they are a special case of the Auto-Encoding Variational Bayes algorithm for doing variational Bayesian inference in generative models. What that means is we consider a setup with a dataset $\mathcal{D} =\{x_i\}_{i=1}^n$ where we assume the generative process for an instance $x_i$ is done in two steps: First sample a latent variable $z_i$ from some prior $p(z)$ Second sample $x_i$ from the distribution $p(x|z)$ Note that we don't observe the values of $z_i$ in our dataset so we're interested in working out for each example in our data what's the posterior distribution over its corresponding latent variable given we've seen all the data - i.e. what is $p(z|x)$? So in a sense then this is the main objective in VAEs - to uncover $p(z|x)$ and not exactly to model p(x), that is just a byproduct of the algorithm. To answer your first question then, we are interested in the $D_{KL}[q(z|x)||(p(z|x)]$ because what we're doing is accepting that $p(z|x)$ is intractable and we won't be able to calculate it, but we're trying to approximate it with $q(z|x)$ (which here is using a neural network) and we want them to be as similar as possible, hence why we're minimising that divergence. For your second question on why this should perform well you just need to take a look at the form of the ELBO given here: $$ \mathcal{L}(\theta,\phi) = \underbrace{\mathbb{E}_{q_\phi}[\log p_\theta(x|z)]}_{\text{Reconstruction Loss}} - \underbrace{D_{KL}[q_\phi(z|x)||p(z)]}_{\text{KL Regulariser}} $$ The first term is the expected likelihood of the data given the latent variables, it should make sense that is a sensible thing to maximise as this makes the data more likely under the model while the second term maintains the `auto-encoder' part and keeps the approximate distribution close to the prior. For your final question I would just say that I don't think it's particularly useful to think of the two network as ``mirroring'' each other. Rather they both model a different probability distribution and it so happens that we can train them jointly and effectively through this auto encoding algorithm. Comparing $q(z|x)$ and $p(z|x)$ then is measuring how good the encoder is at approximating the true posterior distribution over the latent variables, and so doesn't really say anything about the decoder. Also you repeatedly mention $q(x|z)$ but this is just not something that is part of the model since we kind of assume the structure of $p_\theta(x|z)$ and just learn that directly (it's the decoder) so we don't worry about some $q$ variational distribution.
Why are we checking the difference between q(z|x), and p(z|x) in variational encoders? The first thing to appreciate about VAEs is that they are not just some magical deep generative model but that they are a special case of the Auto-Encoding Variational Bayes algorithm for doing variat
50,277
Law of the norm of the empirical mean of uniforms on the sphere?
Densities of short uniform walks in higher dimensions could be relevant. It discusses random walks with $n$ steps each of length $1$ in $\mathbb{R}^d$, where each step is taken in a uniformly random direction. Theorem 2.1 states that the probability density function of the distance to the origin in $d \ge 2$ dimensions after $n \ge 2$ steps is, for $x \gt 0$, $$p_n(\nu; x) = \frac{2^{-\nu}}{\nu!}\int_0^\infty (tx)^{\nu+1}J_\nu(tx)j_\nu^n(t)\:dt$$ Where: $\nu = \frac{d}{2}-1$; $J_\nu$ denotes the Bessel function of the first kind; $j_\nu(x) = \nu!(\frac{2}{x})^\nu J_\nu(x)$ is the "normalized Bessel function of the first kind". The norm of the mean of the steps is just the final distance from $0$ divided by $n$. So you could plug $x = ny$ into the formula (and multiply by $n$, the Jacobian of this change of variables) to get the value of the pdf of $M$ at $y$. But it looks like it could be computationally difficult. (Theorem 2.10 apparently gives a "computationally more accessible" expression for the pdf.)
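A rough R sketch (my own quadrature, not code from the paper) of evaluating the quoted Theorem 2.1 formula numerically; the lower cutoff, the truncation of the integral and the number of subdivisions are ad hoc, and the oscillatory Bessel integrand can be numerically delicate for large $n$ or $x$. The last function converts from the distance to the mean $M$ using the Jacobian factor $n$ mentioned above.
p_dist <- function(x, n, d) {    # density of the distance after n steps, per the quoted formula
  nu <- d/2 - 1
  jnorm <- function(t) gamma(nu + 1) * (2/t)^nu * besselJ(t, nu)     # normalized Bessel j_nu
  integrand <- function(t) (t*x)^(nu + 1) * besselJ(t*x, nu) * jnorm(t)^n
  2^(-nu) / gamma(nu + 1) * integrate(integrand, lower = 1e-8, upper = 200, subdivisions = 2000L)$value
}
p_mean <- function(y, n, d) n * p_dist(n * y, n, d)   # density of M = distance/n at y
p_mean(0.1, n = 20, d = 4)                            # example: d = 4 dimensions, n = 20 steps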
Law of the norm of the empirical mean of uniforms on the sphere?
Densities of short uniform walks in higher dimensions could be relevant. It discusses random walks with $n$ steps each of length $1$ in $\mathbb{R}^d$, where each step is taken in a uniformly random d
Law of the norm of the empirical mean of uniforms on the sphere? Densities of short uniform walks in higher dimensions could be relevant. It discusses random walks with $n$ steps each of length $1$ in $\mathbb{R}^d$, where each step is taken in a uniformly random direction. Theorem 2.1 states that the probability density function of the distance to the origin in $d \ge 2$ dimensions after $n \ge 2$ steps is, for $x \gt 0$, $$p_n(\nu; x) = \frac{2^{-\nu}}{\nu!}\int_0^\infty (tx)^{\nu+1}J_v(tx)j_\nu^n(t)\:dt$$ Where: $\nu = \frac{d}{2}-1$ $J_\nu$ means the Bessel function of the first kind $j_\nu(x) = \nu!(\frac{2}{x})^\nu J_\nu(x)$ is the "normalized Bessel function of the first kind" The norm of the mean of the steps is just the final distance from $0$ divided by $n$. So you could theoretically plug $x = ny$ into the formula to get the value of the pdf you care about at $y$. But looks like it could be computationally difficult. (Theorem 2.10 apparently gives a "computationally more accessible" expression for the pdf.)
Law of the norm of the empirical mean of uniforms on the sphere? Densities of short uniform walks in higher dimensions could be relevant. It discusses random walks with $n$ steps each of length $1$ in $\mathbb{R}^d$, where each step is taken in a uniformly random d
50,278
Law of the norm of the empirical mean of uniforms on the sphere?
Similarity with a rubber band model. This problem got me to think of the model for a 'rubber band' (See for instance wikipedia or section 3-7 in Herbert B. Callen's thermodynamics and an introduction to thermostatistics). With a bit of hand waving: consider the distribution of only one axis/component of the $U_i$ (the marginal distribution along a single axis) approximate this with a normal distribution (the approximation becomes more accurate for larger $n$ and also for larger $d$) because the distribution needs to be spherically symmetric we assume the other components to be identical and independently distributed. The squared length will be the sum of $d$ squared normal distributed variables. Then the distribution of $M$ is approximately a scaled $\chi$ distributed variable with $d$ degrees of freedom. The scaling factor is $1/\sqrt{dn}$ To know the scaling we need to know the variance of the distribution of the component. The contributions of each $U_{i}$ follow some sort of beta distribution (it is only the projection of $U_{i}$ that matters) $$f(x) \propto (1-x^2)^\frac{d-3}{2} \quad \text{for $-1 \leq x \leq 1$}$$ or with $t = x^2$ $$f(t) \propto t^\frac{1}{2}(1-t)^\frac{d-3}{2}\quad \text{for $0 \leq t \leq 1$}$$ This means that the mean of $T$ or the variance of $X$ is equal to $1/d$ (the mean of a beta distribution with $\alpha = 1/2$ and $\beta = (d-1)/2$). For the mean of $n$ of those variables, you get that the variance is $1/(dn)$. Simulation Below is an example for the case of $n=20$ and $d=4$ This is computed with the following r-code: n = 20 d = 4 ### sample from sphere simsphere <- function(d) { x <- rnorm(d) x <- x/sqrt(sum(x^2)) return(x) } ### add 'n' times U and compute absolute value getM <- function(n,d) { xv <- replicate(n,simsphere(d)) vector <- rowSums(xv) norm <- sum(vector^2)^0.5 return(norm/n) } ### chi distriution dchi <- function(x,nu) { x^{nu-1}*exp(-1/2*x^2)/(2^((nu/2)-1)*gamma(nu/2)) } dchi <- Vectorize(dchi) ### simulate and plot histogram M <- replicate(10^4, getM(n,d)) hist(M, breaks = seq(0,5,0.01), freq = 0, xlim = c(0,0.5)) ### add approximation based on chi distribution v = 1/d/n ms <- seq(0,10,0.01) lines(ms,dchi(ms/sqrt(v),d)/sqrt(v), col = 1, lwd = 2)
Law of the norm of the empirical mean of uniforms on the sphere?
Similarity with a rubber band model. This problem got me to think of the model for a 'rubber band' (See for instance wikipedia or section 3-7 in Herbert B. Callen's thermodynamics and an introduction
Law of the norm of the empirical mean of uniforms on the sphere? Similarity with a rubber band model. This problem got me to think of the model for a 'rubber band' (See for instance wikipedia or section 3-7 in Herbert B. Callen's thermodynamics and an introduction to thermostatistics). With a bit of hand waving: consider the distribution of only one axis/component of the $U_i$ (the marginal distribution along a single axis) approximate this with a normal distribution (the approximation becomes more accurate for larger $n$ and also for larger $d$) because the distribution needs to be spherically symmetric we assume the other components to be identical and independently distributed. The squared length will be the sum of $d$ squared normal distributed variables. Then the distribution of $M$ is approximately a scaled $\chi$ distributed variable with $d$ degrees of freedom. The scaling factor is $1/\sqrt{dn}$ To know the scaling we need to know the variance of the distribution of the component. The contributions of each $U_{i}$ follow some sort of beta distribution (it is only the projection of $U_{i}$ that matters) $$f(x) \propto (1-x^2)^\frac{d-3}{2} \quad \text{for $-1 \leq x \leq 1$}$$ or with $t = x^2$ $$f(t) \propto t^\frac{1}{2}(1-t)^\frac{d-3}{2}\quad \text{for $0 \leq t \leq 1$}$$ This means that the mean of $T$ or the variance of $X$ is equal to $1/d$ (the mean of a beta distribution with $\alpha = 1/2$ and $\beta = (d-1)/2$). For the mean of $n$ of those variables, you get that the variance is $1/(dn)$. Simulation Below is an example for the case of $n=20$ and $d=4$ This is computed with the following r-code: n = 20 d = 4 ### sample from sphere simsphere <- function(d) { x <- rnorm(d) x <- x/sqrt(sum(x^2)) return(x) } ### add 'n' times U and compute absolute value getM <- function(n,d) { xv <- replicate(n,simsphere(d)) vector <- rowSums(xv) norm <- sum(vector^2)^0.5 return(norm/n) } ### chi distriution dchi <- function(x,nu) { x^{nu-1}*exp(-1/2*x^2)/(2^((nu/2)-1)*gamma(nu/2)) } dchi <- Vectorize(dchi) ### simulate and plot histogram M <- replicate(10^4, getM(n,d)) hist(M, breaks = seq(0,5,0.01), freq = 0, xlim = c(0,0.5)) ### add approximation based on chi distribution v = 1/d/n ms <- seq(0,10,0.01) lines(ms,dchi(ms/sqrt(v),d)/sqrt(v), col = 1, lwd = 2)
Law of the norm of the empirical mean of uniforms on the sphere? Similarity with a rubber band model. This problem got me to think of the model for a 'rubber band' (See for instance wikipedia or section 3-7 in Herbert B. Callen's thermodynamics and an introduction
50,279
Law of the norm of the empirical mean of uniforms on the sphere?
I'm not sure if you want an upper or lower bound. You mention both in the question. The easiest way to get a very loose upper bound on this problem is to use Markov's inequality. Just in case it's been overlooked. $$M=\left\|\frac{1}{n}\sum_{i=1}^n U_i \right\|.$$ $$P\left(M \ge \sqrt{\frac{\lambda}{n}}\right) \le \sqrt{\frac{n}{\lambda}}E[M]$$ by Markov's inequality. Then by Jensen's inequality, $$E[M] < \sqrt{E[M^2]}.$$ So, the final bound is $$P\left(M \ge \sqrt{\frac{\lambda}{n}}\right) \le \frac{\sqrt{\sum_i \sum_j E\left[ \langle U_i,U_j \rangle \right]}}{\sqrt{\lambda \ n}}.$$ Finding the expectation of the inner product of two independent $U_i$ and $U_j$ shouldn't be too hard. Again, this will be a very loose bound, but it's better than no bound (sometimes).
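For what it is worth, $E[\langle U_i,U_j \rangle] = 0$ for $i \neq j$ (independence plus $E[U_i]=0$ by symmetry) and $E[\langle U_i,U_i \rangle] = 1$, so the double sum equals $n$ and the bound reduces to $P(M \ge \sqrt{\lambda/n}) \le 1/\sqrt{\lambda}$. A small simulation sketch (my own, with arbitrary choices of $d$, $n$ and $\lambda$) comparing this bound with the empirical tail probability:
set.seed(1)
d <- 5; n <- 50; lambda <- 9
rsphere <- function(d) { x <- rnorm(d); x / sqrt(sum(x^2)) }                # uniform on the unit sphere
M <- replicate(2e4, sqrt(sum(rowSums(replicate(n, rsphere(d)))^2)) / n)     # norm of the mean
mean(M >= sqrt(lambda / n))   # empirical tail probability
1 / sqrt(lambda)              # the (loose) Markov-type upper bound, here 1/3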
Law of the norm of the empirical mean of uniforms on the sphere?
I'm not sure if you want an upper or lower bound. You mention both in the question. The easiest way to get a very loose upper bound on this problem is to use Markov's inequality. Just in case it's bee
Law of the norm of the empirical mean of uniforms on the sphere? I'm not sure if you want an upper or lower bound. You mention both in the question. The easiest way to get a very loose upper bound on this problem is to use Markov's inequality. Just in case it's been overlooked. $$M=\left\|\frac{1}{n}\sum_{i=1}^n U_i \right\|.$$ $$P\left(M \ge \sqrt{\frac{\lambda}{n}}\right) \le \sqrt{\frac{n}{\lambda}}E[M]$$ by Markov's inequality. Then by Jensen's inequality, $$E[M] < \sqrt{E[M^2]}.$$ So, the final bound is $$P\left(M \ge \sqrt{\frac{\lambda}{n}}\right) \le \frac{\sqrt{\sum_i \sum_j E\left[ \langle U_i,U_j \rangle \right]}}{\sqrt{\lambda \ n}}.$$ Finding the expectation of the inner product of two independent $U_i$ and $U_j$ shouldn't be too hard. Again, this will be a very loose bound, but it's better than no bound (sometimes).
Law of the norm of the empirical mean of uniforms on the sphere? I'm not sure if you want an upper or lower bound. You mention both in the question. The easiest way to get a very loose upper bound on this problem is to use Markov's inequality. Just in case it's bee
50,280
CDF*[1-CDF]/PDF --- name? integrable?
Logistic curve One relationship might be with logistic growth which is based on the following differential equation: $$f'(x) = f(x)(1-f(x))$$ But then for $F(x)$ and inhomogeneous (using some variable rate $g(x)$) $$F'(x) = g(x) F(x)(1-F(x))$$ So if we express the CDF as a logistic curve $$F(u) = \frac{1}{1+e^{-u}}$$ where the parameter $u$ is an integral of $q(x)^{-1}$ (where $m$ is the median for which $F(m) =0.5$) $$F(x) = \frac{1}{1+e^{-\int_{m}^x q(t)^{-1} dt}}$$ Then $$f(x) = F'(x) = F(x)(1-F(x)) q(x)^{-1}$$ or like your expression $$q(x) = \frac{F(x)(1-F(x))}{f(x)}$$ A related relationship is that the log odds (odds based on the CDF) are $$\log\left(\frac{F(x)}{1-F(x)}\right) = \int_{m}^x q(t)^{-1} dt$$ And $q(x)$ is the inverse of the rate at which the log odds increase. Order distribution The terms like $F(x)\cdot(1-F(x))$ also occur in the distribution of order statistics. But I am a bit puzzled how you can get this $f(x)$ in the denominator. There are not so many expression where you use $1/f(x)$.
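A quick numerical sanity check of the logistic connection (my own sketch, not from the answer): for a logistic distribution with scale $s$ we have $f = F(1-F)/s$, so the ratio $F(1-F)/f$ is constant and equal to $s$.
x <- seq(-5, 5, by = 0.5)
s <- 2
q <- plogis(x, scale = s) * (1 - plogis(x, scale = s)) / dlogis(x, scale = s)
all.equal(q, rep(s, length(x)))   # TRUE up to floating point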
CDF*[1-CDF]/PDF --- name? integrable?
Logistic curve One relationship might be with logistic growth which is based on the following differential equation: $$f'(x) = f(x)(1-f(x))$$ But then for $F(x)$ and inhomogeneous (using some variable
CDF*[1-CDF]/PDF --- name? integrable? Logistic curve One relationship might be with logistic growth which is based on the following differential equation: $$f'(x) = f(x)(1-f(x))$$ But then for $F(x)$ and inhomogeneous (using some variable rate $g(x)$) $$F'(x) = g(x) F(x)(1-F(x))$$ So if we express the CDF as a logistic curve $$F(u) = \frac{1}{1+e^{-u}}$$ where the parameter $u$ is an integral of $q(x)^{-1}$ (where $m$ is the median for which $F(m) =0.5$) $$F(x) = \frac{1}{1+e^{-\int_{m}^x q(t)^{-1} dt}}$$ Then $$f(x) = F'(x) = F(x)(1-F(x)) q(x)^{-1}$$ or like your expression $$q(x) = \frac{F(x)(1-F(x))}{f(x)}$$ A related relationship is that the log odds (odds based on the CDF) are $$\log\left(\frac{F(x)}{1-F(x)}\right) = \int_{m}^x q(t)^{-1} dt$$ And $q(x)$ is the inverse of the rate at which the log odds increase. Order distribution The terms like $F(x)\cdot(1-F(x))$ also occur in the distribution of order statistics. But I am a bit puzzled how you can get this $f(x)$ in the denominator. There are not so many expression where you use $1/f(x)$.
CDF*[1-CDF]/PDF --- name? integrable? Logistic curve One relationship might be with logistic growth which is based on the following differential equation: $$f'(x) = f(x)(1-f(x))$$ But then for $F(x)$ and inhomogeneous (using some variable
50,281
What is the application for using the `Boltzmann Machines`?
The idea behind the Boltzmann Machine is that it represents a closed system in which energy flows from one part to another, i.e. heat dissipation, and it models the increase in the entropy of that closed system - while the system starts with relatively low entropy (i.e. when there is a separation between 'hot' and 'cold' parts), it tends towards the state of equilibrium, or high entropy (i.e. all the items have the same energy, or 'heat'). These networks are a type of Hopfield network, which are used for associative memory modelling (here is the link from wikipedia). As for the reason why it is not so useful - it is due to the infeasibility of exact training and inference in the general case: when there is an edge between every pair of nodes, even a relatively small network has on the order of n(n-1) connections. The answer to the first question is also in part an answer to the second, but in addition, Restricted Boltzmann Machines (RBMs) model more tractable and realistic scenarios, in which the nodes are divided into two separate groups (visible and hidden units) and the weighted edges run only between the two groups.
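For concreteness (standard textbook material, not something taken from this answer): a Boltzmann machine assigns an energy to each joint configuration and turns it into a Boltzmann/Gibbs distribution; in the restricted variant the visible units $v$ and hidden units $h$ are exactly the two groups mentioned above, with weights $W$ only between the groups: $$E(v,h) = -a^\top v - b^\top h - v^\top W h, \qquad P(v,h) = \frac{e^{-E(v,h)}}{\sum_{v',h'} e^{-E(v',h')}}.$$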
What is the application for using the `Boltzmann Machines`?
The idea behind the Boltzmann Machine is that it represents a closed system where an energy flows from one part to another, i.e. heat dissipation, and models the decrease in the entropy of a closed mo
What is the application for using the `Boltzmann Machines`? The idea behind the Boltzmann Machine is that it represents a closed system where an energy flows from one part to another, i.e. heat dissipation, and models the decrease in the entropy of a closed model - while the model starts with relatively low entropy (i.e. when there is a separation between 'hot' and 'cold' parts), it tends to the state of equilibrium, or high entropy (i.e. all the items of the same energy, or 'heat'). Those networks are a type of Hopfield networks, which are used for associative memory modelling (here is the link from wikipedia). To the reason why it is not so useful - it is due to the infeasibility of its' solution in the general case, when there are an edge between each node to the other, even a relatively small network will have n(n-1) connections. The answer for the first question is also in part an answer for the second, but in addition to it, Constrained Boltzmann Networks may model a more realistic scenarios, where nodes are divided into two separate groups, and the edges between them may represent weight of data flow, etc.
What is the application for using the `Boltzmann Machines`? The idea behind the Boltzmann Machine is that it represents a closed system where an energy flows from one part to another, i.e. heat dissipation, and models the decrease in the entropy of a closed mo
50,282
How can I show that two random variables are independent if their mutual information is 0?
Here's my take. In the discrete case, $$ \operatorname{I}(X;Y) = \sum_{y \in \mathcal Y} \sum_{x \in \mathcal X} {p_{(X,Y)}(x,y) \log{ \left(\frac{p_{(X,Y)}(x,y)}{p_X(x)\,p_Y(y)} \right) } } $$ So $I(X;Y)=0$ when, at every point, either ${p_{(X,Y)}(x,y)} = 0$ or $\log{ \left(\frac{p_{(X,Y)}(x,y)}{p_X(x)\,p_Y(y)} \right) } \;=0$. It's worth noting why the individual terms cannot cancel each other out: although each term of the sum can be positive or negative, the mutual information as a whole is a Kullback-Leibler divergence, $\operatorname{I}(X;Y) = D_{KL}\left[p_{(X,Y)} \,\|\, p_X\,p_Y\right] \ge 0$, and by Gibbs' inequality a KL divergence is zero only when the two distributions agree wherever $p_{(X,Y)}(x,y) > 0$. The first case says the joint probability is 0 at those points. It's essentially saying that the two events can't happen together, so those points contribute nothing. The second case requires that $\frac{p_{(X,Y)}\,(x,y)}{p_X(x)\,p_Y(y)} = 1$, which implies $p_{(X,Y)}(x,y) = p_X(x)\,p_Y(y)$ for all cases where both events can happen together; combined with normalization this gives $p_{(X,Y)} = p_X\,p_Y$ everywhere, which is independence. The logic is the same in the continuous case, with sums replaced by integrals.
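To spell out the nonnegativity/equality step via Jensen's (Gibbs') inequality, with the sum restricted to the support of the joint distribution: $$-\operatorname{I}(X;Y) = \sum_{p_{(X,Y)}(x,y)>0} p_{(X,Y)}(x,y)\,\log\frac{p_X(x)\,p_Y(y)}{p_{(X,Y)}(x,y)} \;\le\; \log \sum_{p_{(X,Y)}(x,y)>0} p_X(x)\,p_Y(y) \;\le\; \log 1 = 0.$$ Equality in the (strictly concave) Jensen step forces the ratio $p_X\,p_Y / p_{(X,Y)}$ to be constant on the support, and equality in the last step forces that constant to be $1$, which is exactly the pointwise condition used above.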
How can I show that two random variables are independent if their mutual information is 0?
Here's my take. In the discrete case, $$ \operatorname{I}(X;Y) = \sum_{y \in \mathcal Y} \sum_{x \in \mathcal X} {p_{(X,Y)}(x,y) \log{ \left(\frac{p_{(X,Y)}(x,y)}{p_X(x)\,p_Y(y)} \right) } } $$ S
How can I show that two random variables are independent if their mutual information is 0? Here's my take. In the discrete case, $$ \operatorname{I}(X;Y) = \sum_{y \in \mathcal Y} \sum_{x \in \mathcal X} {p_{(X,Y)}(x,y) \log{ \left(\frac{p_{(X,Y)}(x,y)}{p_X(x)\,p_Y(y)} \right) } } $$ So $I(X;Y)=0$ when, at all points, either: ${p_{(X,Y)}(x,y)} = 0$, or $\log{ \left(\frac{p_{(X,Y)}(x,y)}{p_X(x)\,p_Y(y)} \right) } \;=0$ It's worth noting that MI is always negative or 0, and we can't get a positive logarithm at any point, because the joint probability is always a subset of the marginals, so we don't need to worry about sums cancelling each other out; just that each $\operatorname{I}(X;Y)=0\, \forall X,Y$. (And of course, by definition, any data point can't reduce the total amount of information.) The first case says the joint probability is 0 in those cases. It's essentially saying that the two events can't happen together, so I think we're ok just treating these as impossible or undefined events. The second case requires that $\frac{p_{(X,Y)}\,(x,y)}{p_X(x)\,p_Y(y)} = 1$, which implies $p_{(X,Y)} = p_X(x)\,p_Y(y)$, which is independence, for all cases where both events can happen together. The logic is the same in the continuous case.
How can I show that two random variables are independent if their mutual information is 0? Here's my take. In the discrete case, $$ \operatorname{I}(X;Y) = \sum_{y \in \mathcal Y} \sum_{x \in \mathcal X} {p_{(X,Y)}(x,y) \log{ \left(\frac{p_{(X,Y)}(x,y)}{p_X(x)\,p_Y(y)} \right) } } $$ S
50,283
Likelihood ratio test for $H_0:(\mu_1,\mu_2)=(0,0)$ vs $H_1:(\mu_1,\mu_2) \neq (0,0)$
Your solution seems to be correct. The strange shape of your parameter space (it's not an open subset of $\mathbb R^2$) creates this ambiguity in the final result: each combination of $(p,q,r)$ gives a different LRT. Some have more power when $\mu_1=0$, some have more power when $\mu_2=0$, and some have more power when $\mu_1\neq0,\mu_2\neq0$, but all of them are valid LRTs with significance level $\alpha$.
Likelihood ratio test for $H_0:(\mu_1,\mu_2)=(0,0)$ vs $H_1:(\mu_1,\mu_2) \neq (0,0)$
Your solution seems to be correct. The strange shape of your parameter space (it's not a open subset of $\mathbb R^2$) creates this ambiguity in the final result: each combination of $(p,q,r)$ gives a
Likelihood ratio test for $H_0:(\mu_1,\mu_2)=(0,0)$ vs $H_1:(\mu_1,\mu_2) \neq (0,0)$ Your solution seems to be correct. The strange shape of your parameter space (it's not an open subset of $\mathbb R^2$) creates this ambiguity in the final result: each combination of $(p,q,r)$ gives a different LRT. Some have more power when $\mu_1=0$, some have more power when $\mu_2=0$, and some have more power when $\mu_1\neq0,\mu_2\neq0$, but all of them are valid LRTs with significance level $\alpha$.
Likelihood ratio test for $H_0:(\mu_1,\mu_2)=(0,0)$ vs $H_1:(\mu_1,\mu_2) \neq (0,0)$ Your solution seems to be correct. The strange shape of your parameter space (it's not a open subset of $\mathbb R^2$) creates this ambiguity in the final result: each combination of $(p,q,r)$ gives a
50,284
Best method of quantifying probability of new datum belonging to either of two distanced normal distributions?
Your question is a bit vague and it seems the your figure does not quite match the rest of the problem. I think you may have put parts of two similar problems together in your Question. I'll do my best to give most of the information you requested. You say the means of the two normal populations are unknown with $\mu_A \le \mu_B,$ and I will assume the two population standard deviations are also unknown. If it is somehow known that the two population standard deviations are equal, $\sigma_A = \sigma_B,$ then a pooled 2-sample t test of $H_0: \mu_A = \mu_B$ against $H_1: \mu_A < \mu_B$ is appropriate. I would use your example with values for the two sample means and standard deviations, but I would need to know the two sample sizes in order to show how to do the test. So I will use data with somewhat similar sample means and standard deviations, and with sample sizes $n_A = n_B = 40,$ as sampled in R below: set.seed(2020) x.a = rnorm(40, 104, 10) x.b = rnorm(40, 160, 10) summary(x.a); length(x.a); sd(x.a) Min. 1st Qu. Median Mean 3rd Qu. Max. 73.61 100.93 106.45 105.76 113.37 128.35 [1] 40 [1] 12.00162 summary(x.b); length(x.b); sd(x.b) Min. 1st Qu. Median Mean 3rd Qu. Max. 142.2 154.1 160.7 160.2 165.1 192.0 [1] 40 [1] 9.79959 stripchart(list(x.a, x.b), pch="|", ylim=c(.5, 2.5)) From the summaries and the stripchart, we can see that all values of sample A are below all values of sample B. There is a complete separation of the two samples. With such complete separation, there is little doubt that the pooled t test will reject the null hypothesis. [The parameter var.eq=T calls for the pooled test; without it, R does a Welch two-sample t test when two samples are provided.] t.test(x.a, x.b, alt="less", var.eq=T) Two Sample t-test data: x.a and x.b t = -22.228, df = 78, p-value < 2.2e-16 alternative hypothesis: true difference in means is less than 0 95 percent confidence interval: -Inf -50.37798 sample estimates: mean of x mean of y 105.7579 160.2139 You can find the formulas for doing a pooled two-sample t test in a basic statistics text. Maybe you should find the formulas and use the sample sizes, means and standard deviations to compute the pooled variance estimate, often called $s_p^2$ and then the test statistic $T = 22.228.$ If you choose to do the test at the significance level $\alpha = 1\%$ then the critical value $c = 2.429$ of the test can be found from a printed table of Student's t distributions on the row for degrees of freedom $DF = n_A + n_B - 2 = 38$ or by using software as below. qt(.99, 38) [1] 2.428568 You asked for a value that separates the two distributions. Such a value is $c$ and there are probability $0.01$ of rejecting $H_0$ when it is true. Because the two distributions are so widely separated the probability of failing to reject $H_0$ when it is false is very small. This means that we reject the null hypothesis at the 1% level because $T =22.23 > 2.429.$ [If you know about P-values, the very small P-value (below 1%) is another indication to reject $H_0.$ Ordinarily, you can't get exact P-values from printed distribution tables.] Note: If the distributions were as in the figure you show, then you might choose the critical value to be $c = 1.5$ Then if you were to rely on a single observation to decide between A and B, The probability that an observation from A would fall above $c$ is $0.0668,$ which could be found by standardizing and using printed tables of the standard normal cumulative distribution function. 
This probability can be found using R (where pnorm is a normal CDF). 1 - pnorm(1.5, 0, 1) [1] 0.0668072 Similarly, or by symmetry, the probability that a single observation from B would fall below $c$ is the same. pnorm(1.5, 3, 1) [1] 0.0668072 Addendum, per Comment. Your intuition that it is important to take variability into account is correct. Here is output from a recent release of Minitab, which explicitly shows the pooled standard deviation. First, I use the summarized data in your Question, and assume both samples are of size 20. Two-Sample T-Test and CI Sample N Mean StDev SE Mean 1 20 103.72 8.62 1.9 2 20 161.2 13.6 3.0 Difference = μ (1) - μ (2) Estimate for difference: -57.45 95% upper bound for difference: -51.37 T-Test of difference = 0 (vs <): T-Value = -15.94 P-Value = 0.000 DF = 38 Both use Pooled StDev = 11.3976 Now, to illustrate the role variability plays, I multiply the sample standard deviations by 10, which amounts to multiplying the variances by 100, and keep the sample sizes the same. [Of course these are no longer real data, but we can pretend.] The effect is to make the denominator of the $T$-statistic larger, so that the statistic itself is smaller. Now the P-value is $0.06 > 0.05,$ so the null hypothesis is not rejected at the 5% level. Two-Sample T-Test and CI SE Sample N Mean StDev Mean 1 20 103.7 86.2 19 2 20 161 136 30 Difference = μ (1) - μ (2) Estimate for difference: -57.4 95% upper bound for difference: 3.3 T-Test of difference = 0 (vs <): T-Value = -1.59 P-Value = 0.060 DF = 38 Both use Pooled StDev = 113.9756
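Since the answer suggests computing the pooled statistic from the summary numbers by hand, here is a small helper (my own sketch; the function name pooled_t is not from any package) that does it from the sample sizes, means and standard deviations alone; plugging in the Minitab summaries above reproduces its T-value, degrees of freedom and pooled standard deviation up to rounding.
pooled_t <- function(n1, m1, s1, n2, m2, s2) {
  df  <- n1 + n2 - 2
  sp2 <- ((n1 - 1) * s1^2 + (n2 - 1) * s2^2) / df      # pooled variance estimate
  t   <- (m1 - m2) / sqrt(sp2 * (1/n1 + 1/n2))         # pooled two-sample t statistic
  c(t = t, df = df, pooled.sd = sqrt(sp2))
}
pooled_t(20, 103.72, 8.62, 20, 161.2, 13.6)   # summaries from the first Minitab run above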
Best method of quantifying probability of new datum belonging to either of two distanced normal dist
Your question is a bit vague and it seems the your figure does not quite match the rest of the problem. I think you may have put parts of two similar problems together in your Question. I'll do my bes
Best method of quantifying probability of new datum belonging to either of two distanced normal distributions? Your question is a bit vague and it seems the your figure does not quite match the rest of the problem. I think you may have put parts of two similar problems together in your Question. I'll do my best to give most of the information you requested. You say the means of the two normal populations are unknown with $\mu_A \le \mu_B,$ and I will assume the two population standard deviations are also unknown. If it is somehow known that the two population standard deviations are equal, $\sigma_A = \sigma_B,$ then a pooled 2-sample t test of $H_0: \mu_A = \mu_B$ against $H_1: \mu_A < \mu_B$ is appropriate. I would use your example with values for the two sample means and standard deviations, but I would need to know the two sample sizes in order to show how to do the test. So I will use data with somewhat similar sample means and standard deviations, and with sample sizes $n_A = n_B = 40,$ as sampled in R below: set.seed(2020) x.a = rnorm(40, 104, 10) x.b = rnorm(40, 160, 10) summary(x.a); length(x.a); sd(x.a) Min. 1st Qu. Median Mean 3rd Qu. Max. 73.61 100.93 106.45 105.76 113.37 128.35 [1] 40 [1] 12.00162 summary(x.b); length(x.b); sd(x.b) Min. 1st Qu. Median Mean 3rd Qu. Max. 142.2 154.1 160.7 160.2 165.1 192.0 [1] 40 [1] 9.79959 stripchart(list(x.a, x.b), pch="|", ylim=c(.5, 2.5)) From the summaries and the stripchart, we can see that all values of sample A are below all values of sample B. There is a complete separation of the two samples. With such complete separation, there is little doubt that the pooled t test will reject the null hypothesis. [The parameter var.eq=T calls for the pooled test; without it, R does a Welch two-sample t test when two samples are provided.] t.test(x.a, x.b, alt="less", var.eq=T) Two Sample t-test data: x.a and x.b t = -22.228, df = 78, p-value < 2.2e-16 alternative hypothesis: true difference in means is less than 0 95 percent confidence interval: -Inf -50.37798 sample estimates: mean of x mean of y 105.7579 160.2139 You can find the formulas for doing a pooled two-sample t test in a basic statistics text. Maybe you should find the formulas and use the sample sizes, means and standard deviations to compute the pooled variance estimate, often called $s_p^2$ and then the test statistic $T = 22.228.$ If you choose to do the test at the significance level $\alpha = 1\%$ then the critical value $c = 2.429$ of the test can be found from a printed table of Student's t distributions on the row for degrees of freedom $DF = n_A + n_B - 2 = 38$ or by using software as below. qt(.99, 38) [1] 2.428568 You asked for a value that separates the two distributions. Such a value is $c$ and there are probability $0.01$ of rejecting $H_0$ when it is true. Because the two distributions are so widely separated the probability of failing to reject $H_0$ when it is false is very small. This means that we reject the null hypothesis at the 1% level because $T =22.23 > 2.429.$ [If you know about P-values, the very small P-value (below 1%) is another indication to reject $H_0.$ Ordinarily, you can't get exact P-values from printed distribution tables.] 
Note: If the distributions were as in the figure you show, then you might choose the critical value to be $c = 1.5$ Then if you were to rely on a single observation to decide between A and B, The probability that an observation from A would fall above $c$ is $0.0668,$ which could be found by standardizing and using printed tables of the standard normal cumulative distribution function. This probability can be found using R (where pnorm is a normal CDF). 1 - pnorm(1.5, 0, 1) [1] 0.0668072 Similarly, or by symmetry, the probability that a single observation from B would fall below $c$ is the same. pnorm(1.5, 3, 1) [1] 0.0668072 Addendum, per Comment. Your intuition that it is important to take variability into account is correct. Here is output from a recent release of Minitab, which explicitly shows the pooled standard deviation. First, I use the summarized data in your Question, and assume both samples are of size 20. Two-Sample T-Test and CI Sample N Mean StDev SE Mean 1 20 103.72 8.62 1.9 2 20 161.2 13.6 3.0 Difference = μ (1) - μ (2) Estimate for difference: -57.45 95% upper bound for difference: -51.37 T-Test of difference = 0 (vs <): T-Value = -15.94 P-Value = 0.000 DF = 38 Both use Pooled StDev = 11.3976 Now, to illustrate the role variability plays, I multiply the sample standard deviations by 10, which amounts to multiplying the variances by 100, and keep the sample sizes the same. [Of course these are no longer real data, but we can pretend.] The effect is to make the denominator of the $T$-statistic larger, so that the statistic itself is smaller. Now the P-value is $0.06 > 0.05,$ so the null hypothesis is not rejected at the 5% level. Two-Sample T-Test and CI SE Sample N Mean StDev Mean 1 20 103.7 86.2 19 2 20 161 136 30 Difference = μ (1) - μ (2) Estimate for difference: -57.4 95% upper bound for difference: 3.3 T-Test of difference = 0 (vs <): T-Value = -1.59 P-Value = 0.060 DF = 38 Both use Pooled StDev = 113.9756
50,285
Best method of quantifying probability of new datum belonging to either of two distanced normal distributions?
Here the aim "is to find a threshold value between the two distributions such that a new datum can be assigned to $A$ if its value falls below this central point, and to $B$ if it lies above, with a certain level of accuracy". Suppose we measure the accuracy as (probability of wrong assignment for data in $A$) + (probability of wrong assignment for data in $B$). Then we are looking for a threshold value $t$ to minimize $$P[A>t\ |\ A\sim N(m_A,s_A)] + P[B<t\ |\ B\sim N(m_B,s_B)]$$ The derivative of this with respect to $t$ should be 0: $$\frac{-e^{-(t-m_A)^2/(2s_A^2)}}{\sqrt{2\pi} s_A} +\frac{e^{-(t-m_B)^2/(2s_B^2)}}{\sqrt{2\pi} s_B} = 0$$ This can be solved analytically with some algebra and the quadratic formula: $$(t-m_A)^2/(2s_A^2) + \ln s_A= (t-m_B)^2/(2s_B^2) + \ln s_B$$ $$t = \frac{b\pm\sqrt{b^2-ac}}{a},\text{ where}$$ $$a=\frac{1}{s_A^2}-\frac{1}{s_B^2},\ \ b=\frac{m_A}{s_A^2}-\frac{m_B}{s_B^2},\ \ c=\frac{m_A^2}{s_A^2}-\frac{m_B^2}{s_B^2}+\ln\left(\frac{s_A^2}{s_B^2}\right)$$ For the particular numerical values in the question, this gives $a=0.00807$, $b=0.527$, $c=3.84$, and $t=126.9$ as the option in between $m_A$ and $m_B$. The measure of accuracy is $0.95\%$. For other ways of measuring accuracy we would get other values of $t$; this is one way of getting a reasonable value.
50,286
Population vs. Data-Generating Process
In the population approach, the model that you are fitting to the data can potentially be a reduced form of the true DGP. A crude example: Say $X_t$ is a time-series that actually grows with time, with white noise ($e_t$). Specifically, let the DGP be $X_t = a_0+a_1t+a_2t^2+e_t$ $\implies X_{t-1} = a_0+a_1(t-1)+a_2(t-1)^2 + e_{t-1}$ $\implies X_{t-1} = X_t-a_1+a_2-2a_2t + e_{t-1}-e_t$ $\implies \Delta X_t = a_1-a_2+2a_2t + \Delta e_t$ $\implies \Delta X_t-\Delta X_{t-1}=2a_2 + \Delta e_t - \Delta e_{t-1}$ Therefore, the DGP can be simplified to the following MA(1)-type process (with $u_t \equiv \Delta e_t$ and $\beta = 2a_2$): $Z_t \equiv \Delta X_t-\Delta X_{t-1}=\beta + \Delta u_t$ So the random variable $Z_t$ has a particular distribution with mean value $\beta$, which will be estimated from the given observations. And while that is true, it is not unique to the original DGP, because information about at least $a_1$ is permanently lost. If, on the other hand, you model $\Delta X_t-\Delta X_{t-1}=\beta + u_t$ as the DGP, you are saying that the realized value of $X_t$ is, by process design, a function of the last two periods' values - which is very different from our earlier case. So the two approaches, I think, will have different implications for interpretation and causal inference.
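A quick R sketch of this point, under arbitrary illustrative coefficients a0, a1, a2: simulating the quadratic-trend DGP and double-differencing gives a series whose sample mean is close to 2*a2, while a1 is no longer recoverable from the differenced series.
set.seed(1)
n  <- 500; a0 <- 2; a1 <- 0.5; a2 <- 0.1
tt <- 1:n
x  <- a0 + a1 * tt + a2 * tt^2 + rnorm(n)  # the assumed DGP
z  <- diff(diff(x))                        # Z_t = X_t - 2*X_{t-1} + X_{t-2}
mean(z)                                    # should be close to 2 * a2 = 0.2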
50,287
$R^2$ of Logistic Regression Without Intercept?
(Throughout, I assume the labels are $0$ and $1$, not $\pm 1$.) Let's look at what $R^2$ means in the setting where we use a linear regression with an intercept. While there are many equivalent definitions in this setting, the definition that I find to apply in the most generality is comparing the performance of our model to the performance of a baseline model that only has an intercept and always predicts the pooled mean of $y$. $$ R^2 = 1 - \dfrac{ \sum_{i=1}^n\left( y_i - \hat y_i \right)^2 }{ \sum_{i=1}^n\left( y_i - \bar y \right)^2 } = 1 - \dfrac{ \sum_{i=1}^n\left( y_i - \hat y_i \right)^2 }{ \sum_{i=1}^n\left( y_i - y_{baseline} \right)^2 } $$ When we assume a Gaussian conditional distribution, the numerator is equivalent to the negative log likelihood (in the technical sense) of our model, and the denominator is equivalent to the negative log likelihood of that baseline model. $$ R^2 = 1-\dfrac{-NLL(model)}{-NLL(baseline)}=1-\dfrac{NLL(model)}{NLL(baseline)}= 1 - \dfrac{ \sum_{i=1}^n\left( y_i - \hat y_i \right)^2 }{ \sum_{i=1}^n\left( y_i - y_{baseline} \right)^2 }\\ = 1 - \dfrac{ \sum_{i=1}^n\left( y_i - \hat y_i \right)^2 }{ \sum_{i=1}^n\left( y_i - 0 \right)^2 }\\ = 1 - \dfrac{ \sum_{i=1}^n\left( y_i - \hat y_i \right)^2 }{ \sum_{i=1}^n y_i^2 } $$ When we get rid of the intercept and also set the remaining coefficients to zero, that "baseline" is zero. We use a baseline model that always predicts zero. $$ R^2 = 1-\dfrac{NLL(model)}{NLL(baseline)}$$ Now let's turn to logistic regression. There is a different likelihood, but we can still consider the negative log likelihood of our model compared to the negative log likelihood of a baseline model that always predicts a log-odds of zero, equivalent to always predicting a probability of $0.5$. Negative log likelihood is equivalent to the log-loss. $$ -\dfrac{1}{n}\sum_{i=1}^n\bigg[ y_i\log(\hat y_i) + (1-y_i)\log(1-\hat y_i) \bigg] $$ Consequently, a likelihood-based $R^2$ (akin to McFadden's $R^2$ for a model with an intercept) for a no-intercept logistic regression, would be: $$R^2_{\text{likelihood-based}}=1-\dfrac{-\dfrac{1}{n}\sum_{i=1}^n\bigg[ y_i\log(\hat y_i) + (1-y_i)\log(1-\hat y_i) \bigg]}{ -\dfrac{1}{n}\sum_{i=1}^n\bigg[ y_i\log(0.5) + (1-y_i)\log(0.5) \bigg]}\\ =1-\dfrac{\sum_{i=1}^n\bigg[ y_i\log(\hat y_i) + (1-y_i)\log(1-\hat y_i) \bigg]}{ \sum_{i=1}^n\bigg[ y_i\log(0.5) + (1-y_i)\log(0.5) \bigg]} $$ Alternatively, we can use the regular $R^2$ formula, with our $y_{baseline}=0.5$. This would be equivalent to evaluating the model on the Brier score instead of the likelihood. $$ R^2_{\text{Brier-based}}= 1 - \dfrac{ \sum_{i=1}^n\left( y_i - \hat y_i \right)^2 }{ \sum_{i=1}^n\left( y_i - y_{baseline} \right)^2 }= 1 - \dfrac{ \sum_{i=1}^n\left( y_i - \hat y_i \right)^2 }{ \sum_{i=1}^n\left( y_i - 0.5 \right)^2 } $$ EDIT I don't agree with everything on this page by UCLA, but it is a good reference for $R^2$-style metrics for logistic regression. In particular, I dislike considering classification accuracy ("Count" on that page) to be $R^2$-style, since it makes no comparison to a baseline value. The final metric, adjusted count, does make a comparison to a baseline model, however.
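A small R sketch of both versions for a no-intercept logistic regression; the data below are simulated purely for illustration.
set.seed(1)
x <- rnorm(200)
y <- rbinom(200, 1, plogis(1.2 * x))       # simulated outcome, true model has no intercept
fit  <- glm(y ~ 0 + x, family = binomial)  # no-intercept logistic regression
phat <- fitted(fit)
# likelihood-based R^2 against a baseline that always predicts probability 0.5
ll_model    <- sum(y * log(phat) + (1 - y) * log(1 - phat))
ll_baseline <- sum(y * log(0.5) + (1 - y) * log(0.5))
R2_loglik <- 1 - ll_model / ll_baseline
# Brier-based R^2 against the same baseline
R2_brier <- 1 - sum((y - phat)^2) / sum((y - 0.5)^2)
c(R2_loglik = R2_loglik, R2_brier = R2_brier)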
50,288
Hypothesis test: numeric vs. ranked
Here are simulations comparing two samples of size 15 from $\mathsf{Norm}(0,1)$ and $\mathsf{Norm}(1,1),$ respectively. My simulation shows that the pooled t test has better power than the two-sample Wilcoxon test, which is well-known, and that neither test has power $0.8.$ set.seed(2020) pv = replicate(10^4, t.test(rnorm(15,0,1), rnorm(15,1,1), var.eq=T)$p.val) mean(pv <= 0.05) [1] 0.7525 set.seed(911) pv = replicate(10^4, wilcox.test(rnorm(15,0,1), rnorm(15,1,1))$p.val) mean(pv <= 0.05) [1] 0.7118 It seems that I have misunderstood what you are doing, that your simulation code is wrong, or both. It might be helpful to have a clearer explanation of what you are doing with ranks, and to see the inner loop of your program where you compute power. [It makes no sense to take averages of ranks for the two samples separately: for example, if $n=15,$ then both sets of ranks would run from 1 through 15 and both sets of ranks would always sum to 120. You might want to look at what the Wilcoxon rank-sum test does with ranks of the two samples.] Here are simulations with sample sizes $n=25$ and difference $0.5$ in population means. In neither case is power anywhere near 80%. set.seed(1066) pv = replicate(10^4, t.test(rnorm(25,0,1), rnorm(25,.5,1), var.eq=T)$p.val) mean(pv <= 0.05) [1] 0.3978 set.seed(1776) pv = replicate(10^4, wilcox.test(rnorm(25,0,1), rnorm(25,.5,1))$p.val) mean(pv <= 0.05) [1] 0.3867 Note: For pooled t tests, here is an online 'power and sample size' calculator that works for reasonable parameters.
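As a cross-check on these simulated powers, R's built-in power.t.test gives the exact power of the pooled t test for the two scenarios (there is no built-in analogue for the Wilcoxon test):
power.t.test(n = 15, delta = 1,   sd = 1, sig.level = 0.05)  # power approximately 0.75
power.t.test(n = 25, delta = 0.5, sd = 1, sig.level = 0.05)  # power approximately 0.40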
50,289
Why do we need to triangulate a convex polygon in order to sample uniformly from it?
The tldr answer is that in the square case, there are multiple ways to express a "deep" interior point as a convex combination of the vertices, but only one way for points that are nearer to the vertices. To give a bit more detail and explain why this happens for squares and not triangles, let me first re-express your algorithm in more formal language. Given an integer $n$, define the unit simplex as $\Delta_n=\{(x_1,\dots, x_n)\in \mathbb{R}^n: \sum_i x_i=1, x_i\geq 0\}$. Now, let $p_1, \dots, p_n$ denote the vertices of a polygon. Your algorithm first samples a vector $(w_1, \dots, w_n)$ uniformly from the simplex (using a normalized exponential distribution is a standard way to do this), and then transforms this to the point $\sum_i w_i p_i$. The first question is why does this work for triangles (that is, when $n=3$)? Well, it is quite easy to see geometrically that $\Delta_3$ is simply an equilateral triangle with vertices at the points $e_1=(1,0,0), e_2=(0,1,0), e_3=(0,0,1)$. In fact, $\Delta_3$ lives in the two-dimensional subspace of $\mathbb{R}^3$ defined by the constraint $x_1+x_2+x_3=1$. Let's call this subspace $H$. Now, I claim that there is an invertible affine-linear map $T: H\to\mathbb{R}^2$ such that $T(e_i)=p_i$ (where recall $p_i$ are the vertices of the triangle we wish to sample from). Why does this claim imply that the random variable $\sum_i w_ip_i$ has a uniform distribution over the triangle? Well, by construction the random variable $(w_1,w_2,w_3)=\sum_i w_ie_i$ has a uniform distribution on $\Delta_3$; by the standard change-of-variables formula, the transformed variable $T(\sum_i w_ie_i)=\sum_i w_iT(e_i)=\sum_i w_ip_i$ will have a density that is obtained by multiplying the density of $\sum_i w_ie_i$ by a factor involving the Jacobian of T. Since $T$ is affine-linear its Jacobian is constant, therefore the density of the transformed variable is also constant. I leave it as an exercise to (1) derive an explicit expression for $T$ and (2) to justify the step $T(\sum_i w_ie_i)=\sum_i w_iT(e_i)$ (this step would be immediate for a linear map, but recall $T$ is actually affine linear). The next question is what is the difference for the quadrilateral case ($n=4$)? Well, now the simplex $\Delta_4$ is actually three-dimensional (it is a solid tetrahedron), so it's impossible to invertibly map it to a 2-dimensional quadrilateral. It's fairly clear geometrically that in this case the density of the convex combination at some value $y:=\sum_i w_ip_i$ will be proportional to the area of the set $C_y:=\{w\in \Delta_4: \sum_i w_ip_i=y\}$, that is, the area of the set of all possible convex coefficients that produce the value $y$ (via the coarea formula). So now we are led to consider the geometry of the sets $C_y$ for different $y$ in the quadrilateral. For simplicity, let's assume that the vertices are given by $(0,0), (1,0), (0,1), (1,1)$. If we take $y=(0,0)$, then it is obvious that there is only one way to write $y$ as a convex combination of the vertices of the square, namely $y=1*(0,0)+0*(1,0)+0*(0,1)+0*(1,1)$. Thus $C_y=\{(1,0,0,0)\}$ consists of a single point. On the other hand, consider the center of mass of the square: $y=(.5,.5)$. 
There are now multiple ways to write this as a convex combination of the vertices, such as: $$y=.25*(0,0)+.25*(1,0)+.25*(0,1)+.25*(1,1)$$ $$y=.5*(0,0)+0*(1,0)+0*(0,1)+.5*(1,1)$$ $$y=({\frac 1 2}-\alpha)*(0,0)+\alpha*(1,0)+\alpha*(0,1)+({\frac 1 2}-\alpha)*(1,1), 0\leq \alpha\leq 1/2$$ So in this case $C_y$ is isomorphic to a line segment $[0,1/2]$, which is clearly much bigger than a single point. As an exercise, you can try other values of $y$ and convince yourself that the closer $y$ is to a corner, the fewer ways there are to express it as a convex combination of the vertices.
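A quick R sketch of the non-uniformity for the square: sample weights uniformly on the simplex via normalized exponentials, form the convex combinations, and compare how many points land in a small box at the centre versus an equally sized box near a corner; under a uniform distribution on the square the two fractions would be roughly equal.
set.seed(1)
P <- rbind(c(0, 0), c(1, 0), c(0, 1), c(1, 1))  # vertices of the unit square
w <- matrix(rexp(4 * 1e5), ncol = 4)
w <- w / rowSums(w)                             # uniform on the simplex
pts <- w %*% P                                  # convex combinations of the vertices
in_box <- function(cx, cy, h = 0.05) mean(abs(pts[, 1] - cx) < h & abs(pts[, 2] - cy) < h)
in_box(0.5, 0.5)    # box around the centre: clearly larger than ...
in_box(0.05, 0.05)  # ... a box of the same size near a corner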
50,290
Random number generation for conjugate distribution of beta distribution
Here is an excerpt from our book, Introducing Monte Carlo methods with R, indirectly dealing with this case (by importance sampling). The graph of the target shows a smooth and regular shape for the conjugate, meaning a Normal or Student proposal could maybe be used for accept-reject. An alternative is to use MCMC, e.g. Gibbs sampling. Example 3.6. [p.71-75] When considering an observation $x$ from a beta $\mathcal{B}(\alpha,\beta)$ distribution, $$ x\sim \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)}\,x^{\alpha-1} (1-x)^{\beta-1}\,\mathbb{I}_{[0,1]}(x), $$ there exists a family of conjugate priors on $(\alpha,\beta)$ of the form $$ \pi(\alpha,\beta)\propto \left\{ \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha) \Gamma(\beta)} \right\}^\lambda\, x_0^{\alpha}y_0^{\beta}\,, $$ where $\lambda,x_0,y_0$ are hyperparameters, since the posterior is then equal to $$ \pi(\alpha,\beta|x)\propto \left\{ \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha) \Gamma(\beta)} \right\}^{\lambda+1}\, [x x_0]^{\alpha}[(1-x)y_0]^{\beta}\,. $$ This family of distributions is intractable if only because of the difficulty of dealing with gamma functions. Simulating directly from $\pi(\alpha,\beta|x)$ is therefore impossible. We thus need to use a substitute distribution $g(\alpha,\beta)$, and we can get a preliminary idea by looking at an image representation of $\pi(\alpha,\beta|x)$. If we take $\lambda=1$, $x_0=y_0=.5$, and $x=.6$, the R code for the conjugate is f=function(a,b){ exp(2*(lgamma(a+b)-lgamma(a)-lgamma(b))+a*log(.3)+b*log(.2))} leading to the following picture of the target: The examination of this figure shows that a normal or a Student's $t$ distribution on the pair $(\alpha,\beta)$ could be appropriate. Choosing a Student's $\mathcal{T}(3,\mu,\Sigma)$ distribution with $\mu=(50,45)$ and $$ \Sigma=\left( \begin{matrix}220 &190\\ 190 &180\end{matrix}\right) $$ does produce a reasonable fit. The covariance matrix above was obtained by trial-and-error, modifying the entries until the sample fits well enough: x=matrix(rt(2*10^4,3),ncol=2) #T sample E=matrix(c(220,190,190,180),ncol=2) #Scale matrix image(aa,bb,post,xlab=expression(alpha),ylab=" ") y=t(t(chol(E))%*%t(x)+c(50,45)) points(y,cex=.6,pch=19) (here aa, bb and post denote the grid of $(\alpha,\beta)$ values and the evaluated target used for the image, defined earlier in the book's code). If the quantity of interest is the marginal likelihood, as in Bayesian model comparison (Robert, 2001), \begin{eqnarray*} m(x) &=& \int_{\mathbb R^2_+} f(x|\alpha,\beta)\,\pi(\alpha,\beta)\,\text{d}\alpha \text{d}\beta \\ &=& \dfrac{\int_{\mathbb R^2_+} \left\{ \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha) \Gamma(\beta)} \right\}^{\lambda+1}\, [x x_0]^{\alpha}[(1-x)y_0]^{\beta} \,\text{d}\alpha \text{d}\beta} {x(1-x)\,\int_{\mathbb R^2_+} \left\{ \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha) \Gamma(\beta)} \right\}^{\lambda}\, x_0^{\alpha} y_0^{\beta} \,\text{d}\alpha \text{d}\beta}\,, \end{eqnarray*} we need to approximate both integrals and the same $t$ sample can be used for both since the fit is equally reasonable on the prior surface. 
This approximation \begin{align} \hat m(x) = \sum_{i=1}^n &\left\{ \frac{\Gamma(\alpha_i+\beta_i)}{\Gamma(\alpha_i) \Gamma(\beta_i)} \right\}^{\lambda+1}\, [x x_0]^{\alpha_i}[(1-x)y_0]^{\beta_i}\big/g(\alpha_i,\beta_i) \bigg/ \nonumber\\ &x(1-x)\sum_{i=1}^n \left\{ \frac{\Gamma(\alpha_i+\beta_i)}{\Gamma(\alpha_i) \Gamma(\beta_i)} \right\}^{\lambda}\, x_0^{\alpha_i}y_0^{\beta_i}\big/g(\alpha_i,\beta_i)\,, \end{align} where $(\alpha_i,\beta_i)_{1\le i\le n}$ are $n$ iid realizations from $g$, is straightforward to implement in R: ine=apply(y,1,min) y=y[ine>0,] x=x[ine>0,] normx=sqrt(x[,1]^2+x[,2]^2) f=function(a) exp(2*(lgamma(a[,1]+a[,2])-lgamma(a[,1]) -lgamma(a[,2]))+a[,1]*log(.3)+a[,2]*log(.2)) h=function(a) exp(1*(lgamma(a[,1]+a[,2])-lgamma(a[,1]) -lgamma(a[,2]))+a[,1]*log(.5)+a[,2]*log(.5)) den=dt(normx,3) > mean(f(y)/den)/mean(h(y)/den) [1] 0.1361185 Our approximation of the marginal likelihood, based on those simulations, is thus $0.1361$. Similarly, the posterior expectations of the parameters $\alpha$ and $\beta$ are obtained by > mean(y[,1]*f(y)/den)/mean(f(y)/den) [1] 94.08314 > mean(y[,2]*f(y)/den)/mean(f(y)/den) [1] 80.42832 i.e., are approximately equal to $94.1$ and $80.4$, respectively.
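As a sketch of the MCMC alternative mentioned at the start, here is a minimal random-walk Metropolis sampler in R targeting the same unnormalized posterior; the proposal standard deviation of 10 and the starting point (50, 45) are arbitrary tuning choices, not taken from the book.
set.seed(1)
lpost <- function(a, b)  # log of the target f used above
  2 * (lgamma(a + b) - lgamma(a) - lgamma(b)) + a * log(.3) + b * log(.2)
niter <- 1e4
draws <- matrix(NA_real_, niter, 2)
cur <- c(50, 45)
for (i in 1:niter) {
  prop <- cur + rnorm(2, 0, 10)
  if (all(prop > 0) && log(runif(1)) < lpost(prop[1], prop[2]) - lpost(cur[1], cur[2]))
    cur <- prop
  draws[i, ] <- cur
}
colMeans(draws)  # rough posterior means of alpha and beta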
50,291
Why to downweight Precision in nominator of F-beta when I actually want to upweight Precision?
By setting $\beta$ to $0$ we effectively get "just Precision": in $F_\beta = (1+\beta^2)\frac{\text{Precision}\cdot\text{Recall}}{\beta^2\cdot\text{Precision} + \text{Recall}}$ the Recall terms cancel each other out and we are left with Precision only. By using a lower $\beta$ we do not down-weight Precision in any way; we up-weight it, because as $\beta$ shrinks the $\beta^2\cdot\text{Precision}$ term in the denominator vanishes and $F_\beta$ is pulled towards Precision. If we already know the costs of our FP/FN we can use them directly. $\beta$ itself reflects our trade-off between Recall and Precision in the sense of $\beta=\frac{\text{Recall}}{\text{Precision}}$; therefore the values of $0.5$ and $2$ merely reflect the hypotheses that "we value Precision twice as much as Recall" (for $\beta=0.5$) or "we value Recall twice as much as Precision" (for $\beta=2$). Obviously, if we value them equally, $\beta=1$ and we get our standard $F_1$ score. Sasaki (2007), "The truth of the F-measure", presents this discussion very nicely in a formal manner and grounds it firmly on van Rijsbergen's original 1979 work on Information Retrieval.
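A tiny R illustration of how the weighting plays out; the precision and recall values below are arbitrary.
f_beta <- function(precision, recall, beta)
  (1 + beta^2) * precision * recall / (beta^2 * precision + recall)
precision <- 0.9; recall <- 0.4
sapply(c(0, 0.5, 1, 2), f_beta, precision = precision, recall = recall)
# beta = 0 returns precision exactly; beta = 0.5 pulls the score towards precision,
# while beta = 2 pulls it towards recall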
50,292
Mean-centering variables in glmer
When you use the scale function on a variable, this will apply to the whole variable. That is not what you want here. You need to try to disentangle the within-whale associations from the between-whale associations. One good way to do this is by mean-centering the variable(s) in question by group - that is, by whale in your case. Then you ALSO have to include the mean variable in the model. In R I would suggest using the dplyr package to create the whale means, and the built-in merge function to add the means to your data. Then you simply create the whale mean-centred variable by subtracting the whale mean from it. For example: library(dplyr) mydata <- merge(mydata, mydata %>% group_by(id) %>% summarise(duration_whale_mean = mean(duration))) mydata$duration_mean_cent <- mydata$duration - mydata$duration_whale_mean Then in your model you will have: foraging ~ duration_mean_cent + duration_whale_mean + ... (and you will not use the duration variable in the model).
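To make the last step concrete, the fitted model could then look something like the sketch below; the response foraging, the grouping variable id and the binomial family are assumptions based on the question, so adjust them to your data.
library(lme4)
m <- glmer(foraging ~ duration_mean_cent + duration_whale_mean + (1 | id),
           data = mydata, family = binomial)
summary(m)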
50,293
Derivation of M step for Gaussian mixture model
TL;DR, we have that $$\mu^*_k = \frac{\sum_{i=1}^n W_{ik}x_i}{\sum_{i=1}^n W_{ik}}$$ $$\Sigma^*_k = \frac{\sum_{i=1}^{n} W_{ik}(x_i -\mu^*_k)(x_i - \mu^*_k)'}{\sum_{i=1}^n W_{ik}}$$ In particular, this is the same as finding the MLE of a Gaussian rv, but we weight by $W_{ik}$ for each $k$. See below for the derivation, which is fairly similar to the MLE for a multivariate Gaussian. It may help to approach the E step a bit differently. In your second equation for the E step, you correctly have that you want to maximize $$\sum_{i=1}^{n} \sum_{j=1}^{K} P\left(Z_i=j|X,\theta\right) log \left(\pi_j \frac{1}{|\Sigma_j|^{1/2}(2\pi)^{d/2}} \operatorname{exp}\left(-\frac{1}{2}(x_i-\mu_j)^{T}\Sigma_j^{-1}(x_i-\mu_j)\right)\right)$$ but we can more simply write that as $$\sum_{i=1}^{n} \sum_{j=1}^{K} P\left(Z_i=j|X,\theta\right)\left(log(\pi_j) + log\left(\mathcal{N}(x_i;\mu_j,\Sigma_j)\right)\right)$$ where $\mathcal{N}$ denotes the Gaussian density. Following your notation, let $W_{ij} = P\left(Z_i=j|X,\theta\right)$. As pointed out in the comments, we want to basically take derivatives with respect to $\mu_k$ and $\Sigma_k$ for each $k=1,\dots,K$, set to $0$, and solve to find the maximum. Our first step is to note that, for a given $k$, taking the derivative with respect to either of the $k$-th parameters kills every term with $j\neq k$ in that summation. So maximizing the above is the same as maximizing $$\sum_{i=1}^{n} W_{ik}\left(log(\pi_k) + log\left(\mathcal{N}(x_i;\mu_k,\Sigma_k)\right)\right)$$ A key point of the EM algorithm is precisely that $W_{ik}$ is estimated in the E step, and so we can treat it as a constant here; moreover, since $$W_{ik}\left(log(\pi_k) + log\left(\mathcal{N}(x_i;\mu_k,\Sigma_k)\right)\right) = W_{ik}log(\pi_k) + W_{ik}log\left(\mathcal{N}(x_i;\mu_k,\Sigma_k)\right)$$ for any $i$, we can also ignore the first part, as its derivative with respect to either parameter is zero. So maximizing the E-step objective with respect to the $k$-th parameters is the same as maximizing $$\sum_{i=1}^{n} W_{ik} log\left(\mathcal{N}(x_i;\mu_k,\Sigma_k)\right)$$ Suppose that $\Sigma_k \in \mathbb{R}^{d\times d}$. Then we know that the PDF of the multivariate Gaussian is $$\frac{1}{(2\pi)^{d/2}\det(\Sigma_k)^{1/2}} \exp\left(-\frac{1}{2}(x_i-\mu_k)'\Sigma_k^{-1}(x_i-\mu_k)\right)$$ and taking logs and using the properties of the log (in particular, $log(xz/y) = log(x) + log(z) - log(y)$ and $log(e^{x}) = x$), we have $$log\left(\mathcal{N}(x_i;\mu_k,\Sigma_k)\right) = -\frac{d}{2}log(2\pi) - \frac{1}{2}log(\det(\Sigma_k)) - \frac{1}{2}(x_i-\mu_k)'\Sigma_k^{-1}(x_i-\mu_k)$$ and again, since we are taking derivatives, all the parts that don't include $\mu_k$ or $\Sigma_k$ will drop out, so maximizing $$\sum_{i=1}^{n} W_{ik} log\left(\mathcal{N}(x_i;\mu_k,\Sigma_k)\right)$$ is the same as maximizing $$\sum_{i=1}^{n} W_{ik}\left(-\frac{1}{2}log(\det(\Sigma_k)) - \frac{1}{2}(x_i-\mu_k)'\Sigma_k^{-1}(x_i-\mu_k)\right)$$ which simplifies to $$-\frac{1}{2}\sum_{i=1}^{n} W_{ik}log(\det(\Sigma_k)) - \frac{1}{2}\sum_{i=1}^{n} W_{ik}(x_i-\mu_k)'\Sigma_k^{-1}(x_i-\mu_k)$$ Okay, we are finally ready to take derivatives, but we will need to know some vector and matrix derivative properties, so let's draw from the lovely Matrix Cookbook. From it, we know that $\frac{\partial x'Ax}{\partial x} = 2Ax$ if $x$ does not depend on $A$ and $A$ is symmetric. Since $\Sigma_k^{-1}$ is positive definite, it is in particular symmetric. 
So taking the derivative with respect to $\mu_k$, we get rid of the first part, and for the second part we basically apply the chain rule: we differentiate with respect to $(x_i-\mu_k)$ using the rule above, then take the derivative of $(x_i-\mu_k)$ with respect to $\mu_k$ (which contributes a factor of $-1$), and get that $$\frac{\partial \frac{-1}{2}\sum_{i=1}^{n} W_{ik}(x_i-\mu_k)'\Sigma_k^{-1}(x_i-\mu_k)}{\partial \mu_k} = \sum_{i=1}^n W_{ik}\Sigma_k^{-1}(x_i - \mu_k) = 0 $$ which implies that $$\sum_{i=1}^n W_{ik}\Sigma_k^{-1}\mu_k = \sum_{i=1}^n W_{ik}\Sigma_k^{-1}x_i \implies \mu_k\sum_{i=1}^n W_{ik} = \sum_{i=1}^n W_{ik}x_i$$ and so $\mu_k = \frac{\sum_{i=1}^n W_{ik}x_i}{\sum_{i=1}^n W_{ik}}$. Yay! Now let's do $\Sigma_k$. This one is trickier, but the key facts you need to know are that $\frac{\partial{x'Ax}}{\partial A} = xx'$, and that $\frac{\partial log(\det(A))}{\partial A} = A^{-T}$. Again check out the Matrix Cookbook to see why. We will also use the fact that $$-\frac{1}{2}\sum_{i=1}^{n} W_{ik}log(\det(\Sigma_k)) = \frac{1}{2}\sum_{i=1}^{n} W_{ik}log(\det(\Sigma_k^{-1}))$$ which follows from pushing the $-1$ into the log and using the fact that $det(A^{-1}) = det(A)^{-1}$. Then we can re-write $$-\frac{1}{2}\sum_{i=1}^{n} W_{ik}log(\det(\Sigma_k)) - \frac{1}{2}\sum_{i=1}^{n} W_{ik}(x_i-\mu_k)'\Sigma_k^{-1}(x_i-\mu_k) = \frac{1}{2}\sum_{i=1}^{n} W_{ik}log(\det(\Sigma_k^{-1})) - \frac{1}{2}\sum_{i=1}^{n} W_{ik}(x_i-\mu_k)'\Sigma_k^{-1}(x_i-\mu_k)$$ Taking the derivative with respect to $\Sigma_k^{-1}$, we have $$\frac{\partial \frac{1}{2}\sum_{i=1}^{n} W_{ik}log(\det(\Sigma_k^{-1})) - \frac{1}{2}\sum_{i=1}^{n} W_{ik}(x_i-\mu_k)'\Sigma_k^{-1}(x_i-\mu_k)}{\partial \Sigma_k^{-1}} = \frac{1}{2}\sum_{i=1}^n W_{ik}\Sigma_k - \frac{1}{2}\sum_{i=1}^{n} W_{ik}(x_i -\mu_k)(x_i - \mu_k)'$$ And setting this to zero and solving for $\Sigma_k$ gives us that $$0 = \sum_{i=1}^n W_{ik}\Sigma_k -\sum_{i=1}^{n} W_{ik}(x_i -\mu_k)(x_i - \mu_k)'$$ which simplifies to $$\Sigma_k = \frac{\sum_{i=1}^{n} W_{ik}(x_i -\mu_k)(x_i - \mu_k)'}{\sum_{i=1}^n W_{ik}}$$ using the previously maximized $\mu_k$ here, and we are done!
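A compact R sketch of the resulting M-step updates, given a data matrix X (n x d) and a matrix of responsibilities W (n x K) computed in the E step; both are assumed inputs here.
m_step <- function(X, W) {
  K  <- ncol(W)
  Nk <- colSums(W)                       # effective number of points per component
  pi_k <- Nk / nrow(X)                   # mixing proportions
  mu   <- t(W) %*% X / Nk                # row k is the weighted mean mu_k
  Sigma <- lapply(1:K, function(k) {
    Xc <- sweep(X, 2, mu[k, ])           # center the data at mu_k
    t(Xc * W[, k]) %*% Xc / Nk[k]        # weighted covariance Sigma_k
  })
  list(pi = pi_k, mu = mu, Sigma = Sigma)
}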
50,294
When was Auto-Encoder used for anomaly detection for the first time?
A bit of reference chasing, combined with Google Scholar searches, suggests that the origin was Japkowicz, N., Myers C., & Gluck M., (1995), “A Novelty Detection Approach to Classification”, in Mellish, C. (ed.) The International Joint Conference on Artificial Intelligence (IJCAI-95), Montreal, Canada. IJCAII & Morgan Kaufmann. San Mateo, CA. pp 518-523. The paper is well-cited, demonstrates the utility of autoencoders for anomaly detection (see the CH46 helicopter gearbox fault detection example) and appears to have been treated as the key reference by others at the time. For example, this 1996 thesis states, "Recently, many studies have been released which use either auto-encoding networks or PCA to detect anomalies. The technique often used is based on [Japkowicz et al., 1995]".
50,295
How to compute the Quantile Treatment Effect?
I think they will be the same in a setting like a homogeneous treatment effect world ($Y1=Y0 + m$) or even an affine transformation ($Y1=k \cdot Y0 + m$) that preserves rank (i.e., $k>0$), but in general they will not coincide, so your concern is valid. The second is definitely the more interesting counterfactual quantity, but people will often calculate the first because they lack the individual-level counterfactual data to calculate the second quantity (or a model to fill it in). This shortcut makes some sense if you are not worried about rank reversals. Note that this problem does not arise with means. To see the difference between the two, suppose $Y0$ is symmetric about 0 (say $N(0,1)$), and $Y1=-k \cdot Y0$, and we care about the 95th percentile. The 95th percentile of the treatment effect is very large since those are the folks who go from the negative bottom of the $Y0$ distribution to the positive top of $Y1$. But the difference between the two 95th percentiles will be more modest if $k$ is not too large. It could even be negative if there is shrinkage in the support of $Y1$ (say for $k=0.5$ above), leading you to make the wrong inference about the sign of the 95th percentile of the effect (much less its magnitude). If the treatment is a small change, you might be willing to assume away rank reversals or highly non-linear transformations where the link between the two methods does not hold. Here's a toy example illustrating the last point, with $Y0 \sim N(0,1)$ and $Y1=-0.5 \cdot Y0 + 0$. I have plotted the distributions of $Y0$, $Y1$ and $Y1-Y0$, along with the 95th percentile for each. As you can see, the 95th quantile of the effect is $2.5$, whereas the difference between the 95th quantiles is $0.82 - 1.59 = -0.77$.
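A short R sketch reproducing the toy example numerically (results vary slightly with the seed):
set.seed(1)
y0 <- rnorm(1e5)
y1 <- -0.5 * y0
quantile(y1 - y0, 0.95)                  # 95th percentile of the effect, about 2.5
quantile(y1, 0.95) - quantile(y0, 0.95)  # difference of the 95th percentiles, about -0.8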
50,296
How do you calculate log likelihood p(x) for a VAE?
The IWAE ELBO provides a tighter bound on the true log-likelihood $\log p(x)$, and this bound gets tighter as the number of importance-weighted samples $k$ increases. Therefore, the authors chose a large enough $k$, in this paper $k = 5000$, to approximate the true likelihood of the test data as $\widehat{\log p(x)}$. As such, one can assume that $\log p(x) \approx \widehat{\log p(x)} = \mathcal{L}_{k=5000}$. As pointed out in the comment by @CP Tai, you can find more information about it in the paper from page 7 onwards.
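For concreteness, the $k$-sample bound being estimated is $\mathcal{L}_k = \mathbb{E}_{z_{1:k} \sim q(\cdot \mid x)}\big[\log \tfrac{1}{k}\sum_{i=1}^{k} \tfrac{p(x, z_i)}{q(z_i \mid x)}\big]$. A minimal sketch of how one might compute it for a single test point is below; `encoder` and `decoder_logp` are hypothetical stand-ins for a trained model, not functions from the paper's code:

```python
import numpy as np
from scipy.special import logsumexp

def iwae_log_px(x, encoder, decoder_logp, k=5000):
    """
    Monte Carlo estimate of log p(x) via the k-sample IWAE bound.
    `encoder(x, k)` is assumed to return k latent samples z_i and their
    log q(z_i | x); `decoder_logp(x, z)` is assumed to return log p(x, z_i).
    """
    z, log_q = encoder(x, k)             # z: (k, dim_z), log_q: (k,)
    log_joint = decoder_logp(x, z)       # log p(x, z_i), shape (k,)
    log_w = log_joint - log_q            # importance weights on the log scale
    return logsumexp(log_w) - np.log(k)  # log (1/k) * sum_i w_i
```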
50,297
PCA with polynomial kernel vs single layer autoencoder?
Assuming your polynomial kernel is given, for $d \in \mathbb N, c \in \mathbb R$, as $$K(x,y) = \langle \phi(x)|\phi(y)\rangle = (\langle x|y\rangle + c)^d$$ The functional form of PCA is given by: $$\phi(x')=WW^T\phi(x)\approx\phi(x)$$ In PCA, $W$ is usually a rank-deficient matrix, so that the representation is lower dimensional than the original data. On the other hand, an autoencoder with activation function $f$ in the hidden layer and identity activation in the output layer has the following functional form: $$x' = W_2^Tf(W_1^Tx) \approx x$$ If we assume common activation functions, it can be seen that, at least in their functional forms, the kernel PCA and the autoencoder won't ever coincide for non-trivial polynomial order $d$. A deep autoencoder wouldn't match the kernel PCA in its functional form either, though it could match the PCA manifold, since deep neural networks are universal function approximators. A simple example can be sketched with $d=2$. $$ \begin{align} K(x,y) &= (\langle x|y\rangle + c)^2 = \langle x|y\rangle^2 + 2 \langle x|y\rangle c + c^2 \\ &= \left(\sum_i^p x_iy_i\right)^2 + 2\left(\sum_i^p x_iy_ic\right)+c^2\\ &= \left(\sum_i^p x_iy_i\left(\sum_j^p x_jy_j\right)\right) + 2\left(\sum_i^p x_iy_ic\right)+c^2\\ &= \left(\sum_i^p\left(\sum_j^p x_iy_ix_jy_j\right)\right) + 2\left(\sum_i^p x_iy_ic\right)+c^2\\ &= \sum_i^p x_i^2y_i^2 + \sum_{i=2}^p\sum_{j=1}^{i-1} \left(\sqrt2x_ix_j\right)\left(\sqrt2y_iy_j\right) + \sum_i^p \left(\sqrt{2c}x_i\right)\left(\sqrt{2c}y_i\right)+ c\cdot c\\ \end{align}$$ We can see from this expansion that $$\phi(x) = \left\{x_1^2, \dots,x_p^2, \dots, \sqrt2x_ix_j, \dots, \sqrt{2c}x_1, \dots, \sqrt{2c}x_p, c\right\}$$ In other words, the kernel gives you the original terms $x_i$, plus all powers and interactions up to order $d$. This is not what you'll typically see in an autoencoder, which is usually based on non-linear transformations of linear projections only. Interactions can be emulated, but are not enforced. Using the multinomial theorem, this result can be generalized to other choices of $d$.
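As a quick sanity check of the $d=2$ expansion above, one can verify numerically that the explicit feature map reproduces the kernel. This is only a small sketch of the algebra, not tied to any particular library's kernel PCA implementation:

```python
import numpy as np

def poly_kernel(x, y, c=1.0, d=2):
    return (x @ y + c) ** d

def phi(x, c=1.0):
    """Explicit feature map for the degree-2 polynomial kernel."""
    p = len(x)
    squares = x ** 2                                   # x_i^2 terms
    cross = [np.sqrt(2) * x[i] * x[j]                  # sqrt(2) x_i x_j, i > j
             for i in range(1, p) for j in range(i)]
    linear = np.sqrt(2 * c) * x                        # sqrt(2c) x_i terms
    return np.concatenate([squares, cross, linear, [c]])

rng = np.random.default_rng(0)
x, y = rng.normal(size=5), rng.normal(size=5)
print(np.isclose(poly_kernel(x, y), phi(x) @ phi(y)))  # True
```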
50,298
Asymptotic t test question - regression when you do not assume normality of errors
Since you are considering large-sample theory where $n$ tends to infinity, there are some additional assumptions you need in order to make the assertion: (1) $\eta_1,\ldots,\eta_n$ are uncorrelated with equal (finite) variance; (2) the design matrix $X$ grows with $n$ in a regular way, i.e. $\frac1n X'X$ converges to a finite, positive definite limit. For example, if we assume that $X_1,\ldots,X_n$ are iid from a distribution with mean 0 and finite variance, then $\frac1n X'X$ tends to the variance matrix in probability. These are the conditions you need for the test statistics to be "asymptotically" normal.
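A small simulation, not part of the original answer, that illustrates the point: with skewed, non-normal errors and a moderately large $n$, the usual t-statistic for the slope still rejects at roughly the nominal 5% rate under the null:

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps = 500, 2000
tstats = np.empty(reps)
for r in range(reps):
    x = rng.normal(size=n)
    eps = rng.exponential(1.0, size=n) - 1.0      # skewed, non-normal errors, mean 0
    y = 1.0 + 0.0 * x + eps                       # true slope is 0 (H0 holds)
    X = np.column_stack([np.ones(n), x])
    bhat = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ bhat
    s2 = resid @ resid / (n - 2)
    se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
    tstats[r] = bhat[1] / se

# Rejection rate of |t| > 1.96 should be close to 0.05 despite non-normal errors
print(np.mean(np.abs(tstats) > 1.96))
```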
50,299
Show that $\{X(t) = \cos(t+U)\}$, $U \sim \mathrm{Unif} (0, 2\pi)$ is a wide-sense stationary process
It is a typo, but you should get zero for the integral anyway. You should have: $$\begin{aligned} \mathbb{E}(\tfrac{1}{2} \cos(t_1+t_2 + 2U)) &= \int \limits_0^{2\pi} \frac{1}{4 \pi} \cdot \cos(t_1+t_2 + 2u) \ du \\[6pt] &= \Bigg[ \frac{1}{8 \pi} \cdot \sin(t_1+t_2 + 2u) \Bigg]_{u=0}^{u=2\pi} \\[6pt] &= \frac{1}{8 \pi} \Bigg[ \sin(t_1+t_2 + 4 \pi) - \sin(t_1+t_2) \Bigg] \\[6pt] &= \frac{1}{8 \pi} \times 0 = 0. \\[6pt] \end{aligned}$$
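A quick numerical check of the integral above, plus a Monte Carlo check that the autocovariance $\mathbb{E}[\cos(t_1+U)\cos(t_2+U)] = \tfrac{1}{2}\cos(t_1-t_2)$ depends only on the lag (a sketch, not part of the original answer):

```python
import numpy as np
from scipy.integrate import quad

t1, t2 = 0.7, 1.3   # arbitrary time points

# The integral from the display above vanishes for any t1, t2
val, _ = quad(lambda u: np.cos(t1 + t2 + 2 * u) / (4 * np.pi), 0, 2 * np.pi)
print(val)   # ~0 up to numerical precision

# Monte Carlo check: autocovariance depends only on t1 - t2
U = np.random.default_rng(2).uniform(0, 2 * np.pi, 500_000)
print(np.mean(np.cos(t1 + U) * np.cos(t2 + U)), np.cos(t1 - t2) / 2)
```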
50,300
Small dataset and optimal parameters for XGboost
It is difficult to give a smooth answer without having the data and all required subject knowledge at hand. Still I can throw in some comments:
1. Your validation strategy is fine as long as you decide everything by cross-validation and not by the test data (but see 5.).
2. If your best solution picks a value at the border of the grid, then the grid is usually not well chosen. This happens for several of your parameters.
3. For such small data, a tree depth of 5+ seems too much.
4. XGBoost has important additional regularization parameters like the L1 and L2 penalties. Usually these need to be tuned as well.
5. Are the rows really independent, or are there clusters of rows that invalidate your validation strategy?
A hedged sketch of point 4 is given after this list.
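To make point 4 concrete, here is a sketch of how the L1/L2 penalties and shallower trees could enter the cross-validated search. The toy data, grid values and use of the scikit-learn wrapper are illustrative assumptions, not the original poster's setup:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, KFold
from xgboost import XGBRegressor

# Stand-in for the (small) training data
rng = np.random.default_rng(3)
X = rng.normal(size=(200, 10))
y = X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200)

# Shallow trees (point 3) and L1/L2 penalties (point 4) in the grid,
# with interior values so the optimum need not sit on a border (point 2).
grid = {
    "max_depth": [2, 3, 4],
    "learning_rate": [0.03, 0.1, 0.3],
    "reg_alpha": [0.0, 0.1, 1.0],
    "reg_lambda": [0.1, 1.0, 10.0],
}
search = GridSearchCV(
    XGBRegressor(n_estimators=300, subsample=0.8, colsample_bytree=0.8),
    grid,
    cv=KFold(n_splits=5, shuffle=True, random_state=0),
    scoring="neg_root_mean_squared_error",
)
search.fit(X, y)
print(search.best_params_)
```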