idx | question | answer
---|---|---
53,901 | Not sure if standard error of p-values makes sense in Fisher Exact Test | You have correctly computed the variance of $f$, not the variance of $\hat{p}$. To estimate the variance of $\hat{p}$ just go one step further:
$$\text{var}(\hat{p})= \text{var}\left(\frac{1}{N}\sum_{i=1}^{N}f(x_{i})\right) = \frac{1}{N} \text{var}(f). $$
Moreover, your formula simplifies greatly: it's an easy algebraic step to derive the estimator
$$\widehat{\text{var}(f)} = \hat{p}(1 - \hat{p}).$$
Whence, to find the standard error of $\hat{p}$, divide $\hat{p}(1 - \hat{p})$ by $N$ and take the square root.
After all, the distribution of $\hat{p}$ is binomial and these are the familiar formulas for sampling from a binomial distribution.
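For a sense of the magnitudes involved, here is a minimal R sketch of that calculation; the indicator draws below are made up for illustration and stand in for an actual permutation loop.

    # Monte Carlo estimate of a p-value and its standard error (illustrative values only)
    N <- 10000                               # number of Monte Carlo permutations
    f <- rbinom(N, 1, 0.03)                  # hypothetical indicators: statistic >= observed
    p_hat <- mean(f)                         # estimated p-value
    se_p <- sqrt(p_hat * (1 - p_hat) / N)    # standard error as derived above
    c(p_hat = p_hat, se = se_p)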
53,902 | Not sure if standard error of p-values makes sense in Fisher Exact Test | In general, if you are monte-carlo-ing a p-value, the only standard error that makes sense is the monte-carlo error in estimating it, and then only to show that you have done enough permutations. Under the null hypothesis, the p-value is just Uniform(0,1). Unless the p-value is really extreme, most people will be happy with you just reporting it and saying how many permutations you did - I definitely would not report a standard error in any results table.
53,903 | Logistic Regression - Getting Pearson Standardized Residuals in R vs Stata | For logistic regression, Stata defines residuals and related quantities to be those you'd get if you grouped all the observations with the same values for all the predictor variables, counted up the successes and failures for those observations, and fitted a logistic regression model to the resulting binomial data instead of the original Bernoulli data. This is a useful thing to do as (if there are multiple observations with the same covariate pattern) the resulting residuals behave more like those you're used to from least squares.
To get the same residuals from R, I suspect you will need to group the data and fit the model to the grouped data. But I'm not clear whether R is using the same definition of 'standardized residuals' as Stata, as I don't presently have access to the numerous textbooks that the R documentation references.
Here's an excerpt from the 'Methods and formulas' section of the Stata manual entry for 'logistic postestimation' (one thing I like about Stata is that the manuals provide the full formulas for everything):
Define $M_j$ for each observation as the total number of observations
sharing $j$’s covariate pattern. Define $Y_j$ as the total number of
positive responses among observations sharing $j$’s covariate pattern.
The Pearson residual for the $j$th observation is defined as
$$r_j = \frac{Y_j - M_j p_j}{\sqrt{M_j p_j(1 - p_j)}}$$
...
The unadjusted diagonal elements of the hat matrix $h_{Uj}$ are given by
$h_{Uj} = (\mathbf{XVX}')_{jj}$,
where $\mathbf{V}$ is the estimated covariance matrix of parameters.
The adjusted diagonal elements $h_j$ created by hat are then
$h_j = M_j p_j(1 - p_j)h_{Uj}$.
The standardized Pearson residual $r_{Sj}$ is $r_j / \sqrt{1 - h_j}.$
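To make the grouping idea concrete, here is a hedged R sketch (not Stata's or R's internals): collapse the data to one row per covariate pattern, refit as a binomial model, and apply the formulas above. The data frame dat, with a 0/1 outcome y and predictors x1 and x2, is hypothetical.

    Ytab <- aggregate(y ~ x1 + x2, data = dat, FUN = sum)     # Y_j: successes per covariate pattern
    Mtab <- aggregate(y ~ x1 + x2, data = dat, FUN = length)  # M_j: observations per covariate pattern
    grouped <- data.frame(Ytab[c("x1", "x2")], Y = Ytab$y, M = Mtab$y)
    fit <- glm(cbind(Y, M - Y) ~ x1 + x2, family = binomial, data = grouped)
    r <- residuals(fit, type = "pearson")   # (Y_j - M_j p_j) / sqrt(M_j p_j (1 - p_j))
    h <- hatvalues(fit)                     # should correspond to the adjusted h_j above
    r_std <- r / sqrt(1 - h)                # standardized Pearson residuals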
53,904 | Logistic Regression - Getting Pearson Standardized Residuals in R vs Stata | Pearson residuals are obtained by dividing each observation's raw residual by the square root of the corresponding variance. The idea is to get something that has variance 1, approximately. In your example, try this:
set.seed(3141)
x1 <- rnorm(100)
x2 <- rnorm(100)
y <- rbinom(100, 1, 0.25)
glm1 <- glm(y~x1+x2, family=binomial)
f1 <- fitted(glm1) # the fitted probability of y=1, for each observation
plot( residuals(glm1, "pearson"), (y-f1)/sqrt(f1*(1-f1)))
abline(0,1) # they match
The 'gap' occurs because the residuals where $Y=1$ are on one side, and those with $Y=0$ are on the other. Standardized residuals are a different animal; they divide by the estimated standard deviation of the residual; you can obtain them in R using rstandard(), though for non-linear GLMs it uses a linear approximation in the calculation.
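Continuing the example, a quick (hedged) check of what rstandard() is doing for this unweighted binomial fit:

    rs <- rstandard(glm1, type = "pearson")          # standardized Pearson residuals
    rp <- residuals(glm1, type = "pearson")
    all.equal(rs, rp / sqrt(1 - hatvalues(glm1)))    # standardization divides by sqrt(1 - leverage)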
NB residuals of any form tend not to be terribly helpful in logistic regression. With independent binary data, the only real concern is whether we've specified the mean correctly - and with modest sample sizes, plots of residuals typically provide little power to assess that.
53,905 | Meta-analysis in R with multiple SNPs | The MetABEL part of GenABEL does this. For 150 SNPs, you might find coding the loop yourself quicker than ensuring it's doing exactly what you want. (Neither should take very long)
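If you do code the loop yourself, a minimal fixed-effect inverse-variance sketch might look like this; the data frames study1 and study2, with columns snp, beta and se aligned to the same effect allele, are hypothetical.

    m <- merge(study1, study2, by = "snp", suffixes = c(".1", ".2"))
    w1 <- 1 / m$se.1^2                                     # inverse-variance weights
    w2 <- 1 / m$se.2^2
    m$beta_meta <- (w1 * m$beta.1 + w2 * m$beta.2) / (w1 + w2)
    m$se_meta <- sqrt(1 / (w1 + w2))
    m$z <- m$beta_meta / m$se_meta
    m$p <- 2 * pnorm(-abs(m$z))                            # two-sided meta-analysis p-value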
53,906 | Meta-analysis in R with multiple SNPs | A quicker option would be to use the built-in meta-analysis option from PLINK:
old plink site: http://zzz.bwh.harvard.edu/plink/metaanal.shtml
new 1.9 site: https://www.cog-genomics.org/plink/1.9/postproc#meta_analysis
It is quite straightforward, and no additional coding is needed. You could then use the meta package from R to make forest plots of significant results. My experience with the meta package is that it isn't very practical when doing more than a few meta-analyses.
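For the forest-plot step, a hedged sketch with the meta package; the per-study log odds ratios below are made up, and in practice would come from each study's association results for one significant SNP.

    library(meta)
    logOR <- c(0.12, 0.18, 0.09)                 # hypothetical per-study effects (log OR)
    se <- c(0.05, 0.07, 0.04)                    # hypothetical standard errors
    mg <- metagen(TE = logOR, seTE = se,
                  studlab = c("Study A", "Study B", "Study C"), sm = "OR")
    forest(mg)                                   # forest plot of per-study and pooled estimates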
53,907 | How to convert nominal dataset into numerical dataset? | Although formally you may do K-means clustering on nominal data after converting nominal variables into dummy variables, this is regarded as an inadequate approach. To use K-means meaningfully, you must have all variables at scale (interval or ratio) level.
One of the ways to quantify a set of nominal variables is to apply multiple correspondence analysis. It can be seen as a dimension-reduction technique, like PCA, only for nominal data. You could use the resultant quantifications (the coordinates) as the input to K-means, if you like.
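A hedged sketch of this pipeline, assuming the FactoMineR package and a hypothetical data frame dat whose columns are all factors:

    library(FactoMineR)
    mca <- MCA(dat, ncp = 5, graph = FALSE)   # multiple correspondence analysis
    coords <- mca$ind$coord                   # object coordinates on the first 5 dimensions
    km <- kmeans(coords, centers = 4, nstart = 25)
    table(km$cluster)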
53,908 | How to convert nominal dataset into numerical dataset? | For nominal data that represents token occurrences in a text document, as in RCV1v2, you can use the TF-IDF transform as a way to normalize the data.
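For illustration, here is one common variant of the TF-IDF weighting applied to a made-up document-term count matrix (rows are documents, columns are terms):

    X <- matrix(c(2, 0, 1,
                  0, 3, 1,
                  1, 1, 0), nrow = 3, byrow = TRUE)
    tf <- X / rowSums(X)                    # term frequency within each document
    idf <- log(nrow(X) / colSums(X > 0))    # inverse document frequency
    tfidf <- sweep(tf, 2, idf, `*`)         # down-weights terms that appear in most documents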
53,909 | How to convert nominal dataset into numerical dataset? | I'd calculate a similarity matrix using Jaccard distance and then run k-means.
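The answer does not spell out how to feed a distance matrix to k-means; one possible reading is to embed the Jaccard distances into coordinates first. A hedged sketch, with a hypothetical 0/1 dummy-coded matrix dummies:

    d <- dist(dummies, method = "binary")   # "binary" in dist() is the Jaccard distance
    xy <- cmdscale(d, k = 3)                # classical MDS embedding of the distances
    km <- kmeans(xy, centers = 4, nstart = 25)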
53,910 | How to convert nominal dataset into numerical dataset? | k-means is not appropriate for this kind of data, for a very simple reason.
Assume you convert the data set so that value a = 1, value b = 2, value c = 3. A cluster that consists of 10 objects with label a and 10 objects with label c will then have its center at the value of b. And even when you have binary data, what is an average of 0.1314 supposed to mean?
Do not convert the data set. Choose an appropriate distance function that can handle binary attributes, and use an algorithm that only needs distances and not a vector space. Methods that require a Euclidean vector space (such as k-means, in order to compute a mean) are not meaningful for data that just isn't from a Euclidean space.
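The point about the cluster center can be checked in one line of R:

    mean(c(rep(1, 10), rep(3, 10)))   # 2, i.e. the code for b, a label no object actually has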
53,911 | Why does GBM predict different values for the same data | The factors, as always. Seems like the model is not using the actual value of the factor, but rather something like the position in the factor-levels.
I was able to reproduce your error with the data OrchardSprays
library(gbm)
data(OrchardSprays)
model <- gbm(decrease ~ rowpos+colpos+treatment, data=OrchardSprays, n.trees=1000, distribution="gaussian", interaction.depth=3, bag.fraction=0.5, train.fraction=1.0, shrinkage=0.1, keep.data=TRUE)
firstrow <- OrchardSprays[1,]
str(firstrow)
manualFirstrow <- data.frame(decrease=57,rowpos=1,colpos=1,treatment="D")
str(manualFirstrow)
predict(model,newdata=firstrow,n.trees=100)
predict(model,newdata=manualFirstrow,n.trees=100)
predict(model,newdata=data.frame(decrease=57,rowpos=1,colpos=1,treatment="A"),n.trees=100)
output:
> predict(model,newdata=firstrow,n.trees=100)
[1] 50.31276
> predict(model,newdata=manualFirstrow,n.trees=100)
[1] 20.67818
> predict(model,newdata=data.frame(decrease=57,rowpos=1,colpos=1,treatment="A"),n.trees=100)
[1] 20.67818
since A has position 1 in the levels of OrchardSprays$treatment. Adding the levels to the data declaration does the trick
manualFirstrow <- data.frame(decrease=57,rowpos=1,colpos=1,treatment=factor("D",levels(OrchardSprays$treatment)))
str(manualFirstrow)
predict(model,newdata=firstrow,n.trees=100)
predict(model,newdata=manualFirstrow,n.trees=100)
output:
> predict(model,newdata=firstrow,n.trees=100)
[1] 50.31276
> predict(model,newdata=manualFirstrow,n.trees=100)
[1] 50.31276
53,912 | How much sub-questions impact one ordinal question in a survey? | I love your question, and I'm going to answer it a little more broadly than you asked. There are four main approaches that I would use to identify opportunities to improve Net Promoter Score, in order of analytical sophistication and the maturity of the project you're using NPS to evaluate. (For those not familiar, Net Promoter is a common satisfaction score methodology used by many companies. Unless you tend to avoid surveys, you have probably answered the "net promoter" question many times.)
Segmentation
Qualitative evaluation of open-ended responses
Quantitative evaluation of open-ended responses
Experimentation
Segmentation: A business can usually divide its customers into a small number of groups along a couple of key dimensions. For example, between "new" and "returning" customers, or "enterprise" and "small business". These will often closely align with the distinctions made in pricing and in the business's financial reporting, and, happily, they often have dramatically different NP scores. The differences can sometimes surprise. The great thing about this approach is that it requires very little communications overhead. Most people listening to your data will already be familiar with the segments you describe and can envision the aspects of the product that could drive some groups to be more or less satisfied than others. It also makes it relatively easy to organize followup work and report on subsequent improvements.
Qualitative evaluation of open-ends: A common component of Net Promoter implementations is a followup question along the lines of "why did you answer the way that you did". Often, for a project new to Net Promoter, simply reading these responses and associating them with their ordinal responses will have people slapping their foreheads. "I thought our customers liked that feature!" "People love the rubberized grips - let's put them on everything!" This approach often peters out fairly soon, as a business identifies all the surface-level opportunities and makes strategic decisions to resolve or not resolve them.
Quantitative evaluation of open-ends: Following up on the qualitative evaluation, quantitative evaluation can yield some additional surprises. This approach would generally be to label each open-ended response with one or more categories, which can be done by hand at small scales, or using a variety of software at larger scales. This can reveal additional insights along the lines of "I knew that was an issue... but I didn't realize it affected 30% of our customers!" or "Most people who complain about xyz also complain about abc - maybe there is a common root cause"
Experimentation: It's often difficult to perform rigorous controlled-variable random-assignment experiments with Net Promoter. Because Net Promoter is intended to be a very holistic measure of the relationship between a company and its customers, companies are usually limited in their ability to experiment with the entirety of the relationship, due to the expense and risk of maintaining what might amount to two separate businesses at the same time. That said, other less rigorous forms of experiments are possible.
Every major decision the business makes can be evaluated for its potential to impact NPS, and then the resulting changes can be subsequently measured. Because it fails to isolate a single variable at a time, care must be taken to build realistic predictions and also to be very cautious about results that were not predicted in advance.
Sometimes we are lucky enough that natural experimental groups emerge that allow us to experiment with the past. A national retailer may roll out a new store format only on the west coast due to the logistical constraints of construction, or a content company may need to censor some types of searches in countries where the local regulations are different. It's not random assignment, but sometimes it's close.
Prediction: I know that I didn't address your real question about using other questions in surveys to "predict" NPS, and that's intentional. The approach that you describe has certain limitations and although I'm planning to try it again myself in the near future, I've had limited success with it in the past.
There are two main predictor approaches I've seen: correlation, and logit regression. Correlation looks at the strength of the relationship between two variables (do they always go up and down together? opposite? no relation?) The idea would be to find the variables most strongly correlated with NPS. Logit regression is a technique that takes all the variables you give it and uses them to try to predict the probability of a particular outcome. For example, you can try to predict the probability of each of your three NPS outcomes (Promoter, Neutral, Detractor). From this you can build a model that shows how much variation in different variables can affect NPS up or down, taking the NPS outcome with the highest predicted probability as the definitive outcome. I have also heard of people using Structural Equation Modelling with NPS, although I know little of this approach.
Some limitations of the predictor approach
Few good independent variables in surveys: In order to be actionable, this approach requires that you think of NPS as a dependent variable and treat other questions as independent. But, even many of the "hard facts" type questions, such as you might use for your key customer segments, are not truly independent from NPS. A returning customer is a returning customer in part because they had a positive experience in the past. Most customer satisfaction questions move together in lockstep, unless you have an offering with very distinctive and low-interacting components that can be performing at two different levels of service (a leisure hotel may try to break into the conference business and do so very poorly at first, in which case separate satisfaction questions on bedroom quality and conference room quality may be useful).
Difficulty communicating: This analytical approach is usually quite far separated from the group that will be implementing recommendations. Because it relies on relatively sophisticated analysis, it's often difficult for implementors to take actions based on these sorts of recommendations, because they don't understand what needs to be fixed.
53,913 | How much sub-questions impact one ordinal question in a survey? | The NPS works well as a performance monitoring or controlling tool. But what you want to find out is something else: You want to identify the true drivers of satisfaction.
The NPS is an aggregate score. If you want to say something about drivers of satisfaction, then you have to develop a model which explains the 10-point-scale "ultimate question".
However, rating scales are contaminated with scale usage heterogeneity (see paper). If you run a regression model, you will find that those sub-questions are highly correlated, causing high VIFs and maybe no significant betas, depending on how many sub-questions you have. You might also be overconfident in the significance of sub-questions, depending on the exact model set-up.
Consider this simple R script:
nobs <- 1000                     # number of respondents
p <- 12                          # number of rating items
means <- rnorm(nobs)             # each respondent's own baseline (scale-usage location)
vars <- 1/rgamma(nobs, 3, 3)     # each respondent's own spread (scale-usage variance)
data <- matrix(NA, nobs, p)
for (i in 1:nobs) {
  data[i, ] <- rnorm(p, means[i], sqrt(vars[i]))   # items are independent given the respondent
}
cor(data)
Despite the fact that there is no substantive relationship between the 12 variables (within a respondent they are independent draws), the correlations are far from being 0.
If you further reduce the scale level (Promoter yes/no), you throw away information.
However, scale-adjusting models are very challenging, especially when you have ordinal and categorical variables, and if you look for a ready-made solution.
You could treat bottom, middle and top box as categorical outcomes and use a Bayesian Network to model the relationships. BNs treat everything categorical, thus providing a nice general purpose tool.
Have you considered a conjoint experiment? Depending on the service or product this is a nice alternative.
53,914 | How much sub-questions impact one ordinal question in a survey? | Some good answers already here. Here is a much simpler one, which is really an addition to the good answers you have received, not a substitute. Ordered logistic regression is the appropriate tool for modelling an ordinal response variable. There is an implementation - polr() - in Venables & Ripley's library(MASS) in R.
Then you are back in the standard world of model selection, on which there are many good questions and good answers on Cross-Validated.
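A hedged sketch of such a model for the 0-10 NPS item, assuming a hypothetical data frame nps_data with the overall rating nps and sub-questions q1, q2, q3:

    library(MASS)
    nps_data$nps <- factor(nps_data$nps, levels = 0:10, ordered = TRUE)
    fit <- polr(nps ~ q1 + q2 + q3, data = nps_data, Hess = TRUE)
    summary(fit)   # coefficients indicate which sub-questions move the overall rating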
53,915 | When will PCA be equivalent to ICA? | PCA will be equivalent to ICA if all the correlations in the data are limited to second-order correlations and no higher-order correlations are found. Said another way, when the covariance matrix of the data can explain all the redundancies present in the data, ICA and PCA should return the same components.
53,916 | When will PCA be equivalent to ICA? | My answer here may help you.
I believe that PCA will be equivalent to ICA, only when the independent components are orthogonal to begin with. PCA gives the independent components of your data, only in so far as an orthogonal transformation might.
53,917 | When will PCA be equivalent to ICA? | Here is my understanding, gathered largely from the paper https://www.sciencedirect.com/science/article/abs/pii/0165168494900299
First, some non-rigorous reasoning. PCA produces an orthogonal matrix defined up to a $\pm 1$ scaling of the columns, while ICA (when applicable) yields a generic matrix defined up to any scaling and reordering of the columns. If ICA and PCA give the same matrix, then the true matrix $A$ can be obtained from it by column scaling and reordering, and thus has orthogonal columns, and this is the condition of PCA and ICA coincidence. When the sources are Gaussian, ICA simply does not work.
Now for an attempt at a more thorough argument. I am describing this in terms of random variables, not samples from them - the latter will behave similarly, but with an effect of sampling noise. (So, a 'known random vector $x$' below means that the joint distribution of its components is fully known).
ICA
Suppose $x$ and $s$ are column vectors of random variables with zero means, and $x = As$, where A is a full-rank constant matrix (possibly non-square with more rows than columns), and the components of $s$ are independent. ICA is defined as finding the matrix $A$ for a known vector $x$ (and then $s$ can also be found from the equation). It is not possible to find exactly $A$ as any matrix from the set obtained from $A$ by an arbitrary scaling of each column and an arbitrary permutation of the columns will also yield an $s$ whose components are independent. Thus, the best that can be done is finding this set of matrices. And this is possible if the components of true vector $s$ have finite, nonzero variances, and at most one of them is Gaussian (this is the theorem of ICA identifiability, see COROLLARY 13 in the paper).
PCA
Let $x$ be a column vector of random variables with zero mean and a finite covariance matrix. PCA is defined as finding an orthogonal matrix $Q$ such that for $\tilde{s}$ defined by $x = Q \tilde{s}$, the components of $\tilde{s}$ are uncorrelated and their variances form a (weakly) decreasing sequence. Again, there is an indetermination: for any such $Q$, a matrix obtained by multiplying any subset of the columns by -1 will also satisfy the conditions. If the variances of the components of $\tilde{s}$ are all different, this -1 scaling is the only indetermination, otherwise, for any set of components of $s$ with the same variance, $Q$ can be multiplied by an arbitrary rotation matrix in the space of the corresponding columns.
Now let us consider when ICA and PCA can give the same result. Let $x = As$, with $A$ being a square matrix and $s$ satisfying the conditions of ICA identifiability. Let $x = Q \tilde{s}$ be the PCA of $x$, and suppose for now that the variances of the components of $\tilde{s}$ are all different. If $Q$ satisfies the conditions of ICA then it must coincide with the true matrix $A$ up to a scaling and reordering of columns. Thus, $A$ must have orthogonal columns (which is equivalent to $A^T A$ being diagonal). Conversely, if $A$ has orthogonal columns, then by scaling them to unit norm and ordering by strictly decreasing variance of $s_i$, we obtain an orthogonal matrix and uncorrelated sources in the proper order, and thus, one of the possible PCA decompositions.
To sum up, the orthogonality of the columns of $A$ is a necessary and sufficient condition of ICA and PCA 'coincidence' under the above assumptions. The 'coincidence' should be understood as the fact that the sets of matrices obtained by both methods intersect, and each has minimal indetermination.
Concerning the question about an example where the covariance of $x$ is not diagonal, we can take $s_1$ uniform on [-1,1], $s_2$ uniform on [-2,2], and $A$ a rotation in 2D by an angle that is not an integer multiple of $\pi / 2$ (otherwise $cov(x)$ is diagonal). Here is an example for a rotation by $\pi / 6$, with 1000 samples:
The lines are: black - true axes, blue - principal component directions, red - independent component directions. PCA was computed by the MATLAB function pca, while ICA was obtained using the EEGLAB package with a call: "runica(y, 'extended', 1);" The extended algorithm is needed if some components have distributions with negative excess kurtosis (called subgaussian, or platykurtic, distributions), as is the case here for uniform distributions.
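A rough R analogue of this 2D example (the original used MATLAB's pca and EEGLAB's runica; the fastICA package is assumed instead):

    library(fastICA)
    set.seed(1)
    n <- 1000
    s <- cbind(runif(n, -1, 1), runif(n, -2, 2))    # independent uniform sources
    theta <- pi / 6
    A <- matrix(c(cos(theta), sin(theta),
                  -sin(theta), cos(theta)), 2, 2)   # rotation by pi/6 (orthogonal columns)
    x <- s %*% t(A)                                 # mixed observations, x_i = A s_i
    prcomp(x)$rotation                              # principal component directions
    fastICA(x, n.comp = 2)$A                        # estimated mixing, approx. t(A) up to order/scale/sign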
If we remove the assumption of different variances of $\tilde{s}_i$, keeping the ICA identifiability and the orthogonality of the columns of $A$, PCA will be unable to select the 'true' basis within any subspace corresponding to the same variance, while ICA, as always, will find this basis up to a reordering and rescaling. Thus, the set of matrices satisfying the PCA conditions will include the set of ICA matrices with normalized columns, but the PCA set will be substantially larger (and thus give less information about the true sources).
Now let us try to remove the assumption of ICA identifiability. As an example, consider the situation when $A$ is square and $s$ has a multivariate Gaussian distribution with a diagonal covariance matrix having different values on the diagonal. In this case, the $\tilde{s}$ obtained by PCA will also be multivariate Gaussian, and its components will be uncorrelated and thus independent. So, the set of matrices produced by PCA will be included in the set provided by ICA. However, the indetermination in ICA will be larger than usual (i.e., than in the case of ICA identifiability). Indeed, if $A$ is the true matrix, and $D$ is the (diagonal) covariance matrix of $s$ then $A D^{1/2} R$ will be an ICA matrix for any rotation R. This can be seen from the fact that $A D^{1/2}$ is an ICA matrix with such a scaling that the corresponding sources have unit covariance (and are multivariate Gaussian). Any rotation of these sources will conserve this property and thus keep them independent. So, under these assumptions, if $A$ does not have orthogonal columns, the PCA matrix set will not include the true $A$, while the ICA set will, but will have a larger than usual indetermination. Conversely, if $A$ has orthogonal columns, PCA will find $A$ with its minimal indetermination, while the ICA matrix set will contain A together with many other matrices (in practice, a random one of them will be output by software).
As a conclusion for applications, if the columns of $A$ are known to be orthogonal and $s_i$ have different variances, go with PCA (regardless of the distribution of $s_i$, but if more than one of them are Gaussian, then, out of the two methods, only PCA will recover $A$ and $s$ under these assumptions). In all other cases, ICA is the way to go.
53,918 | When will PCA be equivalent to ICA? | Putting aside the issue whether 'extremely similar' (please define what you mean by this; do you mean within the sampling error?) is the same as 'coincide', I believe the answer is no. To be equivalent, they must imply each other in both directions. If outputs are independent as in ICA (but not Gaussian, because ICA does not work for (circular) Gaussian data), then they are uncorrelated as in PCA. The converse is not always true. So one can only compare both methods for non-Gaussian data. For these, since both methods operate on different moments, they are not equivalent. ICA gives generally oblique components, while PCA components are always orthogonal. Therefore, orthogonality (i.e., E(X'.X)=0) is a necessary condition for equivalence. But E(Xi'.Xj)=0 does not imply E(Xi'^2.Xj)=0, E(Xi'.Xj^2)=0, etc. for all higher-order moments and n-tuples to vanish as in ICA, so it is not a sufficient condition.
53,919 | Bayesian updating using $n$ noisy observations of Brownian motion | So $X_{n-1}$ is the target of inference. Then the correct Bayesian procedure is to calculate this posterior:
$$p(X_{n-1}|Y_0\dots Y_{n-1}\mu_0\sigma_0N_0DI)$$
(Note I have added the symbol $I$ to explicitly indicate the assumptions and model structure being used). Now because we also know that the other $X_i$ are important, but not present in the above posterior, then they must have been integrated out (via the law of total probability):
$$\int \dots \int p(X_{n-1}X_{n-2}\dots X_0|Y_0\dots Y_{n-1}\mu_0\sigma_0N_0DI)dX_{0}\dots dX_{n-2}$$
Now I will suppress the $\mu_0\sigma_0N_0DI$ from the probabilities for brevity, but they are still there hiding. Now the above is not directly calculable using the information given, but we can use the rules of probability theory to manipulate this into something we do know how to calculate. We can use Bayes' theorem to get
$$p(X_{n-1}|Y_0\dots Y_{n-1})$$
$$=\frac{\int \dots \int p(X_{n-1}\dots X_0)p(Y_{n-1}\dots Y_0|X_{n-1}\dots X_0)dX_{0}\dots dX_{n-2}}{\int \dots \int p(X_{n-1}\dots X_0)p(Y_{n-1}\dots Y_0|X_{n-1}\dots X_0) dX_{0}\dots dX_{n-1}}$$
Because we have a ratio of integrals with the same integrand, we can eliminate constants which don't depend on $X_0,\dots,X_{n-1}$. This makes the equations slightly smaller and easier to write. Now we can use the "Markovian" property of Brownian motion to simplify the joint prior.
$$p(X_{n-1}\dots X_0)=p(X_{n-1}|X_{n-2})p(X_{n-2}|X_{n-3})\dots p(X_1|X_0)p(X_0)$$
Now for Brownian motion we have $p(X_i|X_{i-1})\sim N(X_{i-1},D)$ (remembering that, conditional on $X_{i-1}$, it is just a constant, not a random variable), and combined with your prior for $X_0$ we have:
$$p(X_{n-1}\dots X_0)\propto\exp\left(-\frac{1}{2D}\sum_{i=1}^{n-1}(X_i-X_{i-1})^2-\frac{(X_0-\mu_0)^2}{2\sigma_0^2}\right)$$
Now for the likelihood I will make an assumption which I think you have implicitly made. This is that the $Y_i$ are conditionally independent, given $X_i$. I think this is what you mean by "noise is independent from observation to observation". This allows us to factor the likelihood as:
$$p(Y_{n-1}\dots Y_0|X_{n-1}\dots X_0)=\prod_{i=0}^{n-1}p(Y_{i}|X_{i})\propto\exp\left(-\frac{1}{2N_0}\sum_{i=0}^{n-1}(Y_i-X_i)^2\right)$$
Hence the "integrand" is given by the product of the prior and the likelihood, stripped of constants which don't depend on $X_i$. This is given by:
$$p(X_{n-1}\dots X_0)p(Y_{n-1}\dots Y_0|X_{n-1}\dots X_0)$$
$$\propto\exp\left(-\frac{1}{2N_0}\sum_{i=0}^{n-1}(Y_i-X_i)^2-\frac{1}{2D}\sum_{i=1}^{n-1}(X_i-X_{i-1})^2-\frac{1}{2\sigma_0^2}(X_0-\mu_0)^2\right)$$
$$\propto\exp\left(-\frac{X_0^2-2X_0\mu_0}{2\sigma_0^2}-\sum_{i=0}^{n-1}\frac{X_i^2-2X_iY_i}{2N_0}-\sum_{i=1}^{n-1}\frac{(X_i-X_{i-1})^2}{2D}\right)$$
Now we "simply" integrate this over $X_0,\dots,X_{n-2}$, noting that we only need to keep the terms which depend on $X_0,\dots,X_{n-1}$. This would be a straightforward but tedious process if we were to integrate directly from this equation. An easier way to obtain the integral is to evaluate the posterior for the $n=2$ and $n=3$ cases, which will establish a recursive relationship between the posterior $p(X_t|Y_0,\dots,Y_t)$ and $p(X_{t+1}|Y_0,\dots,Y_{t+1})$.
$$\bf{}n=2\text{ case}$$
The integrand we require is given by:
$$p(X_{1}X_0)p(Y_{1}Y_0|X_{1}X_0)$$
$$\propto\exp\left(-\frac{X_0^2-2X_0\mu_0}{2\sigma_0^2}-\frac{X_1^2+X_0^2-2X_1Y_1-2X_0Y_0}{2N_0}-\frac{(X_1-X_0)^2}{2D}\right)$$
$$=\exp\left(-a_0X_0^2+b_0X_0-\frac{X_1^2-2X_1Y_1}{2N_0}-\frac{X_1^2}{2D}\right)$$
Where $a_0=\frac{1}{2\sigma_0^2}+\frac{1}{2N_0}+\frac{1}{2D}=\frac{1}{2S}+\frac{1}{2D}$ and $b_0=\frac{\mu_0}{\sigma_0^2}+\frac{Y_0}{N_0}+\frac{X_1}{D}=\frac{M}{S}+\frac{X_1}{D}$. I have used $M$ and $S$ from your definition in the question. Now because $-a_0X_0^2+b_0X_0=-a_0\left(X_0-\frac{b_0}{2a_0}\right)^2+\frac{b_0^2}{4a_0}$ we can consider $\hat{X}_0=\frac{b_0}{2a_0}$ as an estimate for $X_0$ with variance equal to $v_0=\frac{1}{2a_0}$. Using these relations we have $\frac{b_0^2}{4a_0}=\frac{\hat{X}_0^2}{2v_0}$. Note that it has the same weighted average form as your $M$ for $n=1$ case. Using this we have a simple gaussian integral, which basically replaces $X_0$ by its estimate. To see this note that $-a_0\hat{X}_0^2+b_0\hat{X}_0=\frac{b_0^2}{4a_0}$, and we have:
$$\int p(X_{1}X_0)p(Y_{1}Y_0|X_{1}X_0)dX_0\propto\sqrt{\frac{\pi}{a_0}}\exp\left(\frac{b_0^2}{4a_0}-\frac{X_1^2-2X_1Y_1}{2N_0}-\frac{X_1^2}{2D}\right)$$
$$\propto\exp\left(-a_1X_1^2+b_1X_1\right)$$
where $a_1=\frac{1}{2N_0}+\frac{1}{2(D+S)}$ and $b_1=\frac{M}{D+S}+\frac{Y_1}{N_0}$. As before, this gives rise to the estimate $M_1=\frac{b_1}{2a_1}$ with variance $S_1=\frac{1}{2a_1}$. Additionally the form for the above expression is a gaussian kernel (i.e. a gaussian pdf stripped of normalising constants). So the posterior for $n=2$ is given by:
$$p(X_1|Y_0Y_1\mu_0\sigma_0N_0DI)=\frac{1}{\sqrt{2\pi S_1}}\exp\left(-\frac{1}{2S_1}\left[X_1-M_1\right]^2\right)$$
Where $M_1=\frac{MN_0+Y_1(D+S)}{N_0+D+S}$ is the estimate and $S_1=\frac{N_0(D+S)}{N_0+D+S}$. Notice here that this is the same form as for $n=1$ except with $M$ as the new "prior estimate" and $S+D$ as the new "prior variance". Note that $D$ in the prior variance accounts for the uncertainty in the inference path $X_0\to X_1$ and $S$ accounts for uncertainty in the inference path $\mu_0,Y_0\to X_0$.
$$\bf{}n=3\text{ case}$$
The integration over $X_0$ is the same, however the integration over $X_1$ is different, because now $X_2$ can be used to help estimate $X_1$ - this is due to the additional term $\frac{(X_2-X_1)^2}{2D}$. So we now have (note the similar form to the $n=2$ case):
$$\int p(X_2X_1X_0)p(Y_2Y_1Y_0|X_2X_1X_0)dX_0$$
$$\propto\exp\left(\frac{b_0^2}{4a_0}-\frac{X_1^2-2X_1Y_1}{2N_0}-\frac{X_1^2}{2D}
-\frac{X_2^2-2X_2Y_2}{2N_0}-\frac{(X_2-X_1)^2}{2D}\right)$$
$$\propto\exp\left(-a_1^{'}X_1^2+b_1^{'}X_1-\frac{X_2^2-2X_2Y_2}{2N_0}-\frac{X_2^2}{2D}\right)$$
Where $a_1^{'}=\frac{1}{2S_1}+\frac{1}{2D}$ and $b_1^{'}=\frac{M_1}{S_1}+\frac{X_2}{D}$. We are now ready to integrate out $X_1$ and we get:
$$\int p(X_2X_1X_0)p(Y_2Y_1Y_0|X_2X_1X_0)dX_0dX_1$$
$$\propto\exp\left(\frac{[b_1^{'}]^2}{4a_1^{'}}-\frac{X_2^2-2X_2Y_2}{2N_0}-\frac{X_2^2}{2D}\right)\propto\exp\left(-a_2X_2^2+b_2X_2\right)$$
Where $a_2=\frac{1}{2N_0}+\frac{1}{2(D+S_1)}$ and $b_2=\frac{M_1}{S_1+D}+\frac{Y_2}{N_0}$. As before these give a gaussian kernel:
$$p(X_2|Y_0Y_1Y_2\mu_0\sigma_0N_0DI)=\frac{1}{\sqrt{2\pi S_2}}\exp\left(-\frac{1}{2S_2}\left[X_2-M_2\right]^2\right)$$
Where $M_2=\frac{M_1N_0+Y_2(D+S_1)}{N_0+D+S_1}$ is the estimate and $S_2=\frac{N_0(D+S_1)}{N_0+D+S_1}$. Notice here that this is the same form as for $n=2$ except with $M_1$ as the new "prior estimate" and $S_1+D$ as the new "prior variance". Note that $D$ in the prior variance accounts for the uncertainty in the inference path $X_1\to X_2$ and $S_1$ accounts for uncertainty in the inference path $\mu_0,Y_0,Y_1\to X_1$.
$$\bf{}\text{General case}$$
This has an obvious general case via mathematical induction on the previous two cases. The result can be stated recursively. Set $M_0=\frac{Y_0\sigma_0^2+\mu_0N_0}{\sigma^2_0+N_0}$ and $S_0=\frac{\sigma^2_0N_0}{\sigma^2_0+N_0}$. Then, given that the posterior mean and variance after observing $Y_0,\dots,Y_t$ are $M_t$ and $S_t$ respectively, the posterior mean and variance given $Y_0,\dots,Y_{t+1}$ are:
$$M_{t+1}=\frac{M_tN_0+Y_{t+1}(D+S_t)}{N_0+D+S_t}$$
$$S_{t+1}=\frac{N_0(D+S_t)}{N_0+D+S_t}$$
All posterior distributions are normal. Hence for the most recent part of the chain the posterior mean is given by:
$$M_{n-1}=\frac{M_{n-2}N_0+Y_{n-1}(D+S_{n-2})}{N_0+D+S_{n-2}}$$
$$S_{n-1}=\frac{N_0(D+S_{n-2})}{N_0+D+S_{n-2}}$$
You would then recursively "build up" this estimate starting from $M_0$ and $S_0$
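A minimal R sketch of this recursion (function and variable names are mine, not from the question); it is essentially the Kalman filter update for a local-level model:
posterior_filter <- function(y, mu0, sigma0_sq, N0, D) {
  M <- (y[1] * sigma0_sq + mu0 * N0) / (sigma0_sq + N0)   # M_0
  S <- (sigma0_sq * N0) / (sigma0_sq + N0)                # S_0
  for (t in seq_along(y)[-1]) {
    prior_var <- S + D                                    # propagate: add the diffusion variance D
    M <- (M * N0 + y[t] * prior_var) / (N0 + prior_var)   # update with Y_t
    S <- (N0 * prior_var) / (N0 + prior_var)
  }
  c(mean = M, var = S)                                    # posterior mean and variance for X_{n-1}
}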
53,920 | Bayesian updating using $n$ noisy observations of Brownian motion | I'm a bit late to the party, but if this is the sort of problem you'll be tackling then you'll want to read Chapter 2 of Rasmussen and Williams, Gaussian Processes for Machine Learning. Equations 2.22-2.24 solve a problem that includes yours as a special case.
53,921 | Delta method and correlated variables | Yes, you can still use the delta method with correlated variables.
Let us label your function $f(\theta)$, where $\theta = (\beta, \omega)^T$ and $f(\theta) = \beta / (1-\omega)$. The delta method is based upon the Taylor expansion:
$f(\hat{\theta}) \approx f(\theta) + (\hat{\theta} - \theta)^Tf'(\theta)$
Rearranging terms and squaring both sides results in:
$(f(\hat{\theta}) - f(\theta))^2 \approx (\hat{\theta} - \theta)^Tf'(\theta)f'(\theta)^T(\hat{\theta} - \theta)$
Taking expectations:
$\text{Var} f(\hat{\theta}) \approx \mathbb{E}(\hat{\theta} - \theta)^Tf'(\theta)f'(\theta)^T(\hat{\theta} - \theta)$
Taking derivatives of $f$ and evaluating $f'$ at $\hat{\theta}$ gives:
$f'(\hat{\theta})f'(\hat{\theta})^T = \frac{1}{(1-\hat{\omega})^2} \begin{bmatrix} 1 & \hat{\beta} / (1 - \hat{\omega}) \\ \hat{\beta} / (1 - \hat{\omega}) & \hat{\beta}^2 / (1 - \hat{\omega})^2 \end{bmatrix}$
Writing out the full expression for $\text{Var}f(\hat{\theta})$ and substituting estimates:
$\widehat{\text{Var}} f(\hat{\theta}) = \frac{1}{(1-\hat{\omega})^2}(\hat{\sigma}^2_{\beta} + 2\hat{\sigma}_{\beta \omega} \hat{\beta} / (1-\hat{\omega}) + \hat{\sigma}^2_{\omega}\hat{\beta}^2 / (1 - \hat{\omega})^2)$
You can see that positive correlation between $\beta$ and $\omega$ is going to increase the variance of the estimate of the long-run effect; it means there's a negative correlation between the estimates of $\beta$ and $1 - \omega$, the numerator and denominator of the long-run effect, so the estimated numerator and denominator tend to move in opposite directions, which naturally increases variability relative to the uncorrelated case.
Note that the delta method can fail miserably, so you might want to check its performance via simulation, e.g., by specifying all the parameters and creating many data sets with different errors, estimating the long run effect for each data set, calculating the standard deviation of the long run effect estimates, and comparing that to the delta method estimates of the standard error for the various data sets.
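As a concrete sketch, the delta-method standard error can be computed directly from the estimates and their covariance matrix; the numbers below are made up purely for illustration:
b <- 0.8; w <- 0.6                          # beta-hat and omega-hat (made-up values)
V <- matrix(c(0.010, 0.004,
              0.004, 0.006), 2, 2)          # estimated covariance of (beta-hat, omega-hat)
grad <- c(1 / (1 - w), b / (1 - w)^2)       # gradient of beta / (1 - omega)
se <- sqrt(drop(t(grad) %*% V %*% grad))    # delta-method standard error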
53,922 | Can one force randomness in a sample? | The procedure you describe here is known as quota sampling. This is a non-random sampling method. The procedure is frequently used in situations where no sampling frame is available. In such a situation it provides a convenient and cheap way to draw a sample.
Stratified sampling is different. In stratified sampling the population is divided into a certain number of non-overlapping sub-populations, and each sub-population is sampled independently. Or to put it differently, the sampling rate varies between the sub-populations.
Example: To draw a simple random sample, you apply a uniform sampling rate to the whole population, say 10%. In stratified sampling you split the population into two groups, say rural and urban. For each group you apply a different sampling rate, e.g. 5% for the urban population and 20% for the rural population.
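A toy R sketch of that example (the frame of 10,000 units is simulated here just for illustration):
set.seed(1)
frame <- data.frame(id = 1:10000,
                    area = sample(c("urban", "rural"), 10000, replace = TRUE, prob = c(0.8, 0.2)))
urban <- frame[frame$area == "urban", ]
rural <- frame[frame$area == "rural", ]
smp <- rbind(urban[sample(nrow(urban), round(0.05 * nrow(urban))), ],   # 5% of the urban stratum
             rural[sample(nrow(rural), round(0.20 * nrow(rural))), ])   # 20% of the rural stratum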
53,923 | Can one force randomness in a sample? | It might be easy to make the mistake (if you are careless) of interpreting this as stratified sampling instead of quota sampling. As whuber pointed out in the comments below, stratified sampling requires the partitioning of the population into groups BEFORE sampling.
"This is known as stratified sampling. It does not make it "less random" than simple random sampling.
For smaller samples, it is actually more appropriate. With simple random sampling, your samples are more likely to over-represent a category of the population."
53,924 | A problem with two ways of calculating AIC | Your first way is the correct and general definition. The second and third ways assume a Gaussian likelihood.
The second method ignores the constant so the two are not equivalent. It is ok to ignore the constant as it just shifts the AIC for all models, and so makes no difference to the relative ordering of models. Consequently some textbooks and a lot of software use the second formula. But you can't compare the AIC from the two formulas as a result.
In general, don't compare AIC values from different software packages as they often do different things with the constant. Some packages (e.g., Eviews) will also do some additional scaling.
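A quick R check of the point about the constant, assuming the shortcut formula is the usual n*log(RSS/n) + 2k for a Gaussian linear model: the two versions differ only by a term that depends on n, so they rank models identically but their values are not comparable.
set.seed(1)
x <- rnorm(50); y <- 1 + 2 * x + rnorm(50)
fit <- lm(y ~ x)
n <- length(y); k <- length(coef(fit)) + 1            # +1 for the error variance
aic_full  <- AIC(fit)                                 # exact Gaussian log-likelihood version
aic_short <- n * log(sum(resid(fit)^2) / n) + 2 * k   # shortcut that drops the constant
aic_full - aic_short                                  # equals n * (log(2 * pi) + 1)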
53,925 | For two offset sampled data series, what is the best estimate of the offset between them? | The maximum of the cross-correlation function can be used for that.
http://en.wikipedia.org/wiki/Cross-correlation
(at least I know that it was used successfully in EEG time-series analysis, paper by Woody).
This is an example I hacked together:
The top plot shows two noisy chirp-signals, the red one is offset by some 80 sampling points. The cross-correlation plot shows a clear peak at the correct offset.
In MATLAB you can get the tau-estimate with
[xc,lags]=xcorr(y1,y2);   % cross-correlation values and the corresponding lags
[m,i]=max(xc);            % location of the peak
tau=lags(i);              % lag at the peak = estimated offset tau
where y1 and y2 are the two input signals.
This is the MATLAB code for generating the figure:
% generate 2 signals
t = 0:0.001:0.5;
y = chirp(t,0,1,150);
y2= [cos(0:0.1:2*pi) y];
y2= y2(1:size(y,2));
% add noise
yr =y +0.9*rand([1 size(y,2)]);
y2r=y2+0.9*rand([1 size(y,2)]);
% plot signals
subplot( 2,1,1)
plot(yr);
hold on
plot( y2r, 'r');
hold off
% plot cross-correlation
subplot( 2, 1,2)
[z, lags]=xcorr(yr, y2r);
plot( lags, z(1:end));
53,926 | For two offset sampled data series, what is the best estimate of the offset between them? | The problem is known as "time delay estimation". If the delay ("tau") is assumed to be constant, the most usual techniques are based on searching for the maximum of the cross-correlation function. But there are several refinements, many of which correspond to some "generalized cross-correlation" (which are conceptually similar to applying some pre-equalization to the signals). I've had good results with the "SCOT" generalized correlation. There are many papers; search for "generalized correlation", "time delay", "SCOT", "PHAT".
For example: http://dodreports.com/pdf/ada393232.pdf.
53,927 | Interpretation of coefficients in polynomial regression for predictive modeling | Explanation at UCLA
Another link
I think the general answer is: not that easily. There are ways to interpret the derivative, talk about which way the curve opens, etc. But nothing simple and clear like in the linear model. My hunch is that you shouldn't be modeling this as a quadratic, though.
I would also chuck out the zeros and call your model 'Balances of accounts which have not been closed'.
53,928 | Interpretation of coefficients in polynomial regression for predictive modeling | Like other polynomial models, your model is likely to be worse than a linear model if you extrapolate outside the time for which you have data, as the time * time term is likely to dominate and the sign of its coefficient will determine whether you predict large positive or large negative balances, when for many people balances are rather more stable.
53,929 | Interpretation of coefficients in polynomial regression for predictive modeling | As Henry politely/correctly said, the problem with this dated approach of fitting linear, square, cubic, et cetera terms is that you are forcing/fitting potentially (always, in my opinion) unwarranted deterministic structure onto the model. You "pay" for the fitting when you either interpolate or predict. Consider what happens when you fit a cubic to 4 data points. The fit is perfect / all coefficients are significant / the r-square is 1.00, but the forecast is more than likely ridiculous. A simpler and much more correct approach is to model the y variable as a function of its past and as a function of user-suggested input series (including any necessary lags), and also to incorporate any empirically identified Intervention Series such as Pulses, Level Shifts, Seasonal Pulses and/or Local Time Trends. This is known by many names, e.g. Transfer Function and ARMAX, to name two.
53,930 | How to analyse a moderator effect in hierarchical multiple regression with covariates? | Terminology and Overview
In the context of multiple regression:
a moderator effect is just an interaction between two predictors, typically created by multiplying the two predictors together, often after first centering the predictors.
a covariate is just a predictor that was not used in the formation of the moderator and that is conceptualised as something that needs to be controlled for.
Thus, you should be able to run a hierarchical regression with moderators and covariates in just about any statistical software that supports multiple regression.
Typical approach to testing moderator effect after controlling for covariates
SPSS: If you are doing the hierarchical regression in SPSS, you'd probably enter the predictors in blocks. Here's a tutorial.
R: If you are doing this in R, you'd probably define separate linear models lm each adding additional predictors and use anova to compare the models. Here's a tutorial.
Once you understand hierarchical regression in your chosen tool, a simple recipe would be as follows. Let's assume that you have the following variables:
main effect predictors: IV1 IV2
interaction effect: multiplication of IV1 and IV2
covariates CV1 CV2
In some cases you may need to create the moderator
If you are using SPSS, you will need to multiply the two predictor variables together (e.g., compute iv1byiv2 = iv1 * iv2.). If you want to interpret the regression coefficients, you may find it useful to center iv1 and iv2 before creating the interaction term.
If you are using R, you can just use the notation iv1*iv2 in the linear model notation.
You can then estimate the models
Block/model 1: Enter covariates m1 <- lm(DV~CV1+CV2)
Block/model 2: Enter main effect predictors m2 <- lm(DV~CV1+CV2+IV1+IV2)
Block/model 3: Enter interaction effect m3 <- lm(DV~CV1+CV2+IV1*IV2)
You can then interpret the significance of the r-square change between block 2 and 3 as a test of whether there is an interaction effect: anova(m2, m3)
Simple slopes analysis
If you want to perform simple slopes analysis, you can take the regression formula provided by the final multiple regression and calculate some appropriate values to plot.
You can do this by hand or you can use predict in R. For example, you might calculate the values predicted by the regression equation using the following values
IV1 IV2 CV1 CV2
-2sd -2sd mean mean
-2sd +2sd mean mean
+2sd -2sd mean mean
+2sd +2sd mean mean
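A rough R sketch of that predict step (assuming the data frame used to fit m3 is called dat and the variables are named as in the recipe):
grid <- expand.grid(IV1 = mean(dat$IV1) + c(-2, 2) * sd(dat$IV1),
                    IV2 = mean(dat$IV2) + c(-2, 2) * sd(dat$IV2),
                    CV1 = mean(dat$CV1),
                    CV2 = mean(dat$CV2))
grid$DVhat <- predict(m3, newdata = grid)   # predicted DV at each of the four IV1 x IV2 combinations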
You can then plot these values using whatever plotting tool you like (e.g., R, SPSS, Excel).
Personally, I find Conditioning Plots a better option than simple slopes analysis. R has the coplot function. The idea is to show a scatter plot of the relationship between IV and DV in a set of arranged scatterplots defined by ranges of the moderator. When I searched, I found an example of using conditioning plots for moderator regression on page 585 of the Handbook of Research Methods in Personality Psychology.
53,931 | Logarithms and regression | By virtue of properties of logarithms, the original model,
$$\log(Y/B) = \beta_0 + \beta_1\log(A/B) + \beta_2\log(A/C) + \varepsilon$$
($\varepsilon$ is a zero-mean random variable) can be written
$$\log(Y) - \log(B) = \beta_0 + \beta_1 (\log(A)-\log(B)) + \beta_2(\log(A)-\log(C)) + \varepsilon,$$
which algebraically is identical to
$$\log(Y) = \beta_0 + (\beta_1+\beta_2)\log(A) + (1-\beta_1)\log(B) - \beta_2\log(C) + \varepsilon.$$
Note particularly that nothing funny has happened to the random term $\varepsilon$, so that fitting one (via least squares or maximum likelihood) will give the same results as fitting the other.
Conversely, if you start with the general model
$$\log(Y) = \gamma_0 + \gamma_1\log(A) + \gamma_2\log(B) + \gamma_3\log(C) + \delta,$$
comparing coefficients shows that $\gamma_0 = \beta_0$, $\delta = \varepsilon$, and
$$\gamma_1 + \gamma_2 + \gamma_3 = (\beta_1+\beta_2) + (1-\beta_1) + (-\beta_2) = 1.$$
These are the only relations among the coefficients, as evidenced by the fact you can recover the betas from the gammas in many ways such as
$$\beta_1 = 1 - \gamma_2, \quad \beta_2 = -\gamma_3 \quad \text{or}$$
$$\beta_1 = \gamma_1 + \gamma_3, \quad \beta_2 = \gamma_1 + \gamma_2 -1,$$
for example.
Thus the original model is the general model with a single linear constraint. Either method will result in equivalent fits and the same parameter estimates. Therefore,
Yes, there is a way to include $B$ as an independent variable, as shown; and
The original model makes sense.
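A quick numerical check of this equivalence in R (simulated data; variable names are mine):
set.seed(1)
A <- exp(rnorm(200)); B <- exp(rnorm(200)); C <- exp(rnorm(200))
Y <- B * exp(0.5 + 0.3 * log(A / B) + 0.2 * log(A / C) + rnorm(200, sd = 0.1))
fit <- lm(log(Y / B) ~ log(A / B) + log(A / C))
b <- coef(fit)                                    # beta_0, beta_1, beta_2
g <- c(b[1], b[2] + b[3], 1 - b[2], -b[3])        # implied gamma_0, ..., gamma_3
sum(g[-1])                                        # gamma_1 + gamma_2 + gamma_3 = 1, as derived above
all.equal(unname(fitted(fit) + log(B)),
          unname(g[1] + g[2] * log(A) + g[3] * log(B) + g[4] * log(C)))  # same fitted log(Y)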
53,932 | Logarithms and regression | Just doing
$$ \log(Y) \sim \log(A) + \log(B) + \log(C) $$
would be equivalent to the model you're fitting, under the constraint that the coefficients sum to 1 (thanks whuber for making this clear). The coefficients would be changed to compensate for the redundancies in your current formulation, but the fit would be the same.
Edit: If you ignore the constraint and fit the model above then note that the $R^2$ value will change (and typically be much lower) when compared to your original formulation. This is because $B$ appears on both sides of the regression function, so there may appear to be an impressive $R^2$ regardless of whether $Y$ relates to $A,B,C$. See
A=runif(100)
B=runif(100)
C=runif(100)
Y=runif(100)
g1 = lm( log(Y/B) ~ log(A/B) + log(A/C))
g2 = lm( log(Y) ~ log(A)+log(B)+log(C))
as an example. Clearly none of the variables are related to each other, but model g1 shows a pretty impressive $R^2$ (around $45\%$ in my simulation).
53,933 | Varying group coefficients in lme4 | First of all, AIC/BIC do not make sense in mixed models. I mean, if you can explain what your $n$ that goes into your BIC is (number of groups? number of observations? something in between? how about level 1 and level 2 variables that obviously have different amounts of information in them?)... So I wouldn't pay any attention to these.
Second, I am surprised your model with the random effects for group level variables was identified at all. Let us think about an extreme case: a binary group level variable in a model lmer(Y ~ X + G + (1 + X + G | group)). What is it that it describes? That a group has an additional random shift when G==1, i.e., group-level heteroskedasticity. So that appears to be something rather odd to estimate.
So all in all, I would run this as
M2 <- glmer(Y ~ X1 + X2 + X3 + X4 + G1 + G2 + G3 + G4 + (1 + X1 + X2 + X3 + X4 | group), family=binomial(link="logit"))  # glmer() rather than lmer() for a binomial response in current lme4
i.e., only with the individual-level covariates having random effects assigned to them.
53,934 | Varying group coefficients in lme4 | It's better, I think, that you write your model first; then we can see how to estimate it in R.
Do you want the slope of your model to vary by group? And do you think that the second-level predictors (i.e. predictors at the group level) can be used to predict the varying slopes? If both answers are yes, here is how (I think) you can estimate it in R (I'm not 100% sure about this; I just took a look at the book, at chapter 14):
Assuming only one predictor at level one (for the sake of simplicity) and one predictor at level two, and that the second level is given by group:
Model:
$Pr(y_{i}=1) = a + b_{j}*x_{1}$
$b_{j} = \gamma_{0} + \gamma_{1}*g_{1} + e_{j}$
In R:
glmer(y ~ x1 + (x1+0|group) + g1, family=binomial)
As you can see, the difference from your code is that g1 doesn't go within the random effects. You have to expand them to the individual data level to fit the model.
It happens that in cases like this, it's probably better to use fully Bayesian modeling (using WinBUGS, JAGS or the like), because in this setting I think you can't properly model the second level.
I hope it helps.
53,935 | Text mining software (beyond R) | NLTK is free and reasonably easy to use, though it does require installing Python. It comes with a wide range of text data sets. There's an O'Reilly book for it, which is a help for students.
WEKA is general machine learning software with a nice interface. There are some text classification datasets for it, though not the range of text mining datasets there is for NLTK. There's a book for WEKA too.
53,936 | Text mining software (beyond R) | Take a look at Rapidminer. Good documentation and a good community where people can discuss various questions. Works both on Mac and Windows.
http://rapid-i.com
53,937 | Text mining software (beyond R) | DMelt is a free software package for computation, visualization and data mining. It is designed to work with large data ("big data") and perform statistical analysis and data mining. It is similar to R, but is integrated with a number of programming languages (such as Python).
53,938 | Determining the reliability of weather forecast | There's certainly a huge literature on the accuracy of weather forecasts, including some work that led to widely used general techniques for evaluating predictions.
A data set of historical forecasts from a particular model is available from NOAA. The project sounds very cool, but the data looks quite complex. This is not the forecast from your local TV station!
53,939 | Determining the reliability of weather forecast | I too was after the same type of data a couple of years ago, and amazingly enough I was unable to find the forecasts themselves once the particular date had passed. It means that you'd want to collect the forecasts actively over a period of time. While you're at it, I'd also suggest collecting a few forecast horizons: one day ahead, two days ahead, and so on up to 10 days ahead (some websites offer forecasts that far ahead).
53,940 | Present results from a multinomial model graphically | In case you are nevertheless interested in the coefficients,
and the models in your "bunch" are comparable to each other (e.g. out of bootstrap or iterated/repeated cross validation, or a model ensemble),
then you could plot the coefficients over the variates (boxplot, mean ± standard deviation, ...):
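For example, here is a generic sketch of such a plot that does not need hyperSpec (iris and the bootstrap loop are just stand-ins for your own collection of models):
library(MASS)
set.seed(42)
cf <- t(replicate(50, {
  i <- sample(nrow(iris), replace = TRUE)          # one bootstrap resample
  lda(Species ~ ., data = iris[i, ])$scaling[, 1]  # coefficients of the first discriminant
}))
boxplot(cf, las = 2, ylab = "LD 1 coefficient")    # one box per variate
abline(h = 0, lty = 2)                             # note: LD directions can flip sign between resamples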
Of course, if you have only a few variates, you may summarize your table along these lines.
In addition, you can multiply your coefficients with your (test) data element wise and plot those results over the variates. This can give you an idea which variate contributes how strongly to the final result (works already with just one model):
I've been using this to discuss LDA models (that's why the graphic says LD coefficient), but it basically works the same as long as the scores (results $\beta X$) can be brought to have the same meaning across the models and there's a coefficient for each variate. Note that I work with spectroscopic data sets, so I have the additional advantage that the variates have an intrinsic order with physical meaning (the spectral dimension, e.g. wavelength or wavenumber as in the example).
If you need details, here's the whole story.
And here's a code example (the data set is not pre-processed, so the model probably doesn't make much sense)
library (hyperSpec) # I'm working with spectra
# and use the chondro data set
## make a model
library (MASS)
model <- lda (clusters ~ spc, data = chondro$.) # this is a really terrible
# thing to do: the data set
# has only rank 10!
## make the coefficient plot
coef <- decomposition (chondro, t (coef (model)), scores = FALSE)
plot (coef, stacked = TRUE)
decomposition makes a hyperSpec object from the coefficients. If you are not working with spectra, you may want to plot coef (model) directly
If I have more models, I plot e.g. mean ± 1 sd of the coefficients
Now the contribution spectra:
contributions <- lapply (1 : 2,
function (i) sweep (chondro, 2, coef [[i,]], `*`)
)
contributions <- do.call (rbind, contributions)
contributions$coef <- rep (1 : 2, each = nrow (chondro))
tmp <- aggregate (contributions,
list (contributions$clusters, contributions$coef),
FUN = mean_pm_sd)
cols <- c ("dark blue", "orange", "#C02020")
plotspc (tmp, stacked = ".aggregate", fill = ".aggregate",
col = rep (cols, 2))
53,941 | Present results from a multinomial model graphically | What you are typically interested in is not the coefficients but probably marginal effects or something similar. Here is a package that plots these for you:
http://cran.r-project.org/web/packages/effects
I think it was featured in R magazine, but the examples are picturesque enough.
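For example, a small sketch with a multinomial model fitted via nnet::multinom (iris and the formula are placeholders for your own data):
library(nnet)     # multinom()
library(effects)  # effect displays, including for multinom fits
fit <- multinom(Species ~ Sepal.Length + Sepal.Width, data = iris, trace = FALSE)
plot(allEffects(fit))   # predicted category probabilities as each predictor varies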
53,942 | Meaning of the "existence" proof | Existence proofs are notoriously difficult to justify and appreciate. One reason is that they don't seem to have any consequences. Whether or not you read and understand an existence proof does not change your subsequent work with a given stochastic process because that work relies on properties of the process and not the fact that it exists.
The purpose of existence proofs is to justify the mathematical foundation of working with a given process. Without these proofs it is like in Star Trek. They can do all sorts of cool things, but in reality the materials and technology do not exist. Hence it is all fiction. If your stochastic process does not exist, that is, if there is no mathematical object with a given set of properties, your subsequent work is all fiction. So it is a fundamental step in science to justify that we are doing science and not science fiction.
Edit: In response to @Nick's comment. When it comes to stochastic processes it is, indeed, of central importance to be able to produce an infinite dimensional measure from a consistent family of finite dimensional distributions. This is what Kolmogorov's consistency theorem is about. This theorem, or a variation of it, often lurks in the background even if it is not used explicitly. For instance, a stochastic process could be a solution to a stochastic differential equation (SDE) involving Brownian motion. Then the existence of the process is a question about whether the SDE has a solution - based on the existence of Brownian motion. The existence of Brownian motion can be proved using Kolmogorov's consistency theorem.
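For reference, one standard formulation of that theorem reads roughly as follows (stated loosely; see a textbook for the exact measurability conditions): if for every finite collection of indices $t_1,\dots,t_n \in T$ we are given a probability measure $\mu_{t_1,\dots,t_n}$ on $\mathbb{R}^n$, and the family is consistent under permutations of the indices and under marginalization (integrating out the last coordinate of $\mu_{t_1,\dots,t_n,t_{n+1}}$ returns $\mu_{t_1,\dots,t_n}$), then there exists a probability measure $P$ on $\mathbb{R}^T$ whose finite dimensional distributions are exactly the $\mu_{t_1,\dots,t_n}$; in other words, a stochastic process with those finite dimensional distributions exists.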
There are, however, alternative ways to obtain existence of processes. My favorite alternative is via compactness arguments.
I kept my reply above non-specific because it goes for all other mathematical topics as well. Take, for instance, the existence of the uniform distribution on $[0,1]$, which is absolutely non-trivial, or the existence of the real numbers for that matter. I wonder how many on this site have proved that the real numbers exist, or do we just take for granted that we can "fill in the gaps" in the rational numbers to get completeness?
N.B. Just to clarify, by the first line in my original reply it was not my intention to imply that an existence proof cannot be justified or appreciated, but to a newcomer it may look like a soccer match where the players are given an object and the referee says it's a ball, but then the players use the entire first half to clarify that, indeed, it is a ball before they start playing.
53,943 | Plotting top 5 most frequent factors using R | This seems more of a question for SO, but anyway:
EDIT: reproducible, and a lot simpler
f <- data.frame(c=factor(sample(rep(letters[1:8], 10),40)))
t <- table(f$c) # frequency of values in f$c
plot(sort(t, decreasing=TRUE)[1:5], type="h")
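A bar chart may read a little more naturally for frequencies; assuming the same f as above:
barplot(sort(table(f$c), decreasing = TRUE)[1:5],
        ylab = "frequency", xlab = "factor level")   # top 5 levels only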
53,944 | Random effect nested under fixed effect model in R | It doesn't make sense to both include tank as a random effect and nest tank within the pop/temp fixed effect. You only need one of these, depending on how tank is coded.
If tank is coded 1-8, you only need the tank random effect. Nesting it within the pop/temp fixed effect results in the same 8 units, so is not necessary.
If tank is coded 1-2 (that is, which rep it was), you only need to nest tank within the pop/temp fixed effect, because that gives you your 8 unique tanks. Including the tank random effect is only desired if the tanks were first divided into two groups and then randomized to treatment; if the eight tanks were completely randomized to treatment, this is not necessary.
You could do this with likelihood-based solutions such as those in nlme and lme4, but if everything is balanced, it might be simpler to use the traditional ANOVA approach using aov.
Creating some sample data:
set.seed(5)
d <- within(expand.grid(pop=factor(c("A","B")),
temp=factor(c("warm", "cold")),
rep=1:2,
fish=1:100), {
tank <- factor(paste(pop, temp, rep, sep="."))
tanke <- round(rnorm(nlevels(tank))[unclass(tank)],1)
e <- round(rnorm(length(pop)),1)
m <- 10 + 2*as.numeric(pop)*as.numeric(temp)
growth <- m + tanke + e
})
Using aov like this:
a0 <- aov(growth ~ pop*temp + Error(tank), data=d)
summary(a0)
or lme like this:
library(nlme)
m1 <- lme(growth ~ pop*temp, random=~1|tank, data=d)
anova(m1)
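or, if you prefer lme4, a sketch of the equivalent call (same fixed effects, random intercept per tank):
library(lme4)
m2 <- lmer(growth ~ pop*temp + (1 | tank), data = d)
anova(m2)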
53,945 | Random effect nested under fixed effect model in R | Because the treatment (pop*temp) does not vary within levels of the random effect (tank), this is a simple nested design (I think). Unless you are particularly interested in the amount of between-tank variation, it will be much easier just to aggregate the data to the level of tank and then run a simple (non-mixed) linear model -- the statistical inferences should be identical to the tests of the fixed effects in the mixed ANOVAs above.
Using the sample data above (I'm sure there's an easier way to do the aggregation):
d2 <- rename(cast(subset(d,select=c(pop,tank,temp,growth)),
pop+temp+tank~.,fun.agg=mean,value="growth"),
c("(all)"="meangrowth"))
m1 <- lm(meangrowth~pop*temp,data=d2)
summary(m1)
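The cast() call above comes from the (older) reshape package, so it needs library(reshape) first; if you prefer to stay in base R, here is a sketch of the same aggregation:
d2b <- aggregate(growth ~ pop + temp + tank, data = d, FUN = mean)
names(d2b)[names(d2b) == "growth"] <- "meangrowth"
m1b <- lm(meangrowth ~ pop*temp, data = d2b)
summary(m1b)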
See Murtaugh (2007), "Simplicity and complexity in ecological data analysis", Ecology 88(1), pp. 56–62.
53,946 | Random effect nested under fixed effect model in R | Did you try: lmer(growth~1+treatment+(1|tank)+(0+treatment|tank))
53,947 | Understanding variance estimators | The idea of the unbiased variance estimate is to have $E(s^{2})=\sigma^{2}$, where the expectation is with respect to the sampling distribution of $s^{2}$ or, equivalently, with respect to the sampling distribution of $x_1,\dots,x_N$. So if we knew the true mean and the true variance, but not the value of $s^{2}$, then $s^2$ would have an expected value of $\sigma^{2}$.
Now we have:
$$E(s^{2})=E\left[ \frac{1}{N+k}\sum_i (x_i-\bar{x})^2\right]
=\frac{1}{N+k}E\left[\sum_i x_i^2-N\bar{x}^{2}\right]$$
$$=\frac{1}{N+k}\left[\sum_i E(x_i^2)-NE(\bar{x}^{2})\right]$$
We can use the identity $E(Y^{2})=V(Y)+[E(Y)]^{2}$, and the fact that we know $E(x_i)=\mu$ and $V(x_i)=\sigma^{2}$, and the summation becomes:
$$\sum_i E(x_i^2)=\sum_i V(x_i)+[E(x_i)]^2=\sum_i (\sigma^2+\mu^2)=N(\sigma^2+\mu^2)$$
Now to calculate the second expectation, we re-write $NE(\bar{x}^{2})$ as follows:
$$NE(\bar{x}^{2})=NE\left(\left[\frac{1}{N}\sum_i x_i\right]^{2}\right)=\frac{1}{N}E\left(\sum_i \sum_j x_jx_i\right)=\frac{1}{N}\sum_i \sum_j E(x_jx_i)$$
$$=\frac{1}{N}\left(\sum_i E(x_i^2)+\sum_{i\neq j} E(x_jx_i)\right)=(\sigma^2+\mu^2)+\frac{1}{N}\sum_{i\neq j} E(x_jx_i)$$
Now we can use another identity, $E(YZ)=Cov(Y,Z)+E(Y)E(Z)$. Your problem statement does not specify whether or not the sample is independent, but it does say that they have the same distribution. So we can take $Cov(x_i,x_j)=\rho\sigma^2$ for some correlation $-\frac{1}{N-1}\leq\rho\leq 1$ (the lower bound is required for a positive variance). We then get:
$$NE(\bar{x}^{2})=(\sigma^2+\mu^2)+\frac{1}{N}\sum_{i\neq j} (\rho\sigma^2+\mu^2)=(\sigma^2+\mu^2)+\frac{N(N-1)}{N}(\rho\sigma^2+\mu^2)$$
$$=\sigma^2(1+(N-1)\rho)+N\mu^{2}$$
Putting this together we have:
$$E(s^2)=\frac{1}{N+k}\left[N(\sigma^2+\mu^2)-\left(\sigma^2(1+(N-1)\rho)+N\mu^{2}\right)\right]=\frac{N-1}{N+k}\sigma^2(1-\rho)$$
So if we choose $k=-1$ and we assume that $\rho=0$ (i.e. independence), then we have $E(s^2)=\sigma^2$, and we get an unbiased estimate for $s^2$. However, if we assume that $\rho=-\frac{1}{N-1}$ (i.e. the sum is fixed), then we get $E(s^2)=\frac{N-1}{N+k}\sigma^2(1+\frac{1}{N-1})=\frac{N}{N+k}\sigma^2$ and we should set $k=0$ for an unbiased estimate. So one can interpret the intuition behind $N-1$ as accounting for the fact that the true mean has been estimated by the sample mean (and hence is not "fixed").
For minimum mean square error, we require $MSE(s^2)=E\left[(s^2-\sigma^2)^2\right]$ to be a minimum for some choice of $k$. Expanding the mean square error we get:
$$MSE(s^2)=E[s^4]-2\sigma^2E[s^2]+\sigma^4$$
$E(s^2)$ has been calculated already, so now we calculate $E(s^4)$. Squaring $s^2$ gives us:
$$s^4=\frac{1}{(N+k)^2}\left[\sum_i x_i^2-N\bar{x}^{2}\right]^2$$
$$=\frac{1}{(N+k)^2}\left(\left[\sum_i x_i^2\right]^2-2\left[\sum_i x_i^2\right]\left[N\bar{x}^{2}\right]+\left[N\bar{x}^{2}\right]^2\right)$$
$$=\frac{\left[\sum_i x_i^4+\sum_{i\neq j} x_j^2x_i^2\right]-2\frac{1}{N}\left[\sum_i x_i^2\right]\left[\sum_i x_i^2+\sum_{i\neq j} x_jx_i\right]+\frac{1}{N^2}\left[\sum_i x_i^2+\sum_{i\neq j} x_jx_i\right]^2}{(N+k)^2}$$
$$=\frac{(\frac{1}{N}-1)^2\left[\sum_i x_i^4+\sum_{i\neq j} x_j^2x_i^2\right]+2\frac{1}{N}(\frac{1}{N}-1)\left[\sum_i x_i^2\right]\left[\sum_{i\neq j} x_jx_i\right]+\frac{1}{N^2}\left[\sum_{i\neq j} x_jx_i\right]^2}{(N+k)^2}$$
$$=\frac{f(x_1,\dots,x_N)}{(N+k)^2}$$
And you can see that, without further assumptions, the expectation will in general be a function of the fourth-order moments $E(x_ix_jx_kx_l),E(x_i^2x_jx_k),E(x_i^3x_j),E(x_i^2x_j^2),E(x_i^4)$ (which are not given in the question). However, its dependence on $k$ is quite simple, so we can still solve the minimisation problem algebraically by writing $F=E[f(x_1,\dots,x_N)]$. So we have:
$$MSE(s^2)=\frac{F}{(N+k)^2}-2\sigma^2\left[\frac{N-1}{N+k}\sigma^2(1-\rho)\right]+\sigma^4$$
Taking the derivative with respect to $k$, setting it to zero, and solving for $k$:
$$-2\frac{F}{(N+k)^3}+2\frac{N-1}{(N+k)^2}\sigma^4(1-\rho)=0$$
$$\implies k=\frac{F}{(N-1)\sigma^4(1-\rho)}-N$$
This shows that unless $F=c\sigma^4(1-\rho)$, where $c$ depends only on the sample size, the optimum value of $k$ will be a function of the parameters, and hence you have no "solution" per se because it depends on things you don't know. You can show that if you assume an independent normal distribution for $x_i$ (so $\rho=0$), then $F=(N^2-1)\sigma^4$ and you get $k=+1$ as the optimum value.
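A quick simulation sketch of the two headline results for independent normal data (the values of $N$, the mean and the variance are arbitrary toy choices):
set.seed(1)
N <- 10; sims <- 1e5
x <- matrix(rnorm(N * sims, mean = 5, sd = 2), ncol = N)   # true sigma^2 = 4
ss <- rowSums((x - rowMeans(x))^2)                         # sum of squared deviations per sample
mean(ss / (N - 1))                  # close to 4: k = -1 is unbiased
mean(ss / N)                        # close to 4*(N-1)/N: k = 0 is biased downward
mean((ss / (N - 1) - 4)^2)          # MSE with k = -1 ...
mean((ss / (N + 1) - 4)^2)          # ... is larger than with k = +1, the minimum-MSE choice under normality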
53,948 | Understanding variance estimators | What's the problem with just dividing by $N$? You don't take into account that you aren't subtracting the true population mean off of each $x_i$, but rather an estimate of it.
One way that I like to think about it is, suppose that I gave you the sample mean $\bar{x}$. How many data points $N-k$ would I have to give you so that you could tell me the exact values of the remaining $k$? Well, a mean gives one equation involving every observation and, if there was only one unknown observation, we could solve the equation. In summary, knowing the mean and $N - 1$ data points is the same as knowing every data point.
In calculating the sample variance, I know the mean, so it's like I only have $N - 1$ effective data points giving me information; the last one I could guess using the others plus the mean. We divide by the number of effective data points that we have, $N-1$. This is known as a degrees-of-freedom correction (we only have $N-1$ degrees of freedom, parameters that we don't know, given that we know the mean).
Now, here's the math: Let
$$\begin{equation*}
s^2 = \frac{1}{N}\sum_{i=1}^N{(y_i - \bar{y})^2}.
\end{equation*}$$
Then,
$$\begin{align*}
E[s^2] &= E\left[ \frac{1}{N}\sum_{i=1}^N{(y_i - \bar{y})^2} \right] \\
&= E\left[\frac{1}{N}\sum_{i=1}^N{\left((y_i - \mu) - (\bar{y} - \mu)\right)^2} \right] \\
&= \frac{1}{N}\sum_{i=1}^NE\left[(y_i - \mu)^2\right] - 2\frac{1}{N}E\left[(\bar{y} - \mu)\sum_{i=1}^N{(y_i - \mu)}\right] \\
&\qquad + E\left[(\bar{y} - \mu)^2\right] \\
&= \frac{1}{N}\sum_{i=1}^NE\left[(y_i - \mu)^2\right] - E\left[(\bar{y} - \mu)^2\right] \\
&= \text{Var}(y_i) - \frac{\text{Var}(y_i)}{N} = \frac{N-1}{N}\text{Var}(y_i).
\end{align*}$$
This uses the fact that the variance of the sample mean is the variance of $y_i$ divided by $N$.
Hence, an unbiased estimator requires multiplying $s^2$ by $N/(N-1)$, giving the equation that you sought.
As you mention, dividing by $N-1$ is close to dividing by $N$; the two get close as $N$ gets big. Hence, $s^2$ is a consistent estimator---its bias goes to 0 as $N$ gets big.
53,949 | Dependent variable is a function of independent variables; can I sensibly include them in a regression? | I see no problem with fitting the regression. We do regressions because we believe that the predictors may be related to the response; here you just have more knowledge to begin with.
But what questions are you actually trying to answer? The fact that certain coefficients are significant is not surprising, so those were not really interesting questions to begin with. What may be interesting is whether they differ from a value other than 0 (namely, the weight used in the GPA). That could tell you whether they have an indirect effect in addition to the known effect; e.g., the math score could be related to a science score that is not in your model but contributes to the GPA.
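A small sketch of why the comparison against the known weight is the interesting one: below, GPA is an exact, equally weighted average of three made-up course scores that share a common ability factor, so the full regression returns the weights exactly, while omitting one course shifts the remaining coefficients away from their weights (the indirect effect mentioned above):
set.seed(1)
ability <- rnorm(100, 0, 10)
g <- sapply(c(math = 1, english = 1, science = 1),
            function(w) 70 + w * ability + rnorm(100, 0, 5))
gpa <- rowMeans(g)                            # known weights of 1/3 each
coef(lm(gpa ~ g))                             # recovers exactly 1/3 per course, R^2 = 1
coef(lm(gpa ~ g[, c("math", "english")]))     # omit science: its share loads onto the correlated courses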
53,950 | Dependent variable is a function of independent variables; can I sensibly include them in a regression? | Another point to consider: what enables a student to do well in one course is related to what enables him/her to do well in another. There are overarching factors (cognitive, personality, circumstances) that play some role in determining each of the individual course grades. So using regression (to see how X1 relates to GPA while controlling for X2, X3, X4, etc.) will "cannibalize" each X-Y relationship, partialling a portion of the relationship right out of itself. The coefficients you obtain will be, in Tukey and Mosteller's words, "arbitrary nonsense." Here's how Elazar Pedhazur puts it (Multiple Regression in Behavioral Research, 3rd Ed., 170-2):
“Partial correlation is not an all-purpose method of control […] Controlling variables without regard to the theoretical considerations about the pattern of relations among them may yield misleading or meaningless results […] It makes no sense to control for one measure of mental ability, say, while correlating another measure of mental ability with academic achievement when the aim is to study the relation between mental ability and academic achievement. […] This is tantamount to partialling a relation out of itself and may lead to the fallacious conclusion that mental ability and academic achievement are not correlated.”
I'd recommend studying bivariate correlations among the variables over a regression predicting GPA. Predicting each individual course grade (@probabilityislogic's idea) also seems very worth doing. @Manoel's factor analysis idea makes me pause because you may not have all the variables necessary to map out the key underlying factors.
53,951 | Dependent variable is a function of independent variables; can I sensibly include them in a regression? | Strong exogeneity is a term related to dynamic models, i.e. when there is time-series data involved. Since you are doing a one-time survey, the term does not apply. What might be a problem with the regression, though, is omitted variable bias. Since GPA is a weighted average, a purely arithmetical formula applies:
$$GPA = w_1 G_1+...+ w_n G_n$$
where $w_i$ are the weights and $G_i$ are the grades. This equation is not stochastic. However, we can say that each grade is determined by the student's ability plus a stochastic term:
$$G_i=f_i(\mathbf{A})+\varepsilon_i$$
where $\mathbf{A}$ is a vector of variables which determine the student's ability, and $f_i$ is the functional form of the relationship.
When viewed in this light, it makes no sense to include the grades in the regression. What might be of interest is how the different functional relationships $f_i$ aggregate into the overall relationship $f$:
$$GPA=f(\mathbf{A})+\varepsilon$$
But this does not answer the questions in your given example. So, as others suggested, it is better to use factor analysis.
53,952 | Online method for detecting wave amplitude | Rather than a complete solution, this is meant to be a very rough series of "hints" on how to implement one using the FFT; there are probably better methods, but if it works...
First of all let's generate the wave, with varying frequency and amplitude
freqs <- c(0.2, 0.05, 0.1)
x <- NULL
y <- NULL
for (n in 1:length(freqs))
{
tmpx <- seq(n*100, (n+1)*100, 0.1)
x <- c(x, tmpx)
y <- c(y, sin(freqs[n] * 2*pi*tmpx))
}
y <- y * rep(1:5, each = ceiling(length(x)/5))[seq_along(x)]  # amplitude grows in 5 steps (length(x) need not be divisible by 5)
y <- y + rnorm(length(x), 0, 0.2)
plot(x, y, "l")
Which gives us this
Now, if we calculate the FFT of the wave using fft and then plot it (I used the plotFFT function I posted here) we get:
Note that I overplotted the 3 frequencies (0.05, 0.1 and 0.2) with which I generated the data. As the data is sinusoidal the FFT does a very good job in retrieving them. Note that this works best when the y-values are 0 centered.
Now, if we do a sliding FFT with a window of 50, we get
As expected, at the beginning we only get the 0.2 frequency (first 2 plots, so between 0 and 100); as we go on we get the 0.05 frequency (100-200), and finally the 0.1 frequency comes about (200-300).
The power of the FFT function is proportional to the amplitude of the wave. In fact, if we write down the maximum in each window we get:
1 Max frequency: 0.2 - power: 254
2 Max frequency: 0.2 - power: 452
3 Max frequency: 0.04 - power: 478
4 Max frequency: 0.04 - power: 606
5 Max frequency: 0.1 - power: 1053
6 Max frequency: 0.1 - power: 1253
===
This can also be achieved using an STFT (short-time Fourier transform), which is basically the same thing I showed you before, but with overlapping windows.
This is implemented, for instance, by the evolfft function of the RSEIS package.
It would give you:
stft <- evolfft(y, dt=0.1, Nfft=2048, Ns=100, Nov=90, fl=0, fh=0.5)
plotevol(stft)
This, however, may be trickier to analyze, especially online.
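If all you need is the amplitude rather than the frequency, a much lighter online option is a running RMS: for a sine of amplitude $A$, the average of $y^2$ over a window spanning whole periods is about $A^2/2$, so $\sqrt{2\,\overline{y^2}}$ tracks the amplitude. A sketch on the same y as above (the window width of 200 samples is a guess you would tune to your period):
w <- 200                                                      # 200 samples = 20 time units at dt = 0.1
amp <- sqrt(2 * stats::filter(y^2, rep(1/w, w), sides = 1))   # causal (one-sided) moving average of y^2
plot(x, amp, type = "l", ylab = "estimated amplitude")        # steps from about 1 up to about 5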
Hope this helps somehow.
53,953 | Online method for detecting wave amplitude | In answer to another question on this forum I referred the OP to this site where there is open source code for a function called HT_PERIOD to measure the instantaneous period of a time series. There are also functions called HT_PHASE, HT_PHASOR and HT_SINE to respectively measure the phase, the phasor components of the sine wave/cyclic component and to extract the sine wave itself (scaled between -1 and 1) of a time series. As the calculations in these functions are causal they would be appropriate for an on-line function that updates as new data comes in. The code for these functions might be of help to you.
53,954 | Elasticity of log-log regression | whuber has made the point in the comment.
If $\log_e(Y) = B_0 + B_1\log_e(X) + U$ and $U$ is independent of $X$ then taking the partial derivative with respect to $X$ gives $\frac{\partial Y}{\partial X}\cdot\frac{1}{Y} = B_1\frac{1}{X}$, i.e. $B_1 = \frac{\partial Y}{\partial X}\cdot\frac{X}{Y}$.
$E_{y,x} = \lim_{\Delta X \rightarrow 0} \frac{\Delta Y / y}{\Delta X / x}$, which is the same thing. Take absolute values if you want to avoid negative elasticities.
53,955 | Elasticity of log-log regression | Responding 9 years later in case anyone is interested
If we consider $X = e^{{\log{X}}}$ then
$$\beta = \frac{d \log{Y}}{d \log X} = \frac{d \log{Y}}{d Y} \frac{d Y}{d X} \frac{dX}{d \log{X}}= \frac{1}{Y} \frac{dY}{dX} \frac{X}{1}= \frac{X}{Y} \frac{dY}{dX} = \text{elasticity}$$ | Elasticity of log-log regression | Responding 9 years later in case anyone is interested
If we consider $X = e^{{\log{X}}}$ then
$$\beta = \frac{d \log{Y}}{d \log X} = \frac{d \log{Y}}{d Y} \frac{d Y}{d X} \frac{dX}{d \log{X}}= \frac | Elasticity of log-log regression
Responding 9 years later in case anyone is interested
If we consider $X = e^{{\log{X}}}$ then
$$\beta = \frac{d \log{Y}}{d \log X} = \frac{d \log{Y}}{d Y} \frac{d Y}{d X} \frac{dX}{d \log{X}}= \frac{1}{Y} \frac{dY}{dX} \frac{X}{1}= \frac{X}{Y} \frac{dY}{dX} = \text{elasticity}$$ | Elasticity of log-log regression
Responding 9 years later in case anyone is interested
If we consider $X = e^{{\log{X}}}$ then
$$\beta = \frac{d \log{Y}}{d \log X} = \frac{d \log{Y}}{d Y} \frac{d Y}{d X} \frac{dX}{d \log{X}}= \frac |
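A quick numerical check of this interpretation in R (the data below are simulated for illustration, not taken from the question): generate data with a known constant elasticity and confirm that the log-log slope recovers it.
# True model: Y = X^2 * multiplicative noise, so the elasticity
# (dY/dX)(X/Y) equals 2 everywhere.
set.seed(1)
x <- runif(1000, 1, 10)
y <- x^2 * exp(rnorm(1000, sd = 0.1))
coef(lm(log(y) ~ log(x)))["log(x)"]   # approximately 2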
53,956 | Statistical equivalence in the birthday paradox | In a group of 23 people, all pairs must involve just those 23 people: the pairs are thus mathematically (and statistically) dependent. On the other hand, 253 pairs chosen independently and randomly from the 366*365/2 possible pairs will typically involve around 100 separate people. This (strong) dependency means we cannot use simple formulas for combining probabilities.
This vague statement is in the Wikipedia article to counter the false intuition some people have that birthday collisions in small groups must be rare. It is, as the article notes, not at all rigorous. | Statistical equivalence in the birthday paradox | In a group of 23 people, all pairs must involve just those 23 people: the pairs are thus mathematically (and statistically) dependent. On the other hand, 253 pairs chosen independently and randomly f | Statistical equivalence in the birthday paradox
In a group of 23 people, all pairs must involve just those 23 people: the pairs are thus mathematically (and statistically) dependent. On the other hand, 253 pairs chosen independently and randomly from the 366*365/2 possible pairs will typically involve around 100 separate people. This (strong) dependency means we cannot use simple formulas for combining probabilities.
This vague statement is in the Wikipedia article to counter the false intuition some people have that birthday collisions in small groups must be rare. It is, as the article notes, not at all rigorous. | Statistical equivalence in the birthday paradox
In a group of 23 people, all pairs must involve just those 23 people: the pairs are thus mathematically (and statistically) dependent. On the other hand, 253 pairs chosen independently and randomly f |
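A short R check of the two quantities being contrasted above: the exact probability of at least one shared birthday among 23 people, and the answer obtained by pretending the 253 pairs are independent.
1 - prod((365 - 0:22) / 365)   # exact, 365 equally likely days: about 0.507
1 - (364 / 365)^253            # treating the 253 pairs as independent: about 0.500
The two numbers happen to be close, which is why the heuristic is tempting, but because of the dependence they are not computing the same thing.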
53,957 | Expected value of certain exponential transformation of standard normal variable | If we let $Y = -\mu X$, then $Y$ is distributed as $N(0, \mu^2)$, and $e^Y$ is a lognormal random variable with parameters $0, \mu^2$. The expected value of a lognormal with parameters $a, b^2$ is $e^{a + b^2/2}$, so $E(e^{-\mu X}) = E(e^Y) = e^{\mu^2/2}$. The given result follows immediately. | Expected value of certain exponential transformation of standard normal variable | If we let $Y = -\mu X$, then $Y$ is distributed as $N(0, \mu^2)$, and $e^Y$ is a lognormal random variable with parameters $0, \mu^2$. The expected value of a lognormal with parameters $a, b^2$ is $e^ | Expected value of certain exponential transformation of standard normal variable
If we let $Y = -\mu X$, then $Y$ is distributed as $N(0, \mu^2)$, and $e^Y$ is a lognormal random variable with parameters $0, \mu^2$. The expected value of a lognormal with parameters $a, b^2$ is $e^{a + b^2/2}$, so $E(e^{-\mu X}) = E(e^Y) = e^{\mu^2/2}$. The given result follows immediately. | Expected value of certain exponential transformation of standard normal variable
If we let $Y = -\mu X$, then $Y$ is distributed as $N(0, \mu^2)$, and $e^Y$ is a lognormal random variable with parameters $0, \mu^2$. The expected value of a lognormal with parameters $a, b^2$ is $e^ |
53,958 | Expected value of certain exponential transformation of standard normal variable | The answer is right, but to make it even more obvious: imagine $Y = -\mu X - 0.5\mu^2$; then it follows that $Y \sim N(-0.5\mu^2, \mu^2)$ as above, and thus the expectation of the lognormal is $e^{-0.5\mu^2 + 0.5\mu^2} = e^0 = 1$. | Expected value of certain exponential transformation of standard normal variable | The answer is right, but to make it even more fairly obvious. Imaginae Y = -muX -0.5mu^2, then it follows Y~N(-0.5mu^2,mu^2) as above, and thus expectation of lognormal is e^(-0.5mu^2+0.5mu^2) thus = | Expected value of certain exponential transformation of standard normal variable
The answer is right, but to make it even more obvious: imagine $Y = -\mu X - 0.5\mu^2$; then it follows that $Y \sim N(-0.5\mu^2, \mu^2)$ as above, and thus the expectation of the lognormal is $e^{-0.5\mu^2 + 0.5\mu^2} = e^0 = 1$. | Expected value of certain exponential transformation of standard normal variable
The answer is right, but to make it even more fairly obvious. Imaginae Y = -muX -0.5mu^2, then it follows Y~N(-0.5mu^2,mu^2) as above, and thus expectation of lognormal is e^(-0.5mu^2+0.5mu^2) thus = |
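A one-line Monte Carlo check of the identity $E(e^{-\mu X}) = e^{\mu^2/2}$ used above, with an arbitrary $\mu$ chosen purely for illustration.
set.seed(1)
mu <- 0.7
mean(exp(-mu * rnorm(1e6)))   # simulated expectation
exp(mu^2 / 2)                 # theoretical value, about 1.2776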
53,959 | Excel's confidence interval function throws #NUM! when standard deviation is 0 | This behavior is questionable but documented. The help for "confidence" states:
If standard_dev ≤ 0, CONFIDENCE returns the #NUM! error value.
...
If we assume alpha equals 0.05, we need to calculate the area under the standard normal
curve that equals (1 - alpha), or 95 percent. This value is ± 1.96. The confidence
interval is therefore:
$$\bar{x} \pm 1.96\left(\frac{\sigma}{\sqrt{n}}\right).$$
(Yes, this is badly phrased, but that's a direct quote.)
To overcome these (somewhat artificial) limitations, compute the confidence limits yourself (according to this formula) as
=AVERAGE(X) + NORMSINV(1-0.05/2) * STDEV(X)/SQRT(COUNT(X))
=AVERAGE(X) - NORMSINV(1-0.05/2) * STDEV(X)/SQRT(COUNT(X))
where 'X' names a range containing your data (such as B33:I33) and '0.05' is $\alpha$ (the complement of the desired confidence), just as before. In your case, because STDEV(X) is 0, both limits will equal the mean. This is legitimate, although it has its own problems (because it almost surely fails to cover the true mean). | Excel's confidence interval function throws #NUM! when standard deviation is 0 | This behavior is questionable but documented. The help for "confidence" states:
If standard_dev ≤ 0, CONFIDENCE returns the #NUM! error value.
...
If we assume alpha equals 0.05, we need to cal | Excel's confidence interval function throws #NUM! when standard deviation is 0
This behavior is questionable but documented. The help for "confidence" states:
If standard_dev ≤ 0, CONFIDENCE returns the #NUM! error value.
...
If we assume alpha equals 0.05, we need to calculate the area under the standard normal
curve that equals (1 - alpha), or 95 percent. This value is ± 1.96. The confidence
interval is therefore:
$$\bar{x} \pm 1.96\left(\frac{\sigma}{\sqrt{n}}\right).$$
(Yes, this is badly phrased, but that's a direct quote.)
To overcome these (somewhat artificial) limitations, compute the confidence limits yourself (according to this formula) as
=AVERAGE(X) + NORMSINV(1-0.05/2) * STDEV(X)/SQRT(COUNT(X))
=AVERAGE(X) - NORMSINV(1-0.05/2) * STDEV(X)/SQRT(COUNT(X))
where 'X' names a range containing your data (such as B33:I33) and '0.05' is $\alpha$ (the complement of the desired confidence), just as before. In your case, because STDEV(X) is 0, both limits will equal the mean. This is legitimate, although it has its own problems (because it almost surely fails to cover the true mean). | Excel's confidence interval function throws #NUM! when standard deviation is 0
This behavior is questionable but documented. The help for "confidence" states:
If standard_dev ≤ 0, CONFIDENCE returns the #NUM! error value.
...
If we assume alpha equals 0.05, we need to cal |
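For reference, the same two limits computed in R rather than Excel, on a small made-up sample of eight ratings (the data are hypothetical).
x <- c(3, 3, 2, 4, 3, 3, 2, 4)   # hypothetical ratings
mean(x) + c(-1, 1) * qnorm(1 - 0.05 / 2) * sd(x) / sqrt(length(x))
# If all eight values were identical, sd(x) would be 0 and both limits
# would collapse to the mean, as described above.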
53,960 | Excel's confidence interval function throws #NUM! when standard deviation is 0 | Let's assume all your 8 subjects chose to answer 3 on the (1, 2, 3, 4, 5) scale. Let's assume that their opinions were continuous in their minds, and they rounded them to the closest value of the scale.
This means that the original opinion of each subject was in the range $[2.5, 3.5)$.
> mean(replicate(1e5, diff(range(rnorm(8)))))
[1] 2.841661
> mean(replicate(1e5, diff(range(rnorm(8)))))
[1] 2.847447
> 1 / 2.845
[1] 0.3514938
The above simulation shows that if you take 8 samples from a normal distribution of sd 0.35 they will cover an interval of the approximate width of 1.
Thus in your population the sd is likely to be 0.35 or less. Rounding to one of 1, 2, 3, 4, 5 is not precise enough to measure the sd in this case. | Excel's confidence interval function throws #NUM! when standard deviation is 0 | Let's assume all your 8 subjects chose to answer 3 on the (1, 2, 3, 4, 5) scale. Let's assume that their opinions were continuous in their minds, and they rounded it to the closest values of the scale | Excel's confidence interval function throws #NUM! when standard deviation is 0
Let's assume all your 8 subjects chose to answer 3 on the (1, 2, 3, 4, 5) scale. Let's assume that their opinions were continuous in their minds, and they rounded them to the closest value of the scale.
This means that the original opinion of each subject was in the range $[2.5, 3.5)$.
> mean(replicate(1e5, diff(range(rnorm(8)))))
[1] 2.841661
> mean(replicate(1e5, diff(range(rnorm(8)))))
[1] 2.847447
> 1 / 2.845
[1] 0.3514938
The above simulation shows that if you take 8 samples from a normal distribution of sd 0.35 they will cover an interval of the approximate width of 1.
Thus in your population the sd is likely to be 0.35 or less. Rounding to one of 1, 2, 3, 4, 5 is not precise enough to measure the sd in this case. | Excel's confidence interval function throws #NUM! when standard deviation is 0
Let's assume all your 8 subjects chose to answer 3 on the (1, 2, 3, 4, 5) scale. Let's assume that their opinions were continuous in their minds, and they rounded it to the closest values of the scale |
53,961 | Excel's confidence interval function throws #NUM! when standard deviation is 0 | If eight samples from a distribution are exactly the same, it is probably not a normal distribution, or you are rounding at a higher order of magnitude than that of the standard deviation. Or are you calculating means on a numerically coded ordinal scale? | Excel's confidence interval function throws #NUM! when standard deviation is 0 | If eight samples from a distribution are exactly the same it is probably not a normal distribution or you use rounding at a higher order of magnitude than of the standard deviation. Or are you calcula | Excel's confidence interval function throws #NUM! when standard deviation is 0
If eight samples from a distribution are exactly the same, it is probably not a normal distribution, or you are rounding at a higher order of magnitude than that of the standard deviation. Or are you calculating means on a numerically coded ordinal scale? | Excel's confidence interval function throws #NUM! when standard deviation is 0
If eight samples from a distribution are exactly the same it is probably not a normal distribution or you use rounding at a higher order of magnitude than of the standard deviation. Or are you calcula |
53,962 | Excel's confidence interval function throws #NUM! when standard deviation is 0 | Let's suppose that you have a number of instances in which the average rating is 3. Each of these will have a variance -- if the raters all answered "3" then that variance will be zero. In such cases, why not use the average of the variances in which the average rating is 3 (including your 0 value)? This will give you a real number and a reasonable confidence interval. I would use median rather than mean to "average" the variances, since it is less subject to extremes (although extremes would be unlikely on a fixed 5 point scale).
Of course, you might decide that any average rating in some range (such as 2.5 to 3.499) counts as "3" in order to give you more values to average.
This procedure is simple and intuitive. I like whuber's approach as well, but then somebody is going to ask you "why 95%? why not some other %". You are less likely to get this question if you take a simple average. | Excel's confidence interval function throws #NUM! when standard deviation is 0 | Let's suppose that you have a number of instances in which the average rating is 3. Each of these will have a variance -- if the raters all answered "3" then that variance will be zero. In such case | Excel's confidence interval function throws #NUM! when standard deviation is 0
Let's suppose that you have a number of instances in which the average rating is 3. Each of these will have a variance -- if the raters all answered "3" then that variance will be zero. In such cases, why not use the average of the variances in which the average rating is 3 (including your 0 value)? This will give you a real number and a reasonable confidence interval. I would use median rather than mean to "average" the variances, since it is less subject to extremes (although extremes would be unlikely on a fixed 5 point scale).
Of course, you might decide that any average rating in some range (such as 2.5 to 3.499) counts as "3" in order to give you more values to average.
This procedure is simple and intuitive. I like whuber's approach as well, but then somebody is going to ask you "why 95%? why not some other %". You are less likely to get this question if you take a simple average. | Excel's confidence interval function throws #NUM! when standard deviation is 0
Let's suppose that you have a number of instances in which the average rating is 3. Each of these will have a variance -- if the raters all answered "3" then that variance will be zero. In such case |
53,963 | Test for the effect of UN ratification on corruption in a set of countries | It's probably not a good idea without digging deeper into the source surveys. TI themselves note that changes in a country's index can come from either a change in corruption or just a change in methodology of the sources that they use. The sources themselves change over time as well, so comparing the index from year to year is actually fairly complicated to do properly.
See the references at http://en.wikipedia.org/wiki/Corruption_Perceptions_Index | Test for the effect of UN ratification on corruption in a set of countries | It's probably not a good idea without digging deeper into the source surveys. TI themselves note that changes in a country's index can come from either a change in corruption or just a change in metho | Test for the effect of UN ratification on corruption in a set of countries
It's probably not a good idea without digging deeper into the source surveys. TI themselves note that changes in a country's index can come from either a change in corruption or just a change in methodology of the sources that they use. The sources themselves change over time as well, so comparing the index from year to year is actually fairly complicated to do properly.
See the references at http://en.wikipedia.org/wiki/Corruption_Perceptions_Index | Test for the effect of UN ratification on corruption in a set of countries
It's probably not a good idea without digging deeper into the source surveys. TI themselves note that changes in a country's index can come from either a change in corruption or just a change in metho |
53,964 | Test for the effect of UN ratification on corruption in a set of countries | One problem with your pre-post paired t-test idea is that it could give a small p-value if there's a general upward (or downward) trend in the corruption score over time regardless of ratification. You need some sort of comparison group in which the ratification status did not change between the two time points.
In principle, one approach to forming a comparison group could be to find a matched 'control' country that didn't change its ratification status during the same time period for each country (of a sample of those) that did. The 'control' country should have similar baseline corruption score, and ideally be as similar as possible in other respects that might affect the rate of change of corruption over time (that's what I mean by matching). I haven't looked at the data so I've no idea if this is feasible. You'd want to consider removing the country that changed status from your analysis if you can't find a 'control' country that matches closely enough (but what's 'close enough'?).
In any case it certainly wouldn't be simple, but drawing valid and defensible conclusions about causal effects from observational data seldom is. I agree with @JMS that it's probably not worth attempting in your situation. | Test for the effect of UN ratification on corruption in a set of countries | One problem with your pre-post paired t-test idea is that it chould give a small p-value if there's a general upward (or downward) trend in the corruption score over time regardless of ratification. Y | Test for the effect of UN ratification on corruption in a set of countries
One problem with your pre-post paired t-test idea is that it could give a small p-value if there's a general upward (or downward) trend in the corruption score over time regardless of ratification. You need some sort of comparison group in which the ratification status did not change between the two time points.
In principle, one approach to forming a comparison group could be to find a matched 'control' country that didn't change its ratification status during the same time period for each country (of a sample of those) that did. The 'control' country should have similar baseline corruption score, and ideally be as similar as possible in other respects that might affect the rate of change of corruption over time (that's what I mean by matching). I haven't looked at the data so I've no idea if this is feasible. You'd want to consider removing the country that changed status from your analysis if you can't find a 'control' country that matches closely enough (but what's 'close enough'?).
In any case it certainly wouldn't be simple, but drawing valid and defensible conclusions about causal effects from observational data seldom is. I agree with @JMS that it's probably not worth attempting in your situation. | Test for the effect of UN ratification on corruption in a set of countries
One problem with your pre-post paired t-test idea is that it chould give a small p-value if there's a general upward (or downward) trend in the corruption score over time regardless of ratification. Y |
53,965 | Interpreting interactions between two treatments | If I understand you correctly, your design is:
$\begin{array}{rcccl}
~ & B_{X} & B_{B} & M \\\hline
A_{X} & \mu_{11} & \mu_{12} & \mu_{1.} \\
A_{A} & \mu_{21} & \mu_{22} & \mu_{2.} \\\hline
M & \mu_{.1} & \mu_{.2} & \mu
\end{array}$
The first part of your hypothesis (effect of treatment B within control group of A) then means that $H_{1}^{1}: \mu_{12} - \mu_{11} > 0$.
The second part of your hypothesis (no effect of treatment B within treatment A) would then be $H_{1}^{2}: \mu_{22} - \mu_{21} = 0$.
So your composite hypothesis is $H_{1}: H_{1}^{1} \wedge H_{1}^{2}$. The problem is with the second part because a non-significant post-hoc test for $H_{0}: \mu_{22} - \mu_{21} = 0$ doesn't mean that there is no effect - your test simply might not have enough power to detect the difference.
You could still test the hypothesis $H_{1}': (\mu_{12} - \mu_{11}) > (\mu_{22} - \mu_{21})$, i.e., an interaction contrast. However, this tests the weaker hypothesis that B has a bigger effect within A's control group than within treatment A.
I'm not sure what you mean by "the results of the analysis show that only Group 3 was significantly different than the others". I don't understand how exactly you would test that. You could test $\mu_{12} \neq \frac{1}{3} (\mu_{11} + \mu_{21} + \mu_{22})$, but that is a weaker hypothesis (Group 3 is different from the average of the remaining groups). | Interpreting interactions between two treatments | If I understand you correctly, your design is:
$\begin{array}{rcccl}
~ & B_{X} & B_{B} & M \\\hline
A_{X} & \mu_{11} & \mu_{12} & \mu_{1.} \\
A_{A} & \mu_{21} & \mu_{22} & \mu_{2.} \\ | Interpreting interactions between two treatments
If I understand you correctly, your design is:
$\begin{array}{rcccl}
~ & B_{X} & B_{B} & M \\\hline
A_{X} & \mu_{11} & \mu_{12} & \mu_{1.} \\
A_{A} & \mu_{21} & \mu_{22} & \mu_{2.} \\\hline
M & \mu_{.1} & \mu_{.2} & \mu
\end{array}$
The first part of your hypothesis (effect of treatment B within control group of A) then means that $H_{1}^{1}: \mu_{12} - \mu_{11} > 0$.
The second part of your hypothesis (no effect of treatment B within treatment A) would then be $H_{1}^{2}: \mu_{22} - \mu_{21} = 0$.
So your composite hypothesis is $H_{1}: H_{1}^{1} \wedge H_{1}^{2}$. The problem is with the second part because a non-significant post-hoc test for $H_{0}: \mu_{22} - \mu_{21} = 0$ doesn't mean that there is no effect - your test simply might not have enough power to detect the difference.
You could still test the hypothesis $H_{1}': (\mu_{12} - \mu_{11}) > (\mu_{22} - \mu_{21})$, i.e., an interaction contrast. However, this tests the weaker hypothesis that B has a bigger effect within A's control group than within treatment A.
I'm not sure what you mean by "the results of the analysis show that only Group 3 was significantly different than the others". I don't understand how exactly you would test that. You could test $\mu_{12} \neq \frac{1}{3} (\mu_{11} + \mu_{21} + \mu_{22})$, but that is a weaker hypothesis (Group 3 is different from the average of the remaining groups). | Interpreting interactions between two treatments
If I understand you correctly, your design is:
$\begin{array}{rcccl}
~ & B_{X} & B_{B} & M \\\hline
A_{X} & \mu_{11} & \mu_{12} & \mu_{1.} \\
A_{A} & \mu_{21} & \mu_{22} & \mu_{2.} \\ |
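A hedged R sketch of testing the interaction contrast $H_{1}': (\mu_{12} - \mu_{11}) > (\mu_{22} - \mu_{21})$ discussed above. The data frame d and its columns y, A, B are assumed names, and the contrast weights must be matched to the actual ordering of the cell factor levels.
library(multcomp)
d$cell <- interaction(d$A, d$B)     # the four cells of the 2x2 design
fit <- lm(y ~ 0 + cell, data = d)   # cell-means coding: one coefficient per cell mean
# Weights for (mu12 - mu11) - (mu22 - mu21); reorder to match levels(d$cell).
K <- rbind(interaction_contrast = c(-1, 1, 1, -1))
summary(glht(fit, linfct = K, alternative = "greater"))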
53,966 | Interpreting interactions between two treatments | If in post hoc testing Group 3's mean was significantly different from all the others' then you've already shown that XB is different from AB. Am I missing something? Your statement about B's effect (and its being lost when combined with A's) would be correct. | Interpreting interactions between two treatments | If in post hoc testing Group 3's mean was significantly different from all the others' then you've already shown that XB is different from AB. Am I missing something? Your statement about B's effect | Interpreting interactions between two treatments
If in post hoc testing Group 3's mean was significantly different from all the others' then you've already shown that XB is different from AB. Am I missing something? Your statement about B's effect (and its being lost when combined with A's) would be correct. | Interpreting interactions between two treatments
If in post hoc testing Group 3's mean was significantly different from all the others' then you've already shown that XB is different from AB. Am I missing something? Your statement about B's effect |
53,967 | Effectively fitting this kind of model: $y = c_1 (x_3 - x_4) + c_2 (x_1 - x_9)$ | (This response picks up where @AVB, who has provided useful comments, left off by suggesting we need to figure out which differences $X_i - X_j$ ought to be included among the independent variables.)
The big question here is what is an effective method to identify the model. Later we can worry about faster methods. (But regression is so fast that you could process dozens of variables for millions of records in a matter of seconds.)
To make sure I'm not going astray, and to illustrate the procedure, I simulated a dataset like yours, only a little simpler. It consists of 60 independent draws from a common multivariate normal distribution with five unit-variance variables $Z_1, Z_2, Z_3, Z_4,$ and $Y$. The first two variables are independent of the second two and have correlation coefficient 0.9. The second two variables have correlation coefficient -0.9. The correlations between $Z_i$ and $Y$ are 0.5, 0.5, 0.5, and -0.5. Then--this changes nothing essential but makes the data a little more interesting--I rescaled the variables, thus: $X_1 = Z_1, X_2 = 2 Z_2, X_3 = 3 Z_3, X_4 = 4 Z_4$.
Let's begin by establishing that this simulation emulates the stated problem. Here is a scatterplot matrix.
The full regression of $Y$ against the $X_i$ is highly significant ($F(4, 55) = 15.28,\ p < 0.0001$) but all four t-values equal 1.24 ($p = 0.222$), which is not significant at all. The estimated coefficients are 0.26, 0.13, 0.088, and -0.066 (rounded to two sig figs).
Here is my proposal: systematically combine variables in pairs (six pairs in this case, 36 pairs for nine variables), one pair at a time. Regress a pair along with all remaining variables, seeking highly significant results for the pairs.
What is a "pair"? It is the linear combination suggested by the estimated coefficients. In this case, they are
$$\eqalign{
X_{12} =& X_1 / 0.26 &+ X_2 / 0.13 \cr
X_{13} =& X_1 / 0.26 &+ X_3 / 0.088 \cr
X_{14} =& X_1 / 0.26 &- X_4 / 0.066 \cr
X_{23} =& X_2 / 0.13 &+ X_3 / 0.088 \cr
X_{24} =& X_2 / 0.13 &- X_4 / 0.066 \cr
X_{34} =& X_3 / 0.088 &- X_4 / 0.066 \text{.}
}$$
In general, with $\hat{\beta}_i$ representing the estimated coefficient of $X_i$ in this full regression, the pairs are defined by
$$X_{ij} = X_i / \hat{\beta}_i + X_j / \hat{\beta}_j\text{.}$$
This is so systematic that it's straightforward to script.
The "identification regressions" are the model
$$Y \sim X_{12} + X_3 + X_4$$
along with the five additional permutations thereof, one for each pair.
You are looking for results where $X_{ij}$ becomes significant: ignore the significance of the remaining $X_k$. To see what's going on, I will list the results of all six identification regressions for the simulation. As a shorthand, I list the variables followed by a vector of their t-values only:
$$\eqalign{
X_{12}, X_3, X_4:&\ (5.50, 1.24, -1.24) \cr
X_{13}, X_2, X_4:&\ (1.36, 4.94, -1.13) \cr
X_{14}, X_2, X_3:&\ (1.31, 5.16, 1.17) \cr
X_{23}, X_1, X_4:&\ (1.64, 3.10, -1.09) \cr
X_{24}, X_1, X_3:&\ (1.50, 4.15, 1.07) \cr
X_{34}, X_1, X_2:&\ (5.56, 1.25, 1.25)
}
$$
As you can see from the first component of each vector (the t-value for the pair), precisely two disjoint pairs exhibit significant t-statistics: $X_{12}$, with $t = 5.50\ (p \lt 0.001)$, and $X_{34}$, with $t = 5.56\ (p \lt 0.001)$. The model thus identified is
$$Y \sim X_{12} + X_{34}\text{.}$$
(In general, we would also include--provisionally--any remaining $X_i$ not participating in any of the pairs. There aren't any in this case.)
The regression results are
$$\eqalign{
\hat{\beta_{12}} &= 0.027\ (t = 5.54,\ p \lt 0.001) \cr
\hat{\beta_{34}} &= 0.0055\ (t = 5.58,\ p \lt 0.001), \cr
F(2, 57) &= 30.92\ (p \lt 0.0001).
}$$
Translating back to the original $X_i$, the model is
$$\eqalign{
Y &= 0.027(X_1 / 0.26 + X_2 / 0.13) + 0.0055(X_3 / 0.088 - X_4 / 0.066) \cr
&= 0.103 X_1 + 0.206 X_2 + 0.0629 X_3 - 0.0839 X_4 \cr
&= 0.103 (Z_1 + Z_2) + 0.021 (Z_3 - Z_4) \text{.}
}$$
(The last line shows how this all relates to form of the original question.) That's exactly the form used in the simulation: $Z_1$ and $Z_2$ enter with the same coefficient and $Z_3$ and $Z_4$ enter with opposite coefficients. This method got the right answer.
I want to share a cool observation in this regard. First, here's the scatterplot matrix for the model.
Notice how $X_{12}$ and $X_{34}$ look uncorrelated. Furthermore, $Y$ is only weakly correlated with these variables. Doesn't look like much of a relationship, does it? Now consider an alternative set of pairs, $X_{13}$ and $X_{24}$. The regression of $Y$ on these is still highly significant ($F(2, 57) = 16.61\ (p \lt 0.0001).$ Moreover, the coefficient of $X_{24}$ is significant ($t = 2.39,\ p = 0.020$) even though that of $X_{13}$ is not ($t = 0.24,\ p = 0.812$). But look at the scatterplot matrix!
Clearly $X_{13}$ and $X_{24}$ are strongly correlated. But, even though this is the wrong model, $Y$ is also visibly correlated with these two variables, much more so than in the preceding scatterplot matrix!
The lesson here is that mere bivariate plots can be deceiving in a multiple regression setting: to analyze the relationship between any candidate independent variable (such as $X_{12}$) and the dependent variable ($Y$), we must make sure to "factor out" all other independent variables. (This is done by regressing $Y$ on all other independent variables and, separately, regressing $X_{12}$ on all the others. Then one looks at a scatterplot of the residuals of the first regression against the residuals of the second regression. It's a theorem that the slope in this bivariate regression equals the coefficient of $X_{12}$ in the full multivariate regression of $Y$ against all the variables.)
This insight shows why we might want to systematically perform the "identification regressions" I have proposed, rather than using graphical methods or attempting to combine many of the pairs in one model. Each identification regression assesses the strength of the contribution of a proposed linear combination of variables (a "pair") in the context of all the remaining independent variables.
Note that although correlated variables were involved, correlation is not an essential feature of the problem or of the solution. Even where you don't expect the original variables $X_i$ to be strongly correlated, you could expect a model to have (unknown) linear constraints among the variables. That is the important issue to cope with. The presence of correlation only means that it can be problematic to identify such pairs solely by inspecting the original regression results.
Following the procedure I have proposed does not guarantee you will find a unique solution. It's conceivable, for instance, that you will find so many highly significant pairs that they are linearly dependent, forcing you to select among them by some other criterion. Nevertheless, the results you get ought to limit the sets of pairs you need to examine; they can be obtained with a straightforward procedure without intervention; and--if this simulation is any guide--they have a good chance of producing effective results. | Effectively fitting this kind of model: $y = c_1 (x_3 - x_4) + c_2 (x_1 - x_9)$ | (This response picks up where @AVB, who has provided useful comments, left off by suggesting we need to figure out which differences $X_i - X_j$ ought to be included among the independent variables.)
| Effectively fitting this kind of model: $y = c_1 (x_3 - x_4) + c_2 (x_1 - x_9)$
(This response picks up where @AVB, who has provided useful comments, left off by suggesting we need to figure out which differences $X_i - X_j$ ought to be included among the independent variables.)
The big question here is what is an effective method to identify the model. Later we can worry about faster methods. (But regression is so fast that you could process dozens of variables for millions of records in a matter of seconds.)
To make sure I'm not going astray, and to illustrate the procedure, I simulated a dataset like yours, only a little simpler. It consists of 60 independent draws from a common multivariate normal distribution with five unit-variance variables $Z_1, Z_2, Z_3, Z_4,$ and $Y$. The first two variables are independent of the second two and have correlation coefficient 0.9. The second two variables have correlation coefficient -0.9. The correlations between $Z_i$ and $Y$ are 0.5, 0.5, 0.5, and -0.5. Then--this changes nothing essential but makes the data a little more interesting--I rescaled the variables, thus: $X_1 = Z_1, X_2 = 2 Z_2, X_3 = 3 Z_3, X_4 = 4 Z_4$.
Let's begin by establishing that this simulation emulates the stated problem. Here is a scatterplot matrix.
The full regression of $Y$ against the $X_i$ is highly significant ($F(4, 55) = 15.28,\ p < 0.0001$) but all four t-values equal 1.24 ($p = 0.222$), which is not significant at all. The estimated coefficients are 0.26, 0.13, 0.088, and -0.066 (rounded to two sig figs).
Here is my proposal: systematically combine variables in pairs (six pairs in this case, 36 pairs for nine variables), one pair at a time. Regress a pair along with all remaining variables, seeking highly significant results for the pairs.
What is a "pair"? It is the linear combination suggested by the estimated coefficients. In this case, they are
$$\eqalign{
X_{12} =& X_1 / 0.26 &+ X_2 / 0.13 \cr
X_{13} =& X_1 / 0.26 &+ X_3 / 0.088 \cr
X_{14} =& X_1 / 0.26 &- X_4 / 0.066 \cr
X_{23} =& X_2 / 0.13 &+ X_3 / 0.088 \cr
X_{24} =& X_2 / 0.13 &- X_4 / 0.066 \cr
X_{34} =& X_3 / 0.088 &- X_4 / 0.066 \text{.}
}$$
In general, with $\hat{\beta}_i$ representing the estimated coefficient of $X_i$ in this full regression, the pairs are defined by
$$X_{ij} = X_i / \hat{\beta}_i + X_j / \hat{\beta}_j\text{.}$$
This is so systematic that it's straightforward to script.
The "identification regressions" are the model
$$Y \sim X_{12} + X_3 + X_4$$
along with the five additional permutations thereof, one for each pair.
You are looking for results where $X_{ij}$ becomes significant: ignore the significance of the remaining $X_k$. To see what's going on, I will list the results of all six identification regressions for the simulation. As a shorthand, I list the variables followed by a vector of their t-values only:
$$\eqalign{
X_{12}, X_3, X_4:&\ (5.50, 1.24, -1.24) \cr
X_{13}, X_2, X_4:&\ (1.36, 4.94, -1.13) \cr
X_{14}, X_2, X_3:&\ (1.31, 5.16, 1.17) \cr
X_{23}, X_1, X_4:&\ (1.64, 3.10, -1.09) \cr
X_{24}, X_1, X_3:&\ (1.50, 4.15, 1.07) \cr
X_{34}, X_1, X_2:&\ (5.56, 1.25, 1.25)
}
$$
As you can see from the first component of each vector (the t-value for the pair), precisely two disjoint pairs exhibit significant t-statistics: $X_{12}$, with $t = 5.50\ (p \lt 0.001)$, and $X_{34}$, with $t = 5.56\ (p \lt 0.001)$. The model thus identified is
$$Y \sim X_{12} + X_{34}\text{.}$$
(In general, we would also include--provisionally--any remaining $X_i$ not participating in any of the pairs. There aren't any in this case.)
The regression results are
$$\eqalign{
\hat{\beta_{12}} &= 0.027\ (t = 5.54,\ p \lt 0.001) \cr
\hat{\beta_{34}} &= 0.0055\ (t = 5.58,\ p \lt 0.001), \cr
F(2, 57) &= 30.92\ (p \lt 0.0001).
}$$
Translating back to the original $X_i$, the model is
$$\eqalign{
Y &= 0.027(X_1 / 0.26 + X_2 / 0.13) + 0.0055(X_3 / 0.088 - X_4 / 0.066) \cr
&= 0.103 X_1 + 0.206 X_2 + 0.0629 X_3 - 0.0839 X_4 \cr
&= 0.103 (Z_1 + Z_2) + 0.021 (Z_3 - Z_4) \text{.}
}$$
(The last line shows how this all relates to form of the original question.) That's exactly the form used in the simulation: $Z_1$ and $Z_2$ enter with the same coefficient and $Z_3$ and $Z_4$ enter with opposite coefficients. This method got the right answer.
I want to share a cool observation in this regard. First, here's the scatterplot matrix for the model.
Notice how $X_{12}$ and $X_{34}$ look uncorrelated. Furthermore, $Y$ is only weakly correlated with these variables. Doesn't look like much of a relationship, does it? Now consider an alternative set of pairs, $X_{13}$ and $X_{24}$. The regression of $Y$ on these is still highly significant ($F(2, 57) = 16.61\ (p \lt 0.0001).$ Moreover, the coefficient of $X_{24}$ is significant ($t = 2.39,\ p = 0.020$) even though that of $X_{13}$ is not ($t = 0.24,\ p = 0.812$). But look at the scatterplot matrix!
Clearly $X_{13}$ and $X_{24}$ are strongly correlated. But, even though this is the wrong model, $Y$ is also visibly correlated with these two variables, much more so than in the preceding scatterplot matrix!
The lesson here is that mere bivariate plots can be deceiving in a multiple regression setting: to analyze the relationship between any candidate independent variable (such as $X_{12}$) and the dependent variable ($Y$), we must make sure to "factor out" all other independent variables. (This is done by regressing $Y$ on all other independent variables and, separately, regressing $X_{12}$ on all the others. Then one looks at a scatterplot of the residuals of the first regression against the residuals of the second regression. It's a theorem that the slope in this bivariate regression equals the coefficient of $X_{12}$ in the full multivariate regression of $Y$ against all the variables.)
This insight shows why we might want to systematically perform the "identification regressions" I have proposed, rather than using graphical methods or attempting to combine many of the pairs in one model. Each identification regression assesses the strength of the contribution of a proposed linear combination of variables (a "pair") in the context of all the remaining independent variables.
Note that although correlated variables were involved, correlation is not an essential feature of the problem or of the solution. Even where you don't expect the original variables $X_i$ to be strongly correlated, you could expect a model to have (unknown) linear constraints among the variables. That is the important issue to cope with. The presence of correlation only means that it can be problematic to identify such pairs solely by inspecting the original regression results.
Following the procedure I have proposed does not guarantee you will find a unique solution. It's conceivable, for instance, that you will find so many highly significant pairs that they are linearly dependent, forcing you to select among them by some other criterion. Nevertheless, the results you get ought to limit the sets of pairs you need to examine; they can be obtained with a straightforward procedure without intervention; and--if this simulation is any guide--they have a good chance of producing effective results. | Effectively fitting this kind of model: $y = c_1 (x_3 - x_4) + c_2 (x_1 - x_9)$
(This response picks up where @AVB, who has provided useful comments, left off by suggesting we need to figure out which differences $X_i - X_j$ ought to be included among the independent variables.)
|
53,968 | Effectively fitting this kind of model: $y = c_1 (x_3 - x_4) + c_2 (x_1 - x_9)$ | You can not use the 36-coefficient model, and not because it's going to be slow. Speed is the least of your worries here.
The real trouble is that you've taken an already under-determined problem (because of the correlations), and converted it into a problem which is severely under-determined for any data, because of linear dependencies. Simply put, $x_1-x_2=(x_1-x_3)-(x_2-x_3)$, so you can only determine 2 out of the 3 coefficients for these terms in the best case. The only way to fix this will be to prescribe some artificial regularization condition, like having minimal $\sum c_i^2$, or whatever may be right in your case.
It seems to me that you might be better off if you start by analyzing the correlation matrix and first figuring out which terms of the form $x_i-x_j$ should really appear in your problem. | Effectively fitting this kind of model: $y = c_1 (x_3 - x_4) + c_2 (x_1 - x_9)$ | You can not use the 36-coefficient model, and not because it's going to be slow. Speed is the least of your worries here.
The real trouble is that you've taken an already under-determined problem (be | Effectively fitting this kind of model: $y = c_1 (x_3 - x_4) + c_2 (x_1 - x_9)$
You can not use the 36-coefficient model, and not because it's going to be slow. Speed is the least of your worries here.
The real trouble is that you've taken an already under-determined problem (because of the correlations), and converted it into a problem which is severely under-determined for any data, because of linear dependencies. Simply put, $x_1-x_2=(x_1-x_3)-(x_2-x_3)$, so you can only determine 2 out of the 3 coefficients for these terms in the best case. The only way to fix this will be to prescribe some artificial regularization condition, like having minimal $\sum c_i^2$, or whatever may be right in your case.
It seems to me that you might be better off if you start by analyzing the correlation matrix and first figuring out which terms of the form $x_i-x_j$ should really appear in your problem. | Effectively fitting this kind of model: $y = c_1 (x_3 - x_4) + c_2 (x_1 - x_9)$
You can not use the 36-coefficient model, and not because it's going to be slow. Speed is the least of your worries here.
The real trouble is that you've taken an already under-determined problem (be |
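A rough R automation of the "identification regression" recipe described in the longer answer above; the data frame dat, the response y and the predictor names are assumptions. Each pair $X_{ij} = X_i/\hat{\beta}_i + X_j/\hat{\beta}_j$ is built from the full fit and then regressed together with the remaining variables, reporting the pair's t-statistic.
xs   <- c("x1", "x2", "x3", "x4")   # predictor names (assumed)
full <- lm(reformulate(xs, response = "y"), data = dat)
b    <- coef(full)[xs]
for (pair in combn(xs, 2, simplify = FALSE)) {
  dat$pr <- dat[[pair[1]]] / b[pair[1]] + dat[[pair[2]]] / b[pair[2]]
  fit <- lm(reformulate(c("pr", setdiff(xs, pair)), response = "y"), data = dat)
  cat(paste(pair, collapse = "+"), "t =",
      round(summary(fit)$coefficients["pr", "t value"], 2), "\n")
}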
53,969 | Is there a limit function in R? [closed] | All links in this answer are dead.
Following the advice of this answer on the r-help mailing list, you can find examples in the sympy help file in the rSymPy package and in the Ryacas vignette in the Ryacas package.
In general, all of the work that I do in R requires mathematical computations, and R does an excellent job of facilitating this work. | Is there a limit function in R? [closed] | All links in this answer are dead.
Following the advice of this answer on the r-help mailing list, you can find examples in the sympy help file in the rSymPy and in the Ryacas vignette in the Ryacas p | Is there a limit function in R? [closed]
All links in this answer are dead.
Following the advice of this answer on the r-help mailing list, you can find examples in the sympy help file in the rSymPy package and in the Ryacas vignette in the Ryacas package.
In general, all of the work that I do in R requires mathematical computations, and R does an excellent job of facilitating this work. | Is there a limit function in R? [closed]
All links in this answer are dead.
Following the advice of this answer on the r-help mailing list, you can find examples in the sympy help file in the rSymPy and in the Ryacas vignette in the Ryacas p |
53,970 | Is there a limit function in R? [closed] | As @David mentioned, R is not doing symbolic calculations on its own, and such jobs should be done in other programs or with wrapper packages.
Yet, calculating a limit numerically is usually quite easy -- in most cases you just need to find an argument $n$ large enough that the difference $f(n+1)-f(n)$ is smaller than the desired accuracy -- this can be done in a simple loop. | Is there a limit function in R? [closed] | As @David mentioned, R is not doing symbolic calculations on its own, and such jobs should be done in other programs or with wrapper packages.
Yet, calculating limit numerically is usually quite easy | Is there a limit function in R? [closed]
As @David mentioned, R is not doing symbolic calculations on its own, and such jobs should be done in other programs or with wrapper packages.
Yet, calculating a limit numerically is usually quite easy -- in most cases you just need to find an argument $n$ large enough that the difference $f(n+1)-f(n)$ is smaller than the desired accuracy -- this can be done in a simple loop. | Is there a limit function in R? [closed]
As @David mentioned, R is not doing symbolic calculations on its own, and such jobs should be done in other programs or with wrapper packages.
Yet, calculating limit numerically is usually quite easy |
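A minimal version of that loop, using an example sequence; note that a small successive difference does not strictly guarantee the same accuracy in the limit itself.
f <- function(n) (1 + 1 / n)^n   # example sequence; its limit is e
n <- 1
while (abs(f(n + 1) - f(n)) > 1e-8) n <- n + 1
c(n = n, value = f(n))           # f(n) is approximately exp(1)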
53,971 | Aggregation-Level in AB-Tests | If the treatment is randomly assigned the aggregation won't matter in determining the effect of the treatment (or the average treatment effect). I use lowercase in the following examples to refer to disaggregated items and uppercase to refer to aggregated items. Let's a priori state a model of individual decision making, where $y$ is the outcome of interest, and $x$ represents when an observation received the treatment;
$y = \alpha + b_1(x) + b_2(z) + e$
When one aggregates, one is simply summing random variables. So one would observe;
$\sum y = \sum\alpha + \beta_1(\sum x) + \beta_2(\sum z) + \sum e$
So what is to say that $\beta_1$ (divided by its total number of elements, $n$) will equal $b_1$? Because by the nature of random assignment all of the individual components of $x$ are orthogonal (i.e. the variance of $(\sum x)$ is simply the sum of the individual variances), and all of the individual components are uncorrelated with any of the $z$'s or $e$'s in the above equation.
Perhaps using an example of summing two random variables will be more informative. So say we have a case where we aggregate two random variables from the first equation presented. So what we observe is;
$(y_i + y_j) = (\alpha_1 + \alpha_2) + \beta_1(x_i + x_j) + \beta_2(z_i + z_j) + (e_1 + e_2)$
This can subsequently be broken down into its individual components;
$(y_i + y_j) = \alpha_1 + \alpha_2 + b_1(x_i) + b_2(x_j) + b_3(z_i) + b_4(z_j) + e_1 + e_2$
By the nature of random assignment we expect $x_i$ and $x_j$ in the above statement to be independent of all the other parameters ($z_i$, $z_j$, $e_1$, etc.) and each other. Hence the effect of the aggregated data is equal to the effect of the data disaggregated (or $\beta_1$ equals the sum of $b_1$ and $b_2$ divided by two in this case).
This exercise is informative though to see where the aggregation bias will come into play. Anytime the components of that aggregated variable are not independent of the other components you are creating an inherent confound in the analysis (e.g. you can not independently identify the effects of each individual item). So going with your "blue day" scenario one might have a model of individual behavior;
$y = \alpha + b_1(x) + \beta_2(Z) + b_3(x*Z) + e$
Where $Z$ refers to whether the observation was taken on blue day and $x*Z$ is the interaction of the treatment effect with it being blue day. This should be fairly obvious why it would be problematic if you take all of your observations on one day. If treatment is randomly assigned $b_1(x)$ and $\beta_2(Z)$ should be independent, but $b_1(x)$ and $b_3(x*Z)$ are not. Hence you will not be able to uniquely identify $b_1$, and the research design is inherently confounded.
You could potentially make a case for doing the data analysis on the aggregated items (aggregated values tend to be easier to work with and find correlations, less noisy and tend to have easier distributions to model). But if the real questions is to identify $b_1(x)$, then the research design should be structured to appropriately identify it. While I made an argument above for why it does not matter in a randomized experiment, in many settings the argument that all of the individual components are independent is violated. If you expect specific effects on specific days, aggregation of the observations will not help you identify the treatment effect (it is actually a good argument to prolong the observations to make sure no inherent confounds are present). | Aggregation-Level in AB-Tests | If the treatment is randomly assigned the aggregation won't matter in determining the effect of the treatment (or the average treatment effect). I use lowercase in the following examples to refer to d | Aggregation-Level in AB-Tests
If the treatment is randomly assigned the aggregation won't matter in determining the effect of the treatment (or the average treatment effect). I use lowercase in the following examples to refer to disaggregated items and uppercase to refer to aggregated items. Let's a priori state a model of individual decision making, where $y$ is the outcome of interest, and $x$ represents when an observation received the treatment;
$y = \alpha + b_1(x) + b_2(z) + e$
When one aggregates, one is simply summing random variables. So one would observe;
$\sum y = \sum\alpha + \beta_1(\sum x) + \beta_2(\sum z) + \sum e$
So what is to say that $\beta_1$ (divided by its total number of elements, $n$) will equal $b_1$? Because by the nature of random assignment all of the individual components of $x$ are orthogonal (i.e. the variance of $(\sum x)$ is simply the sum of the individual variances), and all of the individual components are uncorrelated with any of the $z$'s or $e$'s in the above equation.
Perhaps using an example of summing two random variables will be more informative. So say we have a case where we aggregate two random variables from the first equation presented. So what we observe is;
$(y_i + y_j) = (\alpha_1 + \alpha_2) + \beta_1(x_i + x_j) + \beta_2(z_i + z_j) + (e_1 + e_2)$
This can subsequently be broken down into its individual components;
$(y_i + y_j) = \alpha_1 + \alpha_2 + b_1(x_i) + b_2(x_j) + b_3(z_i) + b_4(z_j) + e_1 + e_2$
By the nature of random assignment we expect $x_i$ and $x_j$ in the above statement to be independent of all the other parameters ($z_i$, $z_j$, $e_1$, etc.) and each other. Hence the effect of the aggregated data is equal to the effect of the data disaggregated (or $\beta_1$ equals the sum of $b_1$ and $b_2$ divided by two in this case).
This exercise is informative though to see where the aggregation bias will come into play. Anytime the components of that aggregated variable are not independent of the other components you are creating an inherent confound in the analysis (e.g. you can not independently identify the effects of each individual item). So going with your "blue day" scenario one might have a model of individual behavior;
$y = \alpha + b_1(x) + \beta_2(Z) + b_3(x*Z) + e$
Where $Z$ refers to whether the observation was taken on blue day and $x*Z$ is the interaction of the treatment effect with it being blue day. This should be fairly obvious why it would be problematic if you take all of your observations on one day. If treatment is randomly assigned $b_1(x)$ and $\beta_2(Z)$ should be independent, but $b_1(x)$ and $b_3(x*Z)$ are not. Hence you will not be able to uniquely identify $b_1$, and the research design is inherently confounded.
You could potentially make a case for doing the data analysis on the aggregated items (aggregated values tend to be easier to work with and find correlations, less noisy and tend to have easier distributions to model). But if the real questions is to identify $b_1(x)$, then the research design should be structured to appropriately identify it. While I made an argument above for why it does not matter in a randomized experiment, in many settings the argument that all of the individual components are independent is violated. If you expect specific effects on specific days, aggregation of the observations will not help you identify the treatment effect (it is actually a good argument to prolong the observations to make sure no inherent confounds are present). | Aggregation-Level in AB-Tests
If the treatment is randomly assigned the aggregation won't matter in determining the effect of the treatment (or the average treatment effect). I use lowercase in the following examples to refer to d |
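A toy simulation of the point above (all names and numbers are made up): with randomized assignment, the treatment effect estimated from the raw observations and from day-by-arm aggregates comes out essentially the same.
set.seed(42)
day <- rep(1:20, each = 500)
trt <- rbinom(length(day), 1, 0.5)     # randomized per observation
y   <- 0.2 * trt + 0.1 * (day %% 7 == 0) + rnorm(length(day))
coef(lm(y ~ trt))["trt"]               # raw data: near the true 0.2
agg <- aggregate(y, by = list(day = day, trt = trt), FUN = mean)
coef(lm(x ~ trt, data = agg))["trt"]   # day-by-arm aggregates: also near 0.2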
53,972 | Aggregation-Level in AB-Tests | The right level of aggregation depends on the time period over which you wish to generalize.
For example, you want to deploy A during nights across several sites but are unsure about its effectiveness relative to the existing option B. Thus, you may deploy A over a small number of sites and see its effects relative to the alternative B. In such a scenario, you need to aggregate the effects of A across all the nights that it was deployed to assess the relative impact of A vs B.
To use your example from the last para: If the interest lies in evaluating the impact of A across all days (possibly because A will be deployed on all days) then the 'right' thing to do is to aggregate across all days so that the test of A's effectiveness is not biased. | Aggregation-Level in AB-Tests | The right level of aggregation depends on the time period over which you wish to generalize.
For example, you want to deploy A during nights across several sites but are unsure about its effectivenes | Aggregation-Level in AB-Tests
The right level of aggregation depends on the time period over which you wish to generalize.
For example, you want to deploy A during nights across several sites but are unsure about its effectiveness relative to the existing option B. Thus, you may deploy A over a small number of sites and see its effects relative to the alternative B. In such a scenario, you need to aggregate the effects of A across all the nights that it was deployed to assess the relative impact of A vs B.
To use your example from the last para: If the interest lies in evaluating the impact of A across all days (possibly because A will be deployed on all days) then the 'right' thing to do is to aggregate across all days so that the test of A's effectiveness is not biased. | Aggregation-Level in AB-Tests
The right level of aggregation depends on the time period over which you wish to generalize.
For example, you want to deploy A during nights across several sites but are unsure about its effectivenes |
53,973 | Aggregation-Level in AB-Tests | Raw data without aggregation is the easiest way and can conveniently handle more complex situations.
with enough traffic you can make nearly anything significant within one day?
Compared with most academic studies, the sample size of an ab-test in tech companies is generally large. Still, most ab-tests are actually underpowered, because of large coefficients of variation and very small effect sizes (Variance and significance in large-scale online services
).
more complex scenarios
iid assumption
Aggregating to a day/hour basis and then doing a paired t-test often implicitly violates the iid assumption, which results in seriously underestimated std errors/p-values.
In a typical ab test, the experiment is randomized by user; this means that the same user will reenter the experiment on different days and be aggregated into different data points, resulting in dependence between data points.
estimating heterogeneous treatment effects
One (possibly influencing) example to stimulate your thoughts: Assume that on one day during the test the "National Blue Day" is celebrated, so the color blue together with positive emotions is visible in all the media. This day group A has created a ton of conversions more than group B.
As @Andy W mentioned, raw data with a regression (y ~ treat + day_of_week + treat * day_of_week) will be helpful to identify heterogeneous treatment effects.
variance reduction
Assume that the landing page belongs to a vegetable company. One day the "National Vegetable Day" is celebrated and now everyone wants to subscribe to the newsletter, no matter what the color is
Aggregating to a day basis and then doing a paired t-test doesn't make sense for variance reduction and may result in large variance and a biased std error. A linear regression (y ~ treat + day_of_week) or post-stratification might be appropriate (Does it make mathematically sense to aggregate data in order to reduce variance in statistical significance tests?) | Aggregation-Level in AB-Tests | Raw data without aggregation is the easiest way, can conveniently handle more complex situations.
with enough traffic you can make nearly anything significant within one day?
Compared with most acad | Aggregation-Level in AB-Tests
Raw data without aggregation is the easiest way and can conveniently handle more complex situations.
with enough traffic you can make nearly anything significant within one day?
Compared with most academic studies, the sample size of an ab-test in tech companies is generally large. Still, most ab-tests are actually underpowered, because of large coefficients of variation and very small effect sizes (Variance and significance in large-scale online services
).
more complex scenarios
iid assumption
Aggregating to a day/hour basis and then doing a paired t-test often implicitly violates the iid assumption, which results in seriously underestimated std errors/p-values.
In a typical ab test, the experiment is randomized by user; this means that the same user will reenter the experiment on different days and be aggregated into different data points, resulting in dependence between data points.
estimating heterogeneous treatment effects
One (possibly influencing) example to stimulate your thoughts: Assume that on one day during the test the "National Blue Day" is celebrated, so the color blue together with positive emotions is visible in all the media. This day group A has created a ton of conversions more than group B.
As @Andy W mentioned, a regression on the raw data (y ~ treat + day_of_week + treat * day_of_week) will be helpful to identify heterogeneous treatment effects.
variance reduction
Assume that the landing page belongs to a vegetable company. One day the "National Vegetable Day" is celebrated and now everyone wants to subscribe to the newsletter, no matter what the color is
Aggregating to a day basis and then doing a paired t-test doesn't make sense for variance reduction and may result in large variance and a biased standard error. A linear regression (y ~ treat + day_of_week) or post-stratification might be appropriate (Does it make mathematically sense to aggregate data in order to reduce variance in statistical significance tests?). | Aggregation-Level in AB-Tests
Raw data without aggregation is the easiest way and can conveniently handle more complex situations.
with enough traffic you can make nearly anything significant within one day?
Compared with most acad |
53,974 | How many data points do we need for mixed effects longitudinal data? | No, you don't need to remove individuals with data for only one (or only a limited number) of timepoints. You're right to think that individuals with only one timepoint contribute nothing to estimation of the slope, but they contribute to estimation of the intercept, and you want to estimate both jointly. The maths and the algorithm deal with this, so you don't need to worry about it, and you're more likely to make errors than the programmers of lmer if you try to second-guess things by dropping observations you don't think will contribute.
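As a minimal illustration, lme4 fits this without any special handling of sparsely observed individuals (the long-format data frame dat, with columns id, time and y, is hypothetical here):
library(lme4)
# One row per measurement; individuals contributing a single timepoint are simply kept.
m <- lmer(y ~ time + I(time^2) + (1 + time | id), data = dat)  # quadratic fixed effect of time
summary(m)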
Yes, you could fit a quadratic effect for time with 4 timepoints. In fact, if you designed an experiment to look for a quadratic effect, you might well choose to have 4 timepoints. (3 would in principle maximise your power, but 4 allows 1 d.f. to test for fit of the quadratic curve). Clearly you need at least some people to participate in at least 3 waves. But as above, don't remove subjects that didn't. | How many data points do we need for mixed effects longitudinal data? | No, you don't need to remove individuals with data for only only one (or only a limited number) of timepoints. You're right to think that individuals with only one timepoint contribute nothing to esti | How many data points do we need for mixed effects longitudinal data?
No, you don't need to remove individuals with data for only only one (or only a limited number) of timepoints. You're right to think that individuals with only one timepoint contribute nothing to estimation of the slope but they contribute to estimation of the intercept and you want to estimate both jointly. The maths and the algorithm deal with this so you don't need to worry about it, and you're more likely to make errors than the programmers of lmer if you try to second-guess things by dropping observations you don't think will contribute.
Yes, you could fit a quadratic effect for time with 4 timepoints. In fact, if you designed an experiment to look for a quadratic effect, you might well choose to have 4 timepoints. (3 would in principle maximise your power, but 4 allows 1 d.f. to test for fit of the quadratic curve). Clearly you need at least some people to participate in at least 3 waves. But as above, don't remove subjects that didn't. | How many data points do we need for mixed effects longitudinal data?
No, you don't need to remove individuals with data for only only one (or only a limited number) of timepoints. You're right to think that individuals with only one timepoint contribute nothing to esti |
53,975 | Recreating traditional null hypothesis testing with Bayesian methods | When you're computing Bayes factors, the priors matter. The influence of the priors can persist even if you have a large amount of data. When you're doing posterior inference, the effect of the prior goes away as you collect more data, but not so with Bayes factors.
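A tiny conjugate example of that persistence (everything here is made up for illustration): with $x_i \sim N(\theta, 1)$, H0: $\theta = 0$ and H1: $\theta \sim N(0, \tau^2)$, the marginal density of $\bar x$ is $N(0, 1/n)$ under H0 and $N(0, 1/n + \tau^2)$ under H1, so the Bayes factor still depends on $\tau$ at large $n$:
set.seed(1)
n <- 1e4
x <- rnorm(n, mean = 0.02)                      # a tiny true effect
xbar <- mean(x)
bf01 <- function(tau) dnorm(xbar, 0, sqrt(1/n)) / dnorm(xbar, 0, sqrt(1/n + tau^2))
sapply(c(0.1, 1, 10), bf01)                     # BF01 changes with tau even though n is large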
Also, you'll get faster convergence if your null and alternative priors have disjoint support. Details here. | Recreating traditional null hypothesis testing with Bayesian methods | When you're computing Bayes factors, the priors matter. The influence of the priors can persist even if you have a large amount of data. When you're doing posterior inference, the effect of the prior | Recreating traditional null hypothesis testing with Bayesian methods
When you're computing Bayes factors, the priors matter. The influence of the priors can persist even if you have a large amount of data. When you're doing posterior inference, the effect of the prior goes away as you collect more data, but not so with Bayes factors.
Also, you'll get faster convergence if your null and alternative priors have disjoint support. Details here. | Recreating traditional null hypothesis testing with Bayesian methods
When you're computing Bayes factors, the priors matter. The influence of the priors can persist even if you have a large amount of data. When you're doing posterior inference, the effect of the prior |
53,976 | Recreating traditional null hypothesis testing with Bayesian methods | I'm not sure I follow the R code as I have only used R once or twice, but it looks to me as if you are comparing the marginal likelihood of a model with only an intercept and no slope (hadcru.mcmc.zero) and the marginal likelihood of a model with a slope and an intercept (hadcru.mcmc). However, while hadcru.mcmc.zero seems to be the correct model for H0, hadcru.mcmc does not seem to me to correctly represent H1, as there is nothing as far as I can see that constrains the slope to be positive. Is there something in the prior for the slope that makes it strictly positive (I don't know enough about MCMC in R to know)? If not, that may be where your problem lies, as the marginal likelihood would then have a component representing the likelihood of the data for all of the negative values of the slope permitted under the prior (and 0) as well as the positive.
It is debatable whether the H0 for this question should be that the slope is exactly zero; nobody would believe that to be plausible a priori. Perhaps a test using the Bayes factor for a model where the slope is strictly positive (H1) against a model where it is zero or negative (H0) would be more appropriate.
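One hedged way to get at that comparison without refitting: if the slope prior in the unconstrained model is symmetric about zero (so the prior odds of a positive slope are 1), the Bayes factor for H1: slope > 0 against H0: slope <= 0 reduces to the posterior odds. A sketch, where slope_draws stands for your MCMC draws of the slope (a name assumed here, not taken from the question):
post_pos <- mean(slope_draws > 0)    # posterior P(slope > 0)
bf_10 <- post_pos / (1 - post_pos)   # Bayes factor for slope > 0 vs slope <= 0 under a symmetric prior
bf_10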
HTH (and I am not just confusing things) | Recreating traditional null hypothesis testing with Bayesian methods | I'm note sure I follow the R-code as I have only used R once or twice, but it looks to me as if you are comparing the marginal likelihood of a model with only an intercept and no slope (hadcru.mcmc.ze | Recreating traditional null hypothesis testing with Bayesian methods
I'm not sure I follow the R code as I have only used R once or twice, but it looks to me as if you are comparing the marginal likelihood of a model with only an intercept and no slope (hadcru.mcmc.zero) and the marginal likelihood of a model with a slope and an intercept (hadcru.mcmc). However, while hadcru.mcmc.zero seems to be the correct model for H0, hadcru.mcmc does not seem to me to correctly represent H1, as there is nothing as far as I can see that constrains the slope to be positive. Is there something in the prior for the slope that makes it strictly positive (I don't know enough about MCMC in R to know)? If not, that may be where your problem lies, as the marginal likelihood would then have a component representing the likelihood of the data for all of the negative values of the slope permitted under the prior (and 0) as well as the positive.
It is debatable whether the H0 for this question should be that the slope is exactly zero; nobody would believe that to be plausible a priori. Perhaps a test using the Bayes factor for a model where the slope is strictly positive (H1) against a model where it is zero or negative (H0) would be more appropriate.
HTH (and I am not just confusing things) | Recreating traditional null hypothesis testing with Bayesian methods
I'm note sure I follow the R-code as I have only used R once or twice, but it looks to me as if you are comparing the marginal likelihood of a model with only an intercept and no slope (hadcru.mcmc.ze |
53,977 | Recreating traditional null hypothesis testing with Bayesian methods | I do not know the packages you are using or their internal workings, but perhaps the choice of priors matters? Perhaps you should consider using different prior structures to see how sensitive the mcmc marginal likelihoods are to your choice of priors.
In particular, I suspect that the mcmc and the traditional likelihoods are likely to converge better as the priors become more diffuse. Note that in mcmc the marginal likelihoods are computed by integrating out the likelihood function with respect to the priors. Thus, I have a feeling that the 'diffuseness' of the priors may matter (could be wrong on this issue but worth checking out). | Recreating traditional null hypothesis testing with Bayesian methods | I do not know the packages you are using or their internal working but perhaps the choice of priors matter? Perhaps, you should consider using different prior structures to see how sensitive the mcmc | Recreating traditional null hypothesis testing with Bayesian methods
I do not know the packages you are using or their internal working but perhaps the choice of priors matter? Perhaps, you should consider using different prior structures to see how sensitive the mcmc marginal likelihoods are to your choice of priors.
In particular, I suspect that the mcmc and the traditional likelihoods are likely to converge better as the priors become more diffuse. Note that in mcmc the marginal likelihoods are computed by integrating out the likelihood function with respect to the priors. Thus, I have a feeling that the 'diffuseness' of the priors may matter (could be wrong on this issue but worth checking out). | Recreating traditional null hypothesis testing with Bayesian methods
I do not know the packages you are using or their internal working but perhaps the choice of priors matter? Perhaps, you should consider using different prior structures to see how sensitive the mcmc |
53,978 | Elasticities Using GLM | In the usual case with a log variable, the model is
\begin{align}
\log(y) &= a + b\log(x) + \varepsilon\newline
\text{or}\quad y &= e^a x^b e^\varepsilon,
\end{align}
where $\varepsilon\sim\text{N}(0,\sigma^2)$ and $b$ is the elasticity.
In the situation you mention,
\begin{align}
y &= \exp[a + b\log(x)] + \varepsilon \newline
\text{or}\quad y &= e^ax^b + \varepsilon,
\end{align}
where $\varepsilon\sim\text{N}(0,\sigma^2)$. So, ignoring the error, the parameter $b$ is playing the same role in both models and is an elasticity in both cases.
What is different is the assumption on the error distribution. In the first case, the error is assumed to be logNormal and multiplicative while in the second case the error is assumed to be Normal and additive. | Elasticities Using GLM | In the usual case with a log variable, the model is
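A quick R sketch of the two fits side by side (x and y are simulated here just for illustration):
set.seed(1)
x <- exp(rnorm(200))
y <- exp(0.5 + 0.8 * log(x) + rnorm(200, sd = 0.3))
fit_loglog <- lm(log(y) ~ log(x))                              # multiplicative logNormal error
fit_glm    <- glm(y ~ log(x), family = gaussian(link = "log")) # additive Normal error
c(coef(fit_loglog)[2], coef(fit_glm)[2])                       # both slopes estimate the elasticity b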
\begin{align}
\log(y) &= a + b\log(x) + \varepsilon\newline
\text{or}\quad y &= e^a x^b e^\varepsilon,
\end{align}
where $\varepsilon\sim\text{N}(0, | Elasticities Using GLM
In the usual case with a log variable, the model is
\begin{align}
\log(y) &= a + b\log(x) + \varepsilon\newline
\text{or}\quad y &= e^a x^b e^\varepsilon,
\end{align}
where $\varepsilon\sim\text{N}(0,\sigma^2)$ and $b$ is the elasticity.
In the situation you mention,
\begin{align}
y &= \exp[a + b\log(x)] + \varepsilon \newline
\text{or}\quad y &= e^ax^b + \varepsilon,
\end{align}
where $\varepsilon\sim\text{N}(0,\sigma^2)$. So, ignoring the error, the parameter $b$ is playing the same role in both models and is an elasticity in both cases.
What is different is the assumption on the error distribution. In the first case, the error is assumed to be logNormal and multiplicative while in the second case the error is assumed to be Normal and additive. | Elasticities Using GLM
In the usual case with a log variable, the model is
\begin{align}
\log(y) &= a + b\log(x) + \varepsilon\newline
\text{or}\quad y &= e^a x^b e^\varepsilon,
\end{align}
where $\varepsilon\sim\text{N}(0, |
53,979 | How can I fit a distribution to a dataset while forcing it through an exact point in r? | You can fit a gamma distribution in R with optim, e.g. by minimizing the sum of squared errors for the mean and the 95th percentile, which actually sets both errors to zero:
obs = c(28.744,385.714,20.595,99.350,31.864,77.713,
264.408,21.204,31.937,0.900,18.762,173.276,23.707)
tail = 500.912
error = function(x)(qgamma(0.95, shape=x[1], scale=x[2]) - tail)^2 + (x[1]*x[2]-mean(obs))^2
params = optim(c(1,1), error)$par
params
The code uses the formula for the gamma’s mean and gets the desired mean and percentile exactly with a shape parameter of 0.147 and a scale parameter of 616.6.
You can then check the quantile with
qgamma(0.95, shape=params[1], scale=params[2])
You can also check the log-likelihood of the result with
sum(log(dgamma(shape = params[1], scale = params[2], obs)))
and compare it with the log-likelihood from other two-parameter distributions. | How can I fit a distribution to a dataset while forcing it through an exact point in r? | You can fit a gamma distribution in R with optim, e.g. by minimizing the sum of squared errors for the mean and the 95th percentile, which actually sets both errors to zero:
obs = c(28.744,385.714,20. | How can I fit a distribution to a dataset while forcing it through an exact point in r?
You can fit a gamma distribution in R with optim, e.g. by minimizing the sum of squared errors for the mean and the 95th percentile, which actually sets both errors to zero:
obs = c(28.744,385.714,20.595,99.350,31.864,77.713,
264.408,21.204,31.937,0.900,18.762,173.276,23.707)
tail = 500.912
error = function(x)(qgamma(0.95, shape=x[1], scale=x[2]) - tail)^2 + (x[1]*x[2]-mean(obs))^2
params = optim(c(1,1), error)$par
params
The code uses the formula for the gamma’s mean and gets the desired mean and percentile exactly with a shape parameter of 0.147 and a scale parameter of 616.6.
You can then check the quantile with
qgamma(0.95, shape=params[1], scale=params[2])
You can also check the log-likelihood of the result with
sum(log(dgamma(shape = params[1], scale = params[2], obs)))
and compare it with the log-likelihood from other two-parameter distributions. | How can I fit a distribution to a dataset while forcing it through an exact point in r?
You can fit a gamma distribution in R with optim, e.g. by minimizing the sum of squared errors for the mean and the 95th percentile, which actually sets both errors to zero:
obs = c(28.744,385.714,20. |
53,980 | How can I fit a distribution to a dataset while forcing it through an exact point in r? | For distributions with a scale parameter, such as the gamma and the Weibull, the likelihood can be maximised numerically with respect to the remaining parameters (the shape in the case of the gamma and the Weibull), since for any given value of those parameters (the shape), there will be only a single value of the scale parameter for which the 95% quantile of the distribution matches the target value. The following R code implements this method:
x <- c(28.744,385.714,20.595,99.350,31.864,77.713,
264.408,21.204,31.937,0.900,18.762,173.276,23.707)
constrained_mle <- function(x, p, q) {
lnL <- function(shape) {
    # Solving q = qgamma(p, shape)*scale for the necessary scale
scale <- q/qgamma(p, shape)
sum(dgamma(x, shape, scale=scale, log=TRUE))
}
res <- optimise(lnL, lower=0, upper=1e+3, maximum=TRUE, tol=1e-8)
scale <- q/qgamma(p, res$maximum)
c(scale=scale, shape=res$maximum)
}
par <- constrained_mle(x, .95, 500.912)
par
#> scale shape
#> 231.0561574 0.6038756
# checking that the solution is correct
qgamma(.95, scale=par[1], shape=par[2])
#> [1] 500.912
The constained MLEs are evidently quite different from the estimates produced by matching the first moment and the target quantile (shape=0.1470374 and scale=616.5818235) in the answer by Matt F. | How can I fit a distribution to a dataset while forcing it through an exact point in r? | For distribution with a scale parameter such as the gamma and the weibull, the likelihood can be maximised numerically with respect to the remaining parameters (the shape in case of the gamma and the | How can I fit a distribution to a dataset while forcing it through an exact point in r?
For distribution with a scale parameter such as the gamma and the weibull, the likelihood can be maximised numerically with respect to the remaining parameters (the shape in case of the gamma and the weibull) since for any given value of those parameters (the shape), there will be only a single value of the scale parameter for which the 95% quantile of the distribution matches the target value. The following R code implements this method:
x <- c(28.744,385.714,20.595,99.350,31.864,77.713,
264.408,21.204,31.937,0.900,18.762,173.276,23.707)
constrained_mle <- function(x, p, q) {
lnL <- function(shape) {
    # Solving q = qgamma(p, shape)*scale for the necessary scale
scale <- q/qgamma(p, shape)
sum(dgamma(x, shape, scale=scale, log=TRUE))
}
res <- optimise(lnL, lower=0, upper=1e+3, maximum=TRUE, tol=1e-8)
scale <- q/qgamma(p, res$maximum)
c(scale=scale, shape=res$maximum)
}
par <- constrained_mle(x, .95, 500.912)
par
#> scale shape
#> 231.0561574 0.6038756
# checking that the solution is correct
qgamma(.95, scale=par[1], shape=par[2])
#> [1] 500.912
The constained MLEs are evidently quite different from the estimates produced by matching the first moment and the target quantile (shape=0.1470374 and scale=616.5818235) in the answer by Matt F. | How can I fit a distribution to a dataset while forcing it through an exact point in r?
For distribution with a scale parameter such as the gamma and the weibull, the likelihood can be maximised numerically with respect to the remaining parameters (the shape in case of the gamma and the |
53,981 | Confidence intervals for accuracy metrics: pros and cons of the bootstrap method compared to variance estimation | There are situations where you can bootstrap. If your sample size is large, then bootstrapping when possible is really convenient for a number of reasons:
If it works, then it works for almost any metric you can define, while frequentist analytical solutions tend to be derived for one metric at a time (bad luck if your metric of interest is not covered); a minimal sketch follows after this list.
It's easy to implement (assuming a straightforward bootstrapping scheme is appropriate for the situation and the metric is easy to calculate for a sample).
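As a concrete illustration of the first point above, here is a minimal percentile-bootstrap sketch for F1 (y and pred are assumed to be 0/1 vectors of true and predicted labels for i.i.d. cases; the names are made up):
f1 <- function(y, pred) {
  tp <- sum(pred == 1 & y == 1)
  2 * tp / (2 * tp + sum(pred == 1 & y == 0) + sum(pred == 0 & y == 1))
}
set.seed(1)
boot_f1 <- replicate(2000, {i <- sample(seq_along(y), replace = TRUE); f1(y[i], pred[i])})
quantile(boot_f1, c(0.025, 0.975))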
Where bootstrapping has problems:
When sample size is small, then the discreteness of bootstrapping can make it inefficient.
When another simple solution is easily available/already implemented, why do it, when it often takes longer to run/is a little bit more complex to implement than running a well-validated package that someone else already wrote?
Sometimes you just cannot bootstrap in a valid way. E.g. multiple-reader-multiple-case (MRMC) studies tend to make it very hard to bootstrap (some solutions still have a bootstrap component, but it's nightmarishly complicated*), because everything is in a way correlated with everything (you have both the same raters looking at cases, as well as the same cases being looked at).
As the other answer mentioned, one approach is a form of mixed effects model that describes the whole process. We took that approach for a MRMC study, where there were 3 possible diagnoses, recently. The Methods Section of the article outlines the approach:
We analyzed the primary outcome using a Bayesian model that jointly
modeled each patient’s true disease status (ie, expert panel
diagnosis) as a categorical random variable and the diagnoses given
for each patient by each physician or algorithm (determined using
multinomial logistic regression). The multinomial logistic regression
model included a separate intercept term for each combination of
disease (asthma, COPD, and ACO) and group (PCPs, pulmonologists, and
the AC/DC tool), as well as a random case and random rater effect.
We took this approach, because we could not find any published methods that applied to the metrics our clinical colleagues wanted to look at (most MRMC literature is about AuROC and as mentioned, simple bootstrapping does not work). We ended up fitting this in a Bayesian way using Stan, because that deals with the uncertainties sensibly. It might be possible to fit such a model with something like PROC NLMIXED in SAS and to get confidence intervals that way (they have a decent automatic procedure for SEs there that uses the delta method) in a large study, but the Bayesian approach deals more naturally with (nearly) empty cells.
* Regarding "nightmarishly complicated", look at the complexity in these papers for getting AuROC estimates with CIs from MRMC studies:
Brandon D. Gallas, Gene A. Pennello, and Kyle J. Myers. Multireader
multicase variance analysis for binary data. J. Opt. Soc. Am. A, 24(12):
B70–B80, Dec 2007. doi: 10.1364/JOSAA.24.000B70.
Brandon D. Gallas, Andriy Bandos, Frank W. Samuelson, and Robert F.
Wagner. A framework for random-effects ROC analysis: Biases with the
bootstrap and other variance estimators. Communications in Statistics - Theory
and Methods, 38(15):2586–2603, 2009. doi: 10.1080/03610920802610084. | Confidence intervals for accuracy metrics: pros and cons of the bootstrap method compared to varianc | There's situations where you can bootstrap. If your sample size is large, then bootstrapping when possible is really convenient for a number of reasons:
If it works, then it works for almost any metr | Confidence intervals for accuracy metrics: pros and cons of the bootstrap method compared to variance estimation
There's situations where you can bootstrap. If your sample size is large, then bootstrapping when possible is really convenient for a number of reasons:
If it works, then it works for almost any metric you can define, while frequentist analytical solutions tend to be derived for one metric at a time (bad luck if your metric of interest is not covered)
It's easy to implement (assuming a straightforward bootstrapping scheme is appropriate for the situation and the metric is easy to calculate for a sample).
Where bootstrapping has problems is
When sample size is small, then the discreteness of bootstrapping can make it inefficient.
When another simple solution is easily available/already implemented, why do it, when it often takes longer to run/is a little bit more complex to implement than running a well-validated package that someone else already wrote?
Sometimes you just cannot bootstrap in a valid way. E.g. multiple-reader-multiple-case (MRMC) studies tend to make it very hard to bootstrap (some solutions still have a bootstrap component, but it's nightmarishly complicated*), because everything is in a way correlated with everything (you have both the same raters looking at cases, as well as the same cases being looked at).
As the other answer mentioned, one approach is a form of mixed effects model that describes the whole process. We took that approach for a MRMC study, where there were 3 possible diagnoses, recently. The Methods Section of the article outlines the approach:
We analyzed the primary outcome using a Bayesian model that jointly
modeled each patient’s true disease status (ie, expert panel
diagnosis) as a categorical random variable and the diagnoses given
for each patient by each physician or algorithm (determined using
multinomial logistic regression). The multinomial logistic regression
model included a separate intercept term for each combination of
disease (asthma, COPD, and ACO) and group (PCPs, pulmonologists, and
the AC/DC tool), as well as a random case and random rater effect.
We took this approach, because we could not find any published methods that applied to the metrics our clinical colleagues wanted to look at (most MRMC literature is about AuROC and as mentioned, simple bootstrapping does not work). We ended up fitting this in a Bayesian way using Stan, because that deals with the uncertainties sensibly. It might be possible to fit such a model with something like PROC NLMIXED in SAS and to get confidence intervals that way (they have a decent automatic procedure for SEs there that uses the delta method) in a large study, but the Bayesian approach deals more naturally with (nearly) empty cells.
* Regarding "nightmarishly complicated", look at the complexity in these papers for getting AuROC estimates with CIs from MRMC studies:
Brandon D. Gallas, Gene A. Pennello, and Kyle J. Myers. Multireader
multicase variance analysis for binary data. J. Opt. Soc. Am. A, 24(12):
B70–B80, Dec 2007. doi: 10.1364/JOSAA.24.000B70.
Brandon D. Gallas, Andriy Bandos, Frank W. Samuelson, and Robert F.
Wagner. A framework for random-effects ROC analysis: Biases with the
bootstrap and other variance estimators. Communications in Statistics - Theory
and Methods, 38(15):2586–2603, 2009. doi: 10.1080/03610920802610084. | Confidence intervals for accuracy metrics: pros and cons of the bootstrap method compared to varianc
There's situations where you can bootstrap. If your sample size is large, then bootstrapping when possible is really convenient for a number of reasons:
If it works, then it works for almost any metr |
53,982 | Confidence intervals for accuracy metrics: pros and cons of the bootstrap method compared to variance estimation | I tend to have data with inherent structure, such as multiple samples per patient, multiple measurements per sample and the like.
In that situation, the statistically independent unit is typically the patient rather than the row of the data matrix. (In some cases, independence is even more complicated, with multiple top-level sources of variance.)
While I can easily bootstrap patients (or more generally: set up a resampling procedure so that statistical independence is obeyed*), the analytical c.i. calculations I've met so far assume statistical independence at the row level. I.e., they do not accommodate more complex real-life data structure.
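For completeness, a sketch of such a patient-level (cluster) bootstrap; the data frame d, with columns patient, y (reference label) and pred, is hypothetical, and rows of the same patient are always resampled together:
patients <- unique(d$patient)
set.seed(1)
boot_acc <- replicate(2000, {
  ids <- sample(patients, replace = TRUE)
  db  <- do.call(rbind, lapply(ids, function(p) d[d$patient == p, , drop = FALSE]))
  mean(db$pred == db$y)
})
quantile(boot_acc, c(0.025, 0.975))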
It's not that I would speak against using analytical formulations of variance (though I'd recommend to closely check the "small print" of the respective method) - I just don't meet nice data that would allow me to use them...
However, if your data meets the requirements of the analytical/parametric calculation, I'd expect it to have more power than bootstrapping.
What I do use is e.g. binomial c.i.s for proportions like sensitivity, specificity or the predictive values with patient number as sample size. Often, I can show that model instability is negligible compared to this, and I thus get a sensible approximation. (And the result is often anyways so wide that it does a good job serving as warning not to overinterprete the results...)
Sometimes I use this also as back-of-the-envelope calculation before experiments start: the c.i. won't get any better than that - if that is not satisfactory, more patients are needed right from the beginning.
Mixed models that describe the underlying structure for the prediction error may be a way here, but I haven't seen any papers applying this so far. It is, however, an approach that I'm looking into.
I have a sneaking suspicion that situations where the variance on the error estimate can sensibly be approximated as stemming from a single source are actually very rare (and my educated guess is that this is where patient-to-patient variance dominates everything and if that's the case with the final error estimate, it is rarely of practical use).
Of course, one may still sum the contributions of various sources of error into one total variance - but also there, analytical calculation would need to account for the data structure and the corresponding sources of variance.
*Since I typically also work with high-dimensional data, I anyways have to set up such splitting procedures to generate suitable test data to produce the generalization error estimates: the fanciest variance estimator is of no use if my validation procedure has a (large) optimistic bias. Also keep in mind that optimistically biased test results on classification tasks often lead to optimistically biased (too narrow) variance estimates as well.
Update: back of the envelope calculations
Since I don't use F1, I'll outline mostly what I do for sensitivity (or analogously specificity). Sensitivity is the (observed) fraction of correctly predicted tested cases of the positive class, and we can thus use binomial confidence intervals. The width of these depends on the denominator, i.e. the number of tested cases of the positive class, and on the true value of the sensitivity (with maximum width for 50 % sens and minimum width for 0 and 100 % sens).
For the minimum confidence interval width, we can use the rule of 3: assuming we'd observe no errors, the 95 % c.i. for sensitivity would roughly range from $1 - \frac{3}{n}$ to 1.
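A small numeric check of this in R (the counts are invented): binom.test gives the exact Clopper-Pearson interval, with the number of truly positive test cases as n.
binom.test(36, 40)$conf.int                  # e.g. 36 of 40 positives recognised
n <- 40
c(rule_of_3 = 1 - 3/n,                       # approximate lower bound when no errors are observed
  exact_lower = binom.test(n, n)$conf.int[1])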
However, we can also calculate minimum and maximum c.i. width depending on the number of tested positive cases before any experimental results are available:
... and compare that to the requirements of the application.
(One can also set up calculation to estimate reasonable sample sizes based on expected performance and application requirements. One scenario I've been using is specifying for an application situation "super nice", "OK" and "not really acceptable" performance. If for the discussed sample size "super nice" performance couldn't be reliably distinguished from "not really acceptable", sample size is insufficient)
To get guesstimates for other measures that are not simply an observed fraction of a pre-specified number of test cases (such as F1), you could either start with the variance of the binomially distributed simple fractions and do error propagation, or set up simulations (which would be quite similar to bootstrapping). | Confidence intervals for accuracy metrics: pros and cons of the bootstrap method compared to varianc | I tend to have data with inherent structure, such as multiple samples per patient, multiple measurements per sample and the like.
In that situation, the statistically independent unit is typically the | Confidence intervals for accuracy metrics: pros and cons of the bootstrap method compared to variance estimation
I tend to have data with inherent structure, such as multiple samples per patient, multiple measurements per sample and the like.
In that situation, the statistically independent unit is typically the patient rather than the row of the data matrix. (In some cases, independence is even more complicated, with multiple top-level sources of variance.)
While I can easily bootstrap patients (or more general: set up a resampling procedure so that statistical independence is obeyed*) the analytical c.i. calculations I've met so far assume statistical independence at the row level. I.e., they do not accomodate more complex real-life data structure.
It's not that I would speak against using analytical formulations of variance (though I'd recommend to closely check the "small print" of the respective method) - I just don't meet nice data that would allow me to use them...
However, if your data meets the requirements of the analytical/parametric calculation, I'd expect it to have more power than bootstrapping.
What I do use is e.g. binomial c.i.s for proportions like sensitivity, specificity or the predictive values with patient number as sample size. Often, I can show that model instability is negligible compared to this, and I thus get a sensible approximation. (And the result is often anyways so wide that it does a good job serving as warning not to overinterprete the results...)
Sometimes I use this also as back-of-the-envelope calculation before experiments start: the c.i. won't get any better than that - if that is not satisfactory, more patients are needed right from the beginning.
Mixed models that describe the underlying structure for the prediction error may be a way here, but I haven't seen any papers applying this so far. It is, however, an approach that I'm looking into.
I have a sneaking suspicion that situations where the variance on the error estimate can sensibly be approximated as stemming from a single source are actually very rare (and my educated guess is that this is where patient-to-patient variance dominates everything and if that's the case with the final error estimate, it is rarely of practical use).
Of course, one may still sum the contributions of various sources of error into one total variance - but also there, analytical calculation would need to account for the data structure and the corresponding sources of variance.
*Since I typically also work with high-dimensional data, I anyways have to set up such splitting procedures to generate suitable test data to produce the generalization error estimates: the fanciest variance estimator is of no use if my validation procedure has a (large) optimistic bias. Also keep in mind that optimistically biased test results on classification tasks often lead to optimistically biased (too narrow) variance estimates as well.
Update: back of the envelope calculations
Since I don't use F1, I'll outline mostly what I do for sensitivity (or analogously specificity). Sensitivity is the (observed) fraction of correctly predicted tested cases of the positive class, and we can thus use binomial confidence intervals. The width of these depends on the denominator, i.e. the number of tested cases of the positive class, and on the true value of the sensitivity (with maximum width for 50 % sens and minimum width for 0 and 100 % sens).
For the minimum confidence interval width, we can use the rule of 3: assuming we's observe no errors, the 95 % c.i. for sensitivity would be roughy ranging from 1 to $1 - \frac{3}{n}$
However, we can also calculate minimum and maximum c.i. width depending on the number of tested positive cases before any experimental results are available:
... and compare that to the the requirements of the application.
(One can also set up calculation to estimate reasonable sample sizes based on expected performance and application requirements. One scenario I've been using is specifying for an application situation "super nice", "OK" and "not really acceptable" performance. If for the discussed sample size "super nice" performance couldn't be reliably distinguished from "not really acceptable", sample size is insufficient)
To get guesstimates for other measures that are not simply an observed fraction of a pre-specified number of test cases (such as F1), you could either start with the variance of the binomially distributed simple fractions and do error propagation, or set up simulations (which would be quite similar to bootstrapping). | Confidence intervals for accuracy metrics: pros and cons of the bootstrap method compared to varianc
I tend to have data with inherent structure, such as multiple samples per patient, multiple measurements per sample and the like.
In that situation, the statistically independent unit is typically the |
53,983 | Central limit theorem notation | You already answered it yourself
$$\bar{X}_n = \frac{X_1+...+X_i+...+X_n}{n}$$
$\bar{X}_n$ is a sample mean.
The subscript here refers to the sample size.
$X_i$ is a variable in the sample.
The subscript here refers to the id of the variable. The id's are the integers from 1 until n.
An example is the image below of a Galton board (from Wikipedia Matemateca (IME/USP)/Rodrigo Tetsuo Argenton)
The distribution of the beads at the bottom is the distribution of an average (or sum) $\bar{X}$, where it relates to the sum of the movements of a bead through 5 duplicate layers of pins. Every double layer the beads will hit the pins and go straight with 50% chance, left with 25% chance, or right with 25% chance. You can see each double layer as a variable $x_i$ with possible values of $-1,0,1$ (relating to the movement of the bead). The bins at the end are the result of the sum of those $x_1+x_2+x_3+x_4+x_5$ and you can end up somewhere between $-5$ and $5$ (actually this board also has bins -6 and +6, it is not such a perfect process with only steps of -1, 0 or +1).
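The same idea in a few lines of R: each double layer moves a bead by -1, 0 or +1 with probabilities 0.25, 0.5, 0.25, and the sum over 5 layers is already visibly bell-shaped (an idealized simulation, not the exact physical board):
set.seed(1)
layers <- 5; beads <- 1e4
moves <- matrix(sample(c(-1, 0, 1), layers * beads, replace = TRUE,
                       prob = c(0.25, 0.5, 0.25)), nrow = layers)
table(colSums(moves))   # counts per bin, peaked at 0 and tailing off towards -5 and +5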
So you have
$X_i$ the value of the movement of a bead in a particular (double) layer.
$\bar{X}_5$ the sum of all the movements in the 5 double layers.
The hundreds of beads in the bottom, which are different realisations of averaging/summing a sample of bead movements. The distribution of those beads will approximate a normal distribution when we increase the size of the Galton board. | Central limit theorem notation | You already answered it yourself
$$\bar{X}_n = \frac{X_1+...+X_i+...+X_n}{n}$$
$\bar{X}_n$ is a sample mean.
The subscript here refers to the sample size.
$X_i$ is a variable in the sample.
The su | Central limit theorem notation
You already answered it yourself
$$\bar{X}_n = \frac{X_1+...+X_i+...+X_n}{n}$$
$\bar{X}_n$ is a sample mean.
The subscript here refers to the sample size.
$X_i$ is a variable in the sample.
The subscript here refers to the id of the variable. The id's are the integers from 1 until n.
An example is the image below of a Galton board (from Wikipedia Matemateca (IME/USP)/Rodrigo Tetsuo Argenton)
The distribution of the beads at the bottom is the distribution of an average (or sum) $\bar{X}$, where it relates to the sum of the movements of a bead through 5 duplicate layers of pins. Every double layer the beads will hit the pins and go straight with 50% chance, left with 25% chance, or right with 25% chance. You can see each double layer as a variable $x_i$ with possible values of $-1,0,1$ (relating to the movement of the bead). The bins at the end are the result of the sum of those $x_1+x_2+x_3+x_4+x_5$ and you can end up somewhere between $-5$ and $5$ (actually this board also has bins -6 and +6, it is not such a perfect process with only steps of -1, 0 or +1).
So you have
$X_i$ the value of the movement of a bead in a particular (double) layer.
$\bar{X}_5$ the sum of all the movements in the 5 double layers.
The hundreds of beads in the bottom, which are different realisations of averaging/summing a sample of bead movements. The distribution of those beads will approximate a normal distribution when we increase the size of the Galton board. | Central limit theorem notation
You already answered it yourself
$$\bar{X}_n = \frac{X_1+...+X_i+...+X_n}{n}$$
$\bar{X}_n$ is a sample mean.
The subscript here refers to the sample size.
$X_i$ is a variable in the sample.
The su |
53,984 | Central limit theorem notation | So I think you have a common misunderstanding: each $X_i$ is a single observation.
Just as one throw of a die has a uniform distribution on 1-6 (you don't need to consider throwing 1000s of dice), a single sample mean is approximately normally distributed (with the approximation improving as the sample size increases).
So I can calculate probabilities on my single sample mean, just as I can calculate that, e.g., the probability of getting a number of 4 and above with a single dice throw is 1/2, and I know that after many dice throws the proportion of times I get 4 and above should approach 1/2. But this long-run effect is a consequence of saying that the probability of throwing 4-6 is 1/2.
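A quick simulation check of that statement (just a sanity check, nothing more):
set.seed(1)
mean(sample(1:6, 1e5, replace = TRUE) >= 4)   # close to 1/2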
So with one sample mean and variance, I can calculate the approximate distribution of my single sample mean, and discuss the probability that my single sample mean was greater than a given value etc. | Central limit theorem notation | So I think you have a common misunderstanding. each $X_i$ is a single observation.
Just as 1 throw of a dice has uniform distribution on 1-6 ( you don't need to consider throwing 1000s of dice), a sin | Central limit theorem notation
So I think you have a common misunderstanding. each $X_i$ is a single observation.
Just as 1 throw of a dice has uniform distribution on 1-6 ( you don't need to consider throwing 1000s of dice), a single sample mean is approximately distributed normally ( with the approximation improving as the sample size increases).
So I can calculate probabilities on my single sample mean, just as I can calculate that eg the probability of getting a number of 4 and above with a single dice throw is 1/2. and I know that after many dice throws the proportion of times I get 4 and above should approach 1/2. But this long run effect is a consequence of saying that the probability of throwing 4-6 is 1/2.
So with one sample mean and variance, I can calculate the approximate distribution of my single sample mean, and discuss the probability that my single sample mean was greater than a given value etc. | Central limit theorem notation
So I think you have a common misunderstanding. each $X_i$ is a single observation.
Just as 1 throw of a dice has uniform distribution on 1-6 ( you don't need to consider throwing 1000s of dice), a sin |
53,985 | Central limit theorem notation | As already discussed extensively by others and in the comments, the Central Limit Theorem tells us about the behaviour of the distribution of standardized sample averages $\bar X_n$ as $n$ becomes large; the underlying probability space of the $X_i$ plays no role.
In the Central Limit Theorem, the basic underlying hypothesis is
For each $n$, let $X_1, \ldots X_n$ be independent random variables.
As $n$ changes, the underlying probability space may change, but this is of no consequence since a convergence in distribution statement is a statement about convergence of a sequence of real-valued functions on $\mathbb{R}$.
Indeed, if $X_1,\ldots,X_n$ are independent with distribution functions $F_1,\ldots,F_n$ and
$$
T_n = (S_n - E(S_n))/\sqrt{\text{var}(S_n)},
$$
with $S_n = X_1+\cdots+X_n$, the distribution function of $T_n$ is completely determined by the $F_i$, and the validity of a statement about convergence in distribution of $T_n$ is also determined by the $F_i$, regardless of the construction of the underlying probability space.
As extensively already discussed by others and in the comments, the Central Limit Theorem tells us about the behaviour of the distribution of standardized sample averages $\bar X_i$ as $i$ becomes large, where $X_i$ and its space play no role.
In the Central Limit Theorem, the basic underlying hypothesis is
For each $n$, let $X_1, \ldots X_n$ be independent random variables.
As $n$ changes, the underlying probability space may change, but this is of no consequence since a convergence in distribution statement is a statement about convergence of a sequence of real-valued functions on $\mathbb{R}$.
Indeed, If $X_1,\ldots,X_n$ are independent with distribution functions $F_1,\ldots,F_n$ and
$$
T_n = (S_n - E(S_n))/\text{var}(S_n),
$$
with $S_n = X_1,\ldots,X_n$, the distribution function of $T_n$ is completely determined by the $F_i$, and the validity of a statement about convergence in distribution of $T_n$ is also determined by $F_i$, regardless of the construction of the underlying probability space. | Central limit theorem notation
As extensively already discussed by others and in the comments, the Central Limit Theorem tells us about the behaviour of the distribution of standardized sample averages $\bar X_i$ as $i$ becomes lar |
53,986 | Kullback–Leibler divergence between two normal distributions | Note that the Kullback-Leibler divergence, which as you rightly point out is not symmetric, is
$$
\begin{align*}
\text{KL}(P\vert\vert Q) & = \int p(x)\log\left(\frac{p(x)}{q(x)}\right) \,\text{d}x = \int p(x)\log \left(\frac{1}{q(x)}\right)\,\text{d}x - \int p(x)\log \left(\frac{1}{p(x)}\right)\text{d}x\\
& = H(P,Q) - H(P),
\end{align*}
$$
where $H(P,Q)$ is the cross-entropy of $P$ and $Q$, and $H(P)$ is the entropy of $P$, or the cross-entropy of $P$ with itself. I'm not familiar with python, but I guess that entropy in your code computes $H(P, Q)$.
As you can see from its definition, $H(P, Q)$ is generally different from $H(Q, P)$, which is another way to see why in general $\text{KL}(P||Q)\neq\text{KL}(Q||P)$.
Now, the KL for two normal models $p(x) = N(\mu_1,\sigma_1^2)$ and $q(x) = N(\mu_2,\sigma_2^2)$ is
$$
\text{KL}(p||q) = \log \frac{\sigma_2}{\sigma_1} + \frac{\sigma_1^2 + (\mu_1 - \mu_2)^2}{2 \sigma_2^2} - \frac{1}{2}. \quad(*)
$$
Although there is a great deal of overlap between $N(0, 1)$ and $N(1, 1)$, the two densities are different (see picture); thus we expect that both versions of KL are not equal to zero. The point is thus: are the two versions of KL equal or not?
Using (*), in this particular case, as you almost correctly conjectured, it turns out that the two KL's are equal, i.e.
$$\text{KL}(N(0,1)|| N(1,1)) = \text{KL}(N(1,1)|| N(0,1)) = 0.5$$ | Kullback–Leibler divergence between two normal distributions | Note that the Kullback-Leibler divergence, which as you rightly point out is not symmetric, is
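A quick numerical check of $(*)$ in R for these two directions:
kl_norm <- function(mu1, s1, mu2, s2) log(s2 / s1) + (s1^2 + (mu1 - mu2)^2) / (2 * s2^2) - 1/2
c(kl_norm(0, 1, 1, 1), kl_norm(1, 1, 0, 1))   # both equal 0.5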
$$
\begin{align*}
\text{KL}(P\vert\vert Q) & = \int p(x)\log\left(\frac{p(x)}{q(x)}\right) \,\text{d}x = | Kullback–Leibler divergence between two normal distributions
Note that the Kullback-Leibler divergence, which as you rightly point out is not symmetric, is
$$
\begin{align*}
\text{KL}(P\vert\vert Q) & = \int p(x)\log\left(\frac{p(x)}{q(x)}\right) \,\text{d}x = \int p(x)\log \left(\frac{1}{q(x)}\right)\,\text{d}x - \int p(x)\log \left(\frac{1}{p(x)}\right)\text{d}x\\
& = H(P,Q) - H(P),
\end{align*}
$$
where $H(P,Q)$ is the cross-entropy of $P$ and $Q$, and $H(P)$ is the entropy of $P$, or the cross-entropy of $P$ with itself. I'm not familiar with python, but I guess that entropy in your code computes $H(P, Q)$.
As you can see from its definition, $H(P, Q)$ is generally different from $H(Q, P)$, which is another way to see why in general $\text{KL}(P||Q)\neq\text{KL}(Q||P)$.
Now, the KL for two normal models $p(x) = N(\mu_1,\sigma_1^2)$ and $q(x) = N(\mu_2,\sigma_2^2)$ is
$$
\text{KL}(p||q) = \log \frac{\sigma_2}{\sigma_1} + \frac{\sigma_1^2 + (\mu_1 - \mu_2)^2}{2 \sigma_2^2} - \frac{1}{2}. \quad(*)
$$
Although there is a great deal of overlapping between $N(0, 1)$ and $N(1, 1)$, the two densities are different (see picture), thus we expect that both versions of KL are not equal to zero. The point is thus, are the two versions of KL equal or not?
Using (*), in this particular case, as you almost correctly conjectured, it turns out that the two KL's are equal, i.e.
$$\text{KL}(N(0,1)|| N(1,1)) = \text{KL}(N(1,1)|| N(0,1)) = 0.5$$ | Kullback–Leibler divergence between two normal distributions
Note that the Kullback-Leibler divergence, which as you rightly point out is not symmetric, is
$$
\begin{align*}
\text{KL}(P\vert\vert Q) & = \int p(x)\log\left(\frac{p(x)}{q(x)}\right) \,\text{d}x = |
53,987 | Is it a confounder on not? | As Noah points out in a comment, the unobserved confounder is indeed a confounder since it creates a back-door path between treatment and growth. However, the model as presented does not contain a causal path between treatment and growth. Dagitty, which is where I presume your figure is from, will tell you that you can adjust for dT and then treatment and growth are independent: no effect to measure.
If there were an arrow between treatment and growth then yes, the unobserved variable would confound the effect, and it would be enough to control for dT, see attached figure. | Is it a confounder on not? | As Noah points out in a comment, the unobserved confounder is indeed a confounder since it creates a back-door path between treatment and growth. However, the model as presented does not contain a cau | Is it a confounder on not?
As Noah points out in a comment, the unobserved confounder is indeed a confounder since it creates a back-door path between treatment and growth. However, the model as presented does not contain a causal path between treatment and growth. Dagitty, which is where I presume your figure is from, will tell you that you can adjust for dT and then treatment and growth are independent: no effect to measure.
If there were an arrow between treatment and growth then yes, the unobserved variable would confound the effect, and it would be enough to control for dT, see attached figure. | Is it a confounder on not?
As Noah points out in a comment, the unobserved confounder is indeed a confounder since it creates a back-door path between treatment and growth. However, the model as presented does not contain a cau |
53,988 | Is there any intuitive meaning to $\beta^T A \beta=1$? | Summary: The expression $\beta^\prime A\beta$ generalizes the usual squared Euclidean distance. The set of $\beta$ for which $\beta^\prime A \beta = 1$ is often a sphere, when considered from the correct perspective; and generally it is always a linear transformation of a cylinder defined over a sphere or hyperboloid. (A "sphere" in two dimensions is, of course, a circle, as mentioned in the question.) These form the set of all possible quadratic hypersurfaces centered at the origin of a vector space.
The function
$$Q_A: \beta\to\beta^\prime A \beta$$
is a quadratic form. By definition, a quadratic form on a vector space $V$ over a field of scalars $R$ is an $R$-valued function $Q$ on $V$ for which
$$Q(\lambda x) = \lambda ^2 Q(x)$$
for all vectors $x$ and scalars $\lambda$ and the function
$$(x,y) \to Q(x+y) - Q(x) - Q(y)$$
(defined on ordered pairs of vectors) is linear in both variables [Serre Chapter IV Definition 1]. The properties of matrix multiplication readily imply both of these properties of $Q_A.$
The second requirement is needed only when $1+1=0$ in the field $R.$ Because that rarely interests statisticians, I will assume from now on that this is not the case.
BTW, the first condition immediately shows there's nothing special about the "$1$" in the equation $Q_A(\beta)=1$ except that it is positive, because all level sets of positive value are just rescaled versions of this one.
The first reason for placing the question in this general setting is to point out that different matrices $A$ and $A^\prime$ can define the same quadratic form. The reason is evident when you write out the product in $Q_A.$ This is a combination of products of components of $\beta.$ For indices $i\ne j,$ the coefficient of $\beta_i\beta_j$ is $A_{ij} + A_{ji},$ which is the $(i,j)$ coefficient of the symmetric matrix $A + A^\prime.$ For example, these matrices all define the same quadratic form:
$$\pmatrix{0&-2\\0&0},\quad \pmatrix{0&0\\-2&0},\quad \pmatrix{0&1\\-3&0},\quad \pmatrix{0&-1\\-1&0},$$
because in all cases $Q_A((\beta_1,\beta_2)^\prime) = -2\beta_1\beta_2.$ Notice that in this example the set of vectors $\beta\in\mathbb{R}^2$ for which $Q(\beta) = 1$ comprises both branches of the hyperbola $\beta_1\beta_2 = -1/2.$
This should make it clear that
Whenever $A$ and $B$ are square matrices of the same size and $A+A^\prime = B+B^\prime,$ the quadratic forms $Q_A$ and $Q_B$ are the same.
Consequently, when the function $Q_A$ is the object of interest, (1) the details of the matrix $A$ are irrelevant and (2) we may always assume $A = A^\prime$ is symmetric.
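A quick numerical check of that consequence (a random 3x3 example, purely for illustration):
set.seed(17)
A <- matrix(rnorm(9), 3)
b <- rnorm(3)
c(t(b) %*% A %*% b, t(b) %*% ((A + t(A)) / 2) %*% b)   # identical values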
The second reason is that linear algebra offers a host of subtly related matrix decompositions. Recognizing that we're really dealing with a quadratic form strips away inessential characteristics of the problem, allowing us to focus on the relevant theorems and making it easier to see what $A$ is doing. In particular, the fundamental theorem about quadratic forms on finite-dimensional Real vector spaces is that they can always be diagonalized with a suitable orthogonal matrix. With a little post-processing (see the inv function in the code at the end of this post), this implies there exists a matrix $P$ (of the same size as $A$) for which
$$Q_A(\beta) = (P\beta)^\prime J (P\beta)$$
where $J$ is a diagonal matrix with only the values $\pm 1$ and $0$ on the diagonal. We can also arrange, if we wish, for the first $k$ values to be $1,$ the next $m$ values to $-1,$ and the remaining values to be $0.$ Sylvester's Law of Inertia asserts that this is possible and, no matter how it's done -- there usually are many matrices $P$ that diagonalize a form in this way -- you always obtain the same values of $k$ and $m,$ known as the "signature" of $Q_A.$
The columns of the matrix $P$ form a basis of $V.$ Any basis establishes a way to name each vector in $V$ uniquely (as a linear combination of the basis). In this basis, the formula for $Q_A$ is particularly simple. Writing $\alpha = P\beta$ for such a linear combination, Sylvester's Law says
$$Q_A(\beta) = \alpha_1^2 + \cdots + \alpha_k^2 - (\alpha_{k+1}^2 + \cdots \alpha_{k+m}^2).$$
The set of $\alpha$ for which $Q(\beta) = 1$ thereby describes a spheroid or hyperboloid in $V$. In most statistical applications, $Q_A$ is positive definite. This means $k=n$ (the dimension of $V$). Equivalently, $J$ is the identity matrix if and only if $Q_A$ is positive-definite. (We say $Q_A$ is positive semidefinite when $J$ has no values of $-1$ but might have some zeros on the diagonal.)
You should recognize this as the usual formula for squared Euclidean distance: it embodies the Pythagorean Theorem.
In other words,
when $Q_A$ is positive definite, $Q_A$ is the square of the usual Euclidean length (from the perspective of a suitable basis depending on $A$); and the function $(x,y)\to Q(x+y) - Q(x) - Q(y)$ is twice the usual dot product in that basis.
The level set $Q_A(\beta)=1$ in this case corresponds to the unit sphere, $\alpha_1^2 + \alpha_2^2 + \cdots + \alpha_n^2 = 1$ where again $\alpha = P\beta.$
In the original basis of $V$ (used for expressing $\beta$), the unit sphere has been distorted by the mapping $P^{-1}.$ It will be an ellipsoid.
To continue the previous example, the matrix $P$ can be taken to be
$$P = \frac{1}{\sqrt{2}}\pmatrix{-1&-1\\1&-1}.$$
You can readily compute
$$P^\prime\ (A + A^\prime)/2\ P = \pmatrix{1 & 0 \\ 0 & -1},$$
whose signature is $(1,-1).$ That is,
$$\begin{aligned}
-2\beta_1\beta_2 &= \beta^\prime A \beta = (P^\prime\beta)^\prime \pmatrix{1 & 0 \\ 0 & -1} (P^\prime\beta) = (P^\prime\beta)_1^2 - (P^\prime\beta)_2^2 \\
&= \frac{1}{2}(-\beta_1+\beta_2)^2 - \frac{1}{2}(-\beta_1-\beta_2)^2
\end{aligned}$$
for all vectors $\beta = (\beta_1,\beta_2)^\prime.$
To analyze an arbitrary $Q_A,$ then, you will want to find the matrix $P$ that diagonalizes the matrix $A+A^\prime,$ but even before you compute $P$ you know you are working with ordinary Euclidean geometry (when $A+A^\prime$ is positive definite) or, at worst, in a pseudo-Riemannian metric, of which the Minkowski space of the theory of Special Relativity is the most notable example (whose metric has signature $(3,-1)$).
As a demonstration of the computational practicality of this way of looking at quadratic forms, here is R code to diagonalize any square matrix. It generates a random square matrix A, diagonalizes it, and exhibits its diagonal form.
# Generate a random square matrix.
# set.seed(17)
n <- 5
A <- matrix(rnorm(n^2), n)
# Compute the symmetric representative of its quadratic form.
A <- (A + t(A))/2
# Diagonalize it.
obj <- eigen(A)
inv <- function(x) ifelse(x == 0, 0, 1/x)
P <- t(t(obj$vectors) * inv(sqrt(abs(obj$values))))
# Check the diagonalization.
zapsmall(t(P) %*% A %*% P)
When I ran it after setting the seed to 17 it output
[,1] [,2] [,3] [,4] [,5]
[1,] 1 0 0 0 0
[2,] 0 1 0 0 0
[3,] 0 0 1 0 0
[4,] 0 0 0 -1 0
[5,] 0 0 0 0 -1
The eigen function in R will cause the output to have first the 1's on the diagonal, then any zeros, then the -1's at the end.
Reference
Serre, J-P. 1973. A Course in Arithmetic. Springer-Verlag.
53,989 | Is there any intuitive meaning to $\beta^T A \beta=1$? | Note that, for any vectors $a$ and $b$ in an inner-product space such as $\mathbb R^d$, one possible inner-product can be defined as $$a^T b = a \cdot b = \|a\|\|b\| \cos \theta$$ where $\|\cdot\|$ is the Euclidean norm of the vector and $\theta$ is the angle between the two vectors. Therefore,
\begin{align}
\beta^T A\beta &= \beta^T(A\beta) \\
&= \beta \cdot (A\beta) \\
&= \|\beta\| \|A\beta \| \cos \theta
\end{align}
where $\theta$ now represents the angle between $\beta$ and $A\beta$. Then,
\begin{align}
\beta^T A\beta &= 1 \\
\|\beta\| \|A\beta \| \cos \theta &= 1 \\
\theta &= \arccos \left(\frac{1}{\|\beta\| \|A\beta \|}\right)
\end{align}
Hence, we are interested in the vectors $\beta$ such that the angle between $\beta$ and $A\beta$ is $\arccos \left(\frac{1}{\|\beta\| \|A\beta \|}\right)$.
More formally, let $A$ be the matrix representation of the linear operator $T : \mathbb R^d \to \mathbb R^d$ in the inner-product space $\mathbb R^d$ defined with the same inner product above. We are interested in the set of vectors $\beta$ such that $\beta$ and $T(\beta)$ have an angle $\theta$ between them.
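A small numeric sketch of this reading (the matrix and vector below are arbitrary examples, not taken from the question):
A <- matrix(c(2, 0.5, 0.5, 1), 2)        # an example symmetric positive-definite matrix
b <- c(1, 1)
b <- b / sqrt(drop(t(b) %*% A %*% b))    # rescale so that beta' A beta = 1
Ab <- drop(A %*% b)
theta <- acos(sum(b * Ab) / (sqrt(sum(b^2)) * sqrt(sum(Ab^2))))   # angle between beta and A beta
all.equal(theta, acos(1 / (sqrt(sum(b^2)) * sqrt(sum(Ab^2)))))    # TRUE: matches the arccos formula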
53,990 | XGBoost when P>>N | Data is king, so if it works in real life we can't argue with it.
Having said that, I agree with you it's bad practice and will usually not end well.
I can design a dataset where it would work though. Images for example are known for having correlation between close pixels, effectively reducing the number of features with information. Imagine for example a dataset of black balls on a white background and you need to classify images with a ball vs. no ball. In that case, most pixels will not have any information gain so the split will not happen in the tree. All the edge pixels will always be white, containing zero information. Combining that with regularization and image augmentation may leave you with a working model.
Just for my curiosity, do you know if the images are generally similar?
Another phenomenon worth mentioning is double descent. This is mostly a NN phenomenon, in which over-parameterized models with $P>>N$ achieve better performance as the number of parameters increases, but it has also been demonstrated in classical models like linear regression.
In summary it's possible, but if I had to do it (with XGBoost) I'd do the following:
Dimensionality reduction.
Strong regularization.
Most importantly - a robust, careful validation procedure.
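A minimal sketch of those three points with the xgboost R package (the feature matrix X and labels y are placeholders, and the parameter values are illustrative rather than tuned):
library(xgboost)
# X: n x p feature matrix with p >> n, y: 0/1 labels (placeholders for your data).
dtrain <- xgb.DMatrix(data = X, label = y)
params <- list(
  objective = "binary:logistic",
  eta = 0.05,              # small learning rate
  max_depth = 2,           # shallow trees
  subsample = 0.7,         # row subsampling
  colsample_bytree = 0.3,  # aggressive column subsampling when p >> n
  lambda = 10, alpha = 1   # L2 / L1 regularization
)
# Cross-validation with early stopping as a basic, honest validation procedure.
cv <- xgb.cv(params = params, data = dtrain, nrounds = 2000, nfold = 5,
             metrics = "auc", early_stopping_rounds = 50, verbose = 0)
cv$best_iteration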
53,991 | Interpreting logistic modelling and linear modelling results for the same formula | You have quite a lot of data, and very well-behaved data. In such a case, it does happen that the model fits from an ordinary and a logistic regression are close to each other, especially in the "middle" part of fitted probabilities, as are the p-values for the coefficients.
Note, though, that there are differences. The fitted probability as a function of Age is noticeably curved for sex = F in the logistic model, but linear in the OLS model (not surprising, of course, because of the logit link). And the confidence region for sex = M at high Ages exceeds 1 in the OLS, which is of course nonsensical.
The key issue is that nonsensical fits (or predictions, or extrapolations) larger than 1 or smaller than 0 can easily happen in OLS models if you have less well-behaved data, for instance if you have lots of 1s in the dependent variable for certain parameter constellations. This can happen more easily, and be harder to detect, in more complex models with more than two parameters.
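A quick simulated illustration of that failure mode (this is made-up data, not the data from the question):
set.seed(1)
n   <- 500
age <- runif(n, 18, 90)
y   <- rbinom(n, 1, plogis(-8 + 0.15 * age))   # binary outcome, steep logistic relationship
fit_ols   <- lm(y ~ age)
fit_logit <- glm(y ~ age, family = binomial)
range(fitted(fit_ols))    # OLS fitted "probabilities" typically fall outside [0, 1] here
range(fitted(fit_logit))  # logistic fitted probabilities always stay inside (0, 1)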
53,992 | Interpreting logistic modelling and linear modelling results for the same formula | why when linear modelling, which works best for continuous data gave very close results to logistic modelling even when I had a dichotomous response variable?
The logistic curve can be approximated with a linear function for a small range. See in the image below a plot of the logistic curve and how in a small range (in the shaded area) the logistic curve is approximately linear.
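The idea of that figure can be reproduced in a couple of lines of base R (a sketch; the centre and width of the range are arbitrary):
x0 <- 0.3                                    # centre of a narrow range (arbitrary)
x  <- seq(x0 - 0.5, x0 + 0.5, length.out = 101)
approx_lin <- plogis(x0) + dlogis(x0) * (x - x0)   # tangent-line (first-order Taylor) approximation
max(abs(plogis(x) - approx_lin))             # tiny over this range, so a straight line fits almost as well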
The effect is true for any non-linear function (not just with logistic curves). When the range is very small, and non-linear functions start to be approximately linear, then many different functions can fit the same data reasonably well.
A similar situation is this question (Coronavirus growth rate and its possibly spurious resemblance to vapor pressure model) where data follows an approximately exponential model in a small range and many different models were found to fit the data well (among them an unreasonable fit with a vapour pressure model that has no physical relation with the mechanics of the original problem).
Or see here: Why would you perform transformations over polynomial regression?
53,993 | Calculate standard deviation with only the averages from a number of samples | To proceed in general, suppose we have $n$ samples that each contain $m$ values, generated as IID variables from an underlying superpopulation with mean parameter $\mu$ and standard deviation parameter $\sigma$. Now, suppose we define the quantities:
$$\bar{X}_k \equiv \frac{1}{m} \sum_{j=1}^m X_{k,j}
\quad \quad \quad
\bar{X}_* \equiv \frac{1}{n} \sum_{i=1}^n \bar{X}_k
\quad \quad \quad
S_{*}^2 \equiv \frac{1}{n-1} \sum_{i=1}^n (\bar{X}_k - \bar{X}_*)^2.$$
The values $\bar{X}_k$ are the sample means of the individual samples $k=1,...,n$ and the values $\bar{X}_*$ and $S_*^2$ are the sample mean and sample variance of the set of sample means. Using standard moment rules, it can easily be shown that:
$$\mathbb{E}(\bar{X}_k) = \mu
\quad \quad \quad \quad \quad
\mathbb{V}(\bar{X}_k) = \frac{\sigma^2}{m}
\quad \quad \quad \quad \quad
\mathbb{E}(S_*^2) = \frac{\sigma^2}{m}.$$
Consequently, you can obtain an estimator for the standard deviation parameter as $\hat{\sigma} = \sqrt{m} \cdot S_*$. This estimator is biased, but it can be bias-corrected if you want (see here), based on the normality of the sample means (by appeal to the CLT). The bias-corrected estimator is:
$$\hat{\sigma}_\text{UB} = \sqrt{\frac{(n-1)m}{2}} \cdot \frac{\Gamma(\tfrac{n-1}{2})}{\Gamma(\tfrac{n}{2})} \cdot S_*.$$
In your problem you have $n=100$ and $m=1000000$ which gives the unbiased estimator:
$$\hat{\sigma}_\text{UB} = \sqrt{49500000} \cdot \frac{\Gamma(49.5)}{\Gamma(50)} \cdot S_* = 1002.528 \cdot S_*.$$
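In R this is a one-liner (sketch; xbar is assumed to be the vector of your 100 stored averages):
n <- 100; m <- 1e6
s_star    <- sd(xbar)                     # SD of the 100 sample means
sigma_hat <- sqrt(m) * s_star             # simple (slightly biased) estimator
# Bias-corrected version; lgamma() avoids overflow in Gamma(49.5)/Gamma(50).
correction <- sqrt((n - 1) * m / 2) * exp(lgamma((n - 1) / 2) - lgamma(n / 2))
correction                                # approximately 1002.5, matching the value above
sigma_hat_ub <- correction * s_star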
53,994 | Linear relationship seems to depends from one value: would bootstraping help? | First off:
If I extend this argument, would removing other value also impact the relationship?
Almost certainly yes. The regression line slope depends on all observations, so removing any point will change the estimate, and therefore also the $p$ value.
(It is possible, but unlikely, that there are data points with no influence on the coefficient estimate. Removing such a point will not change the parameter estimate, but will change the $p$ value slightly, since this depends on the parameter estimate and the number of observations.)
Removing a data point may or may not move your $p$ value above or below the magic threshold of $p < .05$.
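One way to see this directly is to compute, for every observation, how much the slope would change if that observation were dropped (a sketch; fit and dat stand for your fitted model and data, and the predictor is assumed to be named x):
fit  <- lm(y ~ x, data = dat)
infl <- dfbeta(fit)                        # per-observation change in each coefficient if dropped
head(infl[order(-abs(infl[, "x"])), ])     # observations with the largest effect on the slope
worst <- which.max(abs(infl[, "x"]))
summary(update(fit, data = dat[-worst, ])) # refit without the single most influential point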
which statistical operation would help me conclude if the relationship that I observe here is truly significant or not?
Statistical significance depends on the observed data (and a correctly specified model). There is no "true" significance. Either the null hypothesis holds (e.g., there is no relationship), or it does not (e.g., there is a relationship) - and many statisticians will say that the null hypothesis never holds in the first place. All we can say given data (and a model) is whether the data are consistent with a null hypothesis or not.
Thus, if you have no reason to suspect a data error, you should take your $p$ value as it is. It would be good to be cautious about any conclusions you draw and note explicitly that significance depends on a single data point. Also, note that the difference between the slopes you estimate with and without this one data point will probably in itself not be significant (Gelman & Stern, 2006).
Would boostraping help me ?
Bootstrapping is just another way of looking at given data. It will give slightly different results. You may get $p<.05$ with the full dataset with a bootstrap - or not, and the same if you omit this observation. (And as above, a bootstrapped and a parametric parameter estimate may not be significantly different.) If all you are interested in is significance, then I would not worry overly about differences between a parametric and a bootstrap analysis.
Edge cases like these are one main reason to be careful not to over-interpret statistical significance. If you want more precision in your estimates, there is no way around collecting (a lot) more data.
53,995 | Interpreting interaction effects for categorical reference group in regression | First, with the default R treatment coding of your categorical predictors, the individual coefficients for things like Story Vision are their associations with outcome when all other predictors are at reference (categorical predictors) or 0 levels (continuous predictors, like Brand Attitude seems to be). So the Story Vision coefficient is specifically for cases having Style Amiable. Similarly, the Style Driver coefficient is for cases having Story Consumer. The model intercept (which I don't see reported) is the expected outcome for a combination of Style Amiable,Story Consumer, and Brand Attitude = 0.
Second, although the interaction coefficient for Story Product : Style Analytical has a numerical value of 1.063, the high associated standard error (0.727) indicates that you can't reliably distinguish its value from 0. Don't over-interpret such values.
Third, your data set is probably too small to accommodate all of those interaction terms. In Model 4 you have 108 cases and 16 regression coefficients to estimate, for a ratio of less than 7/1. You typically need ratios on the order of 15/1 to avoid overfitting with this type of data. Your Model 3 (without interactions) has a more reliable 15/1 ratio. The slightly higher R-squared in Model 4 is probably from overfitting the data; that model that might not work well on a new data sample.
Fourth, you have shown that Brand Attitude is pretty strongly associated with outcomes. When you have enough data to support a model more complicated than Model 3, you might consider evaluating interactions of that predictor with the Style and/or Story categories, which might be more important than the relatively weak interactions seen between the various Style and Story categories.
53,996 | Interpreting interaction effects for categorical reference group in regression | @EdM makes valid points - read those first. Just for reference, you could get the effects of interest along with their confidence intervals using the emmeans package. For example:
library(emmeans)
# Toy data: add a dummy categorical variable to the Iris dataset
iris$Style <- rep(LETTERS[1:3], 50)
iris
Sepal.Length Sepal.Width Petal.Length Petal.Width Species Style
1 5.1 3.5 1.4 0.2 setosa A
2 4.9 3.0 1.4 0.2 setosa B
3 4.7 3.2 1.3 0.2 setosa C
4 4.6 3.1 1.5 0.2 setosa A
5 5.0 3.6 1.4 0.2 setosa B
...
Fit a model:
fit <- glm(data= iris, Sepal.Length ~ Species * Style)
summary(fit)
...
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 5.05294 0.12486 40.470 < 2e-16 ***
Speciesversicolor 0.71765 0.17657 4.064 7.97e-05 ***
Speciesvirginica 1.70331 0.17931 9.499 < 2e-16 ***
StyleB -0.04118 0.17657 -0.233 0.816
StyleC -0.10294 0.17931 -0.574 0.567
Speciesversicolor:StyleB 0.28934 0.25166 1.150 0.252
Speciesvirginica:StyleB -0.26801 0.25166 -1.065 0.289
Speciesversicolor:StyleC 0.35588 0.25166 1.414 0.160
Speciesvirginica:StyleC -0.08272 0.25359 -0.326 0.745
...
Get marginal means:
emmeans(fit, spec= ~Style|Species)
Species = setosa:
Style emmean SE df lower.CL upper.CL
A 5.05 0.125 141 4.81 5.30
B 5.01 0.125 141 4.76 5.26
C 4.95 0.129 141 4.70 5.20
Species = versicolor:
Style emmean SE df lower.CL upper.CL
A 5.77 0.125 141 5.52 6.02
B 6.02 0.129 141 5.76 6.27
C 6.02 0.125 141 5.78 6.27
Species = virginica:
Style emmean SE df lower.CL upper.CL
A 6.76 0.129 141 6.50 7.01
B 6.45 0.125 141 6.20 6.69
C 6.57 0.125 141 6.32 6.82
This shows, for example, that A is the highest Style within Species setosa, although A, B, and C are not convincingly different from each other.
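If the pairwise differences themselves are of interest, the same object can be passed to pairs(); a short follow-up sketch:
# Pairwise comparisons of Style within each Species, with a multiplicity adjustment.
emm <- emmeans(fit, spec = ~Style | Species)
pairs(emm, adjust = "tukey")
# A joint test of the Species:Style interaction is also available in one step:
joint_tests(fit)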
53,997 | Can RandomForest multiplicatively combine features? | A tree can form a lot of splits of the form $A>a_i$ or $B>b_i$. Enough of such splits in a single tree can give you a step function that can then approximate just about any function of $A$ and $B$. In RF you usually limit how deep each tree gets, but instead average over lots of trees, which in combination again can give you a step function in $A$ and $B$ that can then approximate just about any function of $A$ and $B$.
However, if your trees are, say, 2 deep, that just gives you a single step in your step function per tree, so it takes a lot of trees before you approximate a multiplicative relationship such as $A\times B$ really well. If you give the RF the feature $A\times B$ instead, it needs half the depth for the same approximation. I.e., the functional relationship becomes easier for the model to fit.
Nevertheless, if you don't have all that much training data, that will limit how smooth the output of the RF can be no matter whether it gets $A$ and $B$, or additionally $A \times B$ as features.
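A small simulation sketch of this point with the randomForest package (settings and sample size are illustrative only):
library(randomForest)
set.seed(42)
n <- 2000
A <- runif(n); B <- runif(n)
dat <- data.frame(A, B, AB = A * B, y = A * B + rnorm(n, sd = 0.05))
rf_raw  <- randomForest(y ~ A + B,      data = dat)      # must build A*B out of axis-aligned splits
rf_prod <- randomForest(y ~ A + B + AB, data = dat)      # gets the product as a ready-made feature
c(raw = tail(rf_raw$mse, 1), with_product = tail(rf_prod$mse, 1))  # OOB MSE, usually lower with AB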
53,998 | Mistake in Casella & Berger on page 207? | That page describes identical and independently distributed variables $X_i$.
You could change that quoted piece in more ways. Probably the following would be better
Change
Under the random sampling model each $X_i$ is an observation on the same variable and each $X_i$ has a marginal distribution given by $f(x)$.
Into
Under the random sampling model each $X_i = x_i$ is an observation on the same population and each $X_i$ has a marginal distribution given by $f(x)$.
See also the piece on page 209
The random sampling model in definitions 5.1.1 is sometimes called sampling from an infinite population. Think of obtaining the values of $X_1, \dots, X_n$ sequentially. First, the experiment is performed and $X_1 = x_1$ is observed. Then, the experiment is repeated and $X_2 = x_2$ is observed. The assumption of independence in random sampling implies that the probability distribution for $X_2$ is unaffected by the fact that $X_1 = x_1$ was observed first. „Removing” $x_1$ from the infinite population does not affect the population, so $X_2 = x_2$ is still a random observation from the same population.
In that last sentence you see both the changes come together. We speak about the small letter when referring to the observation. So it is not $X_2$ that is called an observation but the observation that $X_2 = x_2$ is the observation. And it is not called 'an observation on the same variable' but 'an observation from the same population'.
Personally I think that it is not so bad to call the $X_i$ observations as well and the more important change in the quote is changing 'variable' into 'population'.
The $X_i$ can be seen as random variables that describe a random observation. Say, you could use it in a sentence as "we describe the random observations of the Donkey's walking speed with variables $X_1, X_2,\dots,X_n$". The small letter $x_i$ is more like the realisation of the observation rather than the observation itself. The small letter $x_i$ is not strictly the 'observation' itself but it is more like the 'observed value'. You see this also in the text with sentences like
$X_1 = x_1$ is observed
The $x_1$ is not the 'observation' itself (it is not the act) but it is the 'observed value', or it is 'what is observed'.
53,999 | Mistake in Casella & Berger on page 207? | I think the source of confusion comes from the word observation and its meaning in various contexts. I have to confess that this book was not my first statistical book read and perhaps this helped me to handle it more easily.
The paragraph from Page 207 I understand it as:
Under the random sampling model each $X_i$ is an observable distinct random draw produced by the same phenomenon (statistical population described by an assumed variable) and each $X_i$ has a marginal distribution given by $f(x)$.
In different wording we can say that the random sampling model is the assumption that all random variables $X_i$ which are distinct have the same distribution because they are occurrences of the same phenomenon. Because they are produced by the same process they are observations of the same population. They still are distinct because they have independent error components.
This interpretation of word observation is consistent with surrounding paragraphs. For example:
The random sampling model describes a type of experimental situation in which the variable of interest has a probability distribution described by $f(x)$. If only one observation $X$ is made on this variable, then probabilities regarding $X$ can be calculated using $f(x)$.
and continues
In most experiments there are $n \gt 1$ ... repeated observations made on the variable, the first observation is $X_1$ ...
And immediately after your cited paragraph we have:
Furthermore the observations are taken in such a way that the value of one observation has no effect on or relationship with any other observations
The last paragraph is illuminating since it draws the distinction between the two components of an observation: its value / outcome / data point and its random variable. The paragraph states that the value of one observation does not influence how other values are produced. It also states that the random variables which model each observation do not have any conditional leak of any sort.
In all those paragraphs other than the last one, they don't talk about values or outcomes. It talks only about how one can choose to model some random variables if we assume they are produced by the same phenomena. One cannot talk about any statistical property of a value. The value of an observation is dead from a probabilistic point of view. The associated random variable which produced that value can be used for that.
To me, a natural way to understand the word observation is as an unique occurrence, measurement or snapshot of some source which has an value where random noise is incorporated (observation result) which can be seen (if observable) or not (if hidden) and has an associated random variable (implicitly a probability distribution, sample space and so on) which describes the process. Sometimes I shortcut the terms and use observation to point to the random variables (like here in the random sample definition) and sometimes to point out the pieces of data (for example when we have a sample of that data). Somehow, it is still clear in my mind and well separated.
Later edit (I forgot to comment on your interpretation):
Under the random sampling model each $x_i$ is an observation on the same variable and each $X_i$ has a marginal distribution given by $f(x)$.
This looks wrong to me since, if each $x_i$ is the outcome of the same variable ($X$ perhaps, certainly not any of the $X_i$, since there is more than one variable there), then on which basis do the $X_i$ have the same marginal?
54,000 | GLM tests involving deviance and likelihood ratios | The confusion probably comes from the fact that there are three models involved, and the term "deviance" refers to twice the log or the likelihood ratio between two of them.
The models are:
Null model (usually a model with only an intercept term, no influence of explanatory variables on response),
GLM of interest, modelling the response by a linear combination of the explanatory variables (connected by the link function), and
saturated model, in which the expected value of the response can freely depend on the values taken by the explanatory variables.
The residual deviance $D$ is defined as twice the log of the likelihood ratio between the saturated model and the GLM. The null deviance $D_0$ is twice the log of the likelihood ratio between the saturated model and the null model. From this it follows that $D-D_0$ is twice the log of the likelihood ratio between the GLM and the null model, and in fact you can compare any two models of different complexity nested in each other (i.e., where all parameters/explanatory variables of the less complex model also occur in the more complex model) by using the difference of their deviances (note that the log of the likelihood ratio is in fact a difference between log-likelihoods, and this means that if you compute a difference between deviances, the terms belonging to the saturated model cancel out).
All these statistics as logs of likelihood ratios are $\chi^2$-distributed under standard assumptions, with degrees of freedom as the difference between the numbers of parameters of the involved models.
To your questions:
Yes, this tests the null hypothesis that there is no influence of the explanatory variables at all.
Yes, this compares the fitted GLM with the saturated model (i.e., a model that is maximally flexible to fit the response from the explanatory variables).
In principle you could have such a test, but this doesn't involve the GLM to be fitted, and is therefore not normally of interest when fitting a GLM, and therefore not usually taught (I wouldn't say that this is never of interest; it may well, in exceptional situations).
The deviance is (twice) the log of a likelihood ratio, therefore tests based on deviances are in fact likelihood ratio tests, and all such likelihood ratio tests can be written as tests using the difference of deviances for different models (see above).
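A short R sketch of this equivalence on simulated data:
set.seed(1)
x <- rnorm(200)
y <- rbinom(200, 1, plogis(0.5 * x))
fit0 <- glm(y ~ 1, family = binomial)    # null model
fit1 <- glm(y ~ x, family = binomial)    # GLM of interest
lr <- deviance(fit0) - deviance(fit1)    # D0 - D, the likelihood-ratio statistic
lr
anova(fit1, test = "Chisq")              # reports the same deviance drop for the x term
pchisq(lr, df = df.residual(fit0) - df.residual(fit1), lower.tail = FALSE)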