idx | question | answer
---|---|---
52,601 | Linear regression performing better than random forest in Caret | Check out caret's function findLinearCombos and run it on your data. The object returned is a list; its second element ($remove) is a vector of column indices which can safely be removed from your data set, since those columns are linear combinations of other columns (which, now that I think about it, makes total sense: you only had 500 observations and, as a result, your model was totally over-specified).
When you pass in your data frame the resulting vector has a thousand elements. Basically, you can go right ahead and delete columns 501 to 1500.
Do that from the get-go and try re-building those models.
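For concreteness, a minimal sketch of that step (my illustration, not part of the original answer; it assumes your data frame is df with the response in the first column):

library(caret)

x <- df[, -1]                     # predictors only (response assumed to be column 1)
combos <- findLinearCombos(x)     # returns a list; $remove holds the column indices to drop
if (length(combos$remove) > 0) {
  x <- x[, -combos$remove]        # drop columns that are linear combinations of others
}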
A couple quick side notes:
If you include the 'data' argument, then only refer to the columns by their names in the formula (i.e. write age ~. instead of data$age ~.).
If you plan on doing PCA then you might want to consider setting it up via the preProcess argument in the train function (you can tweak the variance threshold via the preProcOptions argument of trainControl). (Since you're already using caret anyway, you might as well have it handle your pre-processing, too; it gives you one less thing to worry about.)
By the way, try checking out this code.
library(caret)

prepped <- preProcess(x=df[, -1], method=c('center', 'scale', 'pca'),
                      thresh=0.9999)   # keep enough components to explain 99.99% of the variance
pca <- predict(prepped, df[, -1])      # scores of the retained principal components
pca$age <- df$age
fit <- lm(age ~ ., data=pca)           # lm, not ln: ordinary linear regression on the scores
num.unique <- length(unique(df$age))
cols <- rainbow(num.unique)[factor(df$age)]
plot(fit, col=cols)                    # regression diagnostics, points coloured by age
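If you do let train() handle the pre-processing as suggested above, it could look roughly like this (again my sketch; df, age and the 0.95 threshold are assumed for illustration):

ctrl <- trainControl(method = "cv", number = 10,
                     preProcOptions = list(thresh = 0.95))   # PCA variance threshold
fit <- train(age ~ ., data = df, method = "lm",
             preProcess = c("center", "scale", "pca"),
             trControl = ctrl)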
52,602 | Etymology of "cluster" in the context of cluster analysis | "Cluster of rocks", "cluster of islands", "cluster of factories" etc. can easily be traced back to the 19th century (and probably much longer). Of course, statistics early on started to look for a way to formalize this. So good luck, you will likely need to walk to a lot of libraries (the physical kind, not software libraries)!
Don't look at "machine learning". ML did not invent cluster analysis, and most cluster analysis research happens outside the ML community.
The term "cluster analysis" dates back to the 1930s statistics; but you can imagine that "cluster" in the notion above was used much earlier - but cluster analysis attempts at discovery exactly this notion of "clusters".
Many of the early usage was on clustering observations in nature, such as species; either by location or by similarity. No computers involved: it probably wasn't until 1957 when the first algorithms for "cluster analysis" arrived (before that, cluster analysis was "pen & paper")
P. H. Sneath: The application of computers to taxonomy. Journal of General Microbiology 17(1), 1957, pp. 201–226.
"Cluster of rocks", "cluster of islands", "cluster of factories" etc. can easily be traced back to the 19th century (and probably much longer). Of course statistics early on started to look for a way to formalize this. So good luck, you will likely need to walk to a lot of libraries (the physical one, not the software library)!
Don't look at "machine learning". ML did not invent cluster analysis; and most cluster analysis research happens outside the ML community.
The term "cluster analysis" dates back to the 1930s statistics; but you can imagine that "cluster" in the notion above was used much earlier - but cluster analysis attempts at discovery exactly this notion of "clusters".
Many of the early usage was on clustering observations in nature, such as species; either by location or by similarity. No computers involved: it probably wasn't until 1957 when the first algorithms for "cluster analysis" arrived (before that, cluster analysis was "pen & paper")
P. H. Sneath: The application of computers to taxonomy. In:Journal of general microbiology. 17(1), 1957, S. 201–226. | Etymology of "cluster" in the context of cluster analysis
"Cluster of rocks", "cluster of islands", "cluster of factories" etc. can easily be traced back to the 19th century (and probably much longer). Of course statistics early on started to look for a way |
52,603 | Etymology of "cluster" in the context of cluster analysis | According to the Oxford Dictionary, the word cluster is derived from the Old English word 'clyster', was "probably related to clot [or clott]", and derives from the Germanic 'klotz'.
52,604 | What is a multivariate logistic regression | There is no reliable answer to your question in terms of standard usage in the medical literature. The paper cited by @DirkHorsten, in a comment on another answer here, examined articles published over the course of a year in a single high-quality public health journal that presumably has statistical expertise in its reviewing practices. Sometimes the word "multivariate" represented longitudinal data, sometimes it simply represented multiple predictors in a model with only one outcome variable and no mixed effects.
I'm afraid that you will, at least for now, have to read each paper to know what the authors meant by "multivariate."
52,605 | What is a multivariate logistic regression | In statistics, multivariate and multiple mean two different things altogether. In a regression model, "multiple" denotes several predictors/independent variables. On the other hand, "multivariate" is used to mean several (2 or more) responses/dependent variables. To this end, multivariate logistic regression is a logistic regression with more than one binary outcome. For example, including both HIV status (positive or negative) and condom use (yes or no) as responses/outcomes in the same logistic regression model. Both responses are binary (hence logistic regression; probit regression can also be used), and more than one response/dependent variable is involved (hence multivariate). NOTE: in multivariate analysis, there should be some correlation between the responses used in the model. Otherwise, separate logistic regression models should be fitted for each response. In the above example with HIV status and condom use as dependent variables, there should be some within-subject correlation between HIV status and condom use.
As opposed to multivariate logistic regression, a multiple logistic regression is a logistic regression with only one response but several predictors.
For example, predicting HIV status (positive or negative) using the number of sexual partners and the practice of safe sex as possible independent variables. Here, only one response is involved (HIV status). However, there are two or more (in this case only two) predictors/independent variables, namely (1) the number of sexual partners and (2) the practice of safe sex.
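A purely illustrative sketch of this multiple (single-response) case in R (the names hiv_status, n_partners, safe_sex and the data frame dat are my own invention, not from the answer):

# Multiple logistic regression: one binary response, several predictors.
fit_multiple <- glm(hiv_status ~ n_partners + safe_sex,
                    data = dat, family = binomial)
summary(fit_multiple)   # one coefficient per predictor, a single outcome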
52,606 | What is a multivariate logistic regression | 'Standard logistic regression' (part of generalised linear models)
The logistic regression can be the 'standard' logistic regression with fixed coefficients, so in the univariate case (for simplicity I take one explanatory variable, but the reasoning also holds in the multivariate case), the logistic regression tries to predict the probability of 'success' conditional on a given value of the explanatory variable: $P(y=1 \mid X=x)=\frac{1}{1+e^{-(\beta_1 x + \beta_2)}}$.
If you assume that your coefficients are fixed then you have the logistic regression as explained by @Enrique (+1).
Mixed effects logistic regression (part of generalised mixed effect models)
But in some cases it may be interesting not to assume fixed $\beta_i$, e.g. because you think that for each participant in a survey the intercept is different (see an example in the case of a linear model in my answer on How to account for participants in a study design?). If you assume that e.g. $\beta_2$ is random, then you can estimate these coefficients for this 'mixed effects logistic regression model'.
The estimation of the parameters $\beta_i$ is a bit more complicated but implemented in most statistical software. The interpretation of the estimated parameters is harder, however.
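A minimal sketch of the two variants in R (my illustration; it assumes a data frame dat with binary outcome y, predictor x, and a grouping factor participant, and uses lme4 as one possible implementation):

library(lme4)

# 'Standard' logistic regression with fixed coefficients:
fit_fixed <- glm(y ~ x, data = dat, family = binomial)

# Mixed-effects logistic regression with a random intercept per participant:
fit_mixed <- glmer(y ~ x + (1 | participant), data = dat, family = binomial)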
52,607 | What is a multivariate logistic regression | You may want to take a look at "Bahadur Model".
In the Bahadur model, "multivariate" in "multivariate logistic regression" means multiple binary dependent variables, which may be correlated at second or higher order. See the references by Verbeke and Molenberghs for a description of this model.
Best,
Rolando
52,608 | What is a multivariate logistic regression | Multivariate logistic regression is like simple logistic regression but with multiple predictors. Logistic regression is similar to linear regression, but you use it when your response variable is binary. This is common in medical research because with multiple logistic regression you can adjust for confounders. For example, you may be interested in predicting whether or not someone will develop a disease based on exposure to some substance. In that case you may use a simple logistic regression model, but it may be that the disease has no relation to the substance, only to age. In that case you can include both substance exposure and age in the model, so you can analyze whether age is a confounder, i.e., whether the disease is correlated with age but not with the substance. You can also use multiple logistic regression to increase your predictive power by adding more predictors instead of just using one.
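A hypothetical sketch of that comparison in R (the names disease, exposure, age and the data frame dat are invented for illustration and are not part of the answer):

fit_crude    <- glm(disease ~ exposure,       data = dat, family = binomial)
fit_adjusted <- glm(disease ~ exposure + age, data = dat, family = binomial)

exp(coef(fit_crude))      # crude odds ratios
exp(coef(fit_adjusted))   # adjusted odds ratios; a large change in the exposure OR suggests confounding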
52,609 | Combining principal component regression and stepwise regression | Regression based on principal components analysis (PCA) of the independent variables is certainly a way to approach this problem; see this Cross Validated page for one extensive discussion of pros and cons, with links to further related topics. I don't see the point of the regression you propose after choosing the largest components. The "reconstructed" independent variables might suffer from being too highly dependent on the particular sample on which you based the model, and stepwise selection is generally not a good idea. Cross-validation would be a better way to choose the number of components to retain, finding the number of components that minimizes cross-validation error.
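For concreteness, here is one way to pick the number of components by cross-validation in R (a sketch under my own assumptions: a data frame df with response y and the 5 numeric predictors; caret's "pcr" method treats the number of components as its tuning parameter):

library(caret)

ctrl <- trainControl(method = "repeatedcv", number = 10, repeats = 5)
fit_pcr <- train(y ~ ., data = df, method = "pcr",
                 preProcess = c("center", "scale"),
                 tuneLength = 5,          # consider up to 5 components
                 trControl = ctrl)
fit_pcr$bestTune                          # number of components with the lowest CV error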
In your situation, with only 5 predictors you might be just as well served by a standard linear model. Unless you have extremely high correlations among some of your variables, you are unlikely to have the numerical instability issues that can arise in extreme cases. (And if you do have two very highly correlated predictors, you should consider using your knowledge of the subject matter rather than an automated approach to choose one.) Paying attention to model diagnostics will help determine whether the linear model is reasonable.
A standard regression model provides easier-to-interpret coefficients and might be easier to explain to others than PCA. For predictions from a linear model you should consider including all 5 independent variables (even those that aren't "statistically significant"), both because of the limitations of stepwise selection and because the relations of some predictors to the dependent variable will differ if other predictors are removed.
If you have very high co-linearity in a standard linear regression then it should show up as large standard errors for the corresponding coefficients, and you might consider approaches noted here like ridge regression to get useful information from all your predictors without overfitting. Ridge regression can be considered a continuous version of the PCA-regression approach, where principal components are differentially weighted instead of being completely either in or out of the final model; see Section 3.5 of Elements of Statistical Learning.
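And a minimal ridge sketch (again my own assumptions: a numeric predictor matrix X and a response vector y; in glmnet, alpha = 0 gives the ridge penalty and cv.glmnet chooses the penalty weight by cross-validation):

library(glmnet)

cvfit <- cv.glmnet(X, y, alpha = 0, standardize = TRUE)   # alpha = 0 -> ridge penalty
coef(cvfit, s = "lambda.min")                             # coefficients at the CV-chosen penalty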
For your second and third questions:
The first page I linked above does a pretty good job of addressing your second question. Yes, choosing a limited number of principal components can help in reducing the problems associated with co-linearity, as the co-linear variables will tend to enter the same principal components together. Two warnings: the predictors should be standardized so that differences in scales don't drive the construction of the principal components, and there's no assurance that the components that capture the greatest variation in the predictors will be those most closely related to the dependent variable.
With respect to your third question, a stepwise approach is inappropriate, as you recognize. I don't see a reason why you couldn't include interaction terms among your selected principal components in a regression, but they would be extremely hard to interpret. That's another reason why I would lean here toward working with the original independent variables rather than with their transformations into principal components.
You seem very interested in using PCA for this predictive model, but remember that it's easy to get fixated on a particular approach. You are in a very good position to compare several approaches, combined with appropriate cross-validation or bootstrapping techniques, to see which works best for your particular needs. If that ends up being PCA that's good, but don't dismiss the other possibilities out of hand.
52,610 | Bootstrapping won't always return population statistics - so why say it does? | Heuristically, the motivation behind the bootstrap is that, given a large sample, your sample should have approximately the same distribution as your population. If your sample is distributed approximately like your population, then re-sampling from your sample and calculating your statistic should be approximately the same as re-sampling from your population and calculating your statistic. Of course, it will not be exactly the same, as your sample is not distributed exactly the same as your population.
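A quick simulation makes that heuristic concrete (an illustrative sketch of my own, not part of the original answer; the exponential population and the sample size are arbitrary choices):

set.seed(42)
pop <- rexp(1e6, rate = 1)                 # a skewed "population"
x   <- sample(pop, 200)                    # one reasonably large sample

# Resample the population vs. resample the sample (the bootstrap):
sampling_dist <- replicate(5000, mean(sample(pop, 200)))
boot_dist     <- replicate(5000, mean(sample(x, 200, replace = TRUE)))

c(sd(sampling_dist), sd(boot_dist))        # the two spreads should be close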
So in your example, the issue is that you have drawn a sample that looks very different than your population, and thus your statistics calculated from resampling from your sample will look very different than statistics calculated from resampling from your population.
Despite your example, bootstrap is still a valid procedure because for large samples, large deviations in the distribution of your sample vs. the distribution of your population become less and less likely.
To say that "bootstrapping returns the population statistic" does not make sense.
52,611 | Bootstrapping won't always return population statistics - so why say it does? | First, bootstrap cannot remedy a problem of an unrepresentative original sample. Thus I agree with @CliffAB.
The chances of randomly drawing a sample that is unrepresentative diminish as the sample size grows. If there were just one coin throw, the outcome would always be "crippled": one element (either head or tail) of the two (head, tail) that are in the population would be completely absent from the sample. Meanwhile, as @justanotherbrain notes in the question, the chances of getting 10 heads out of 10 coin throws are just below $10^{-3}$, which is much better than when you have only one throw. @whuber correctly notes that bootstrap properties are asymptotic. I will add that, under certain conditions, the bootstrap is more efficient than standard estimators, e.g. achieving a convergence rate of $n^{3/2}$ rather than $n^{1/2}$. (I will not expand on these conditions here; a detailed treatment is given in Hall, "The Bootstrap and Edgeworth Expansion" (1992), which is a whole book.)
But let me provide another perspective on why the bootstrap is generally relevant. Sorry if it does not address your question directly enough.
Bootstrap is useful for assessing whether an estimator (a formula) works alright. The idea is as follows.
1. Consider the original sample (of $n$ elements) as "a population".
2. Draw an $n$-element bootstrap sample from the original sample.
3. Apply the estimator (the formula) to the bootstrap sample to obtain its realization (a value).
4. Iterate 2. and 3. many times, saving the realization from each iteration.
5. The realizations from the many iterations will form an empirical distribution. Its characteristics (mean, variance, whatever else) will be accessible to you.
Since you consider the original sample as "a population", you know the true characteristics of that population. You can then look at the empirical distribution (based on the many bootstrap realizations) from point 5 and see how it compares to the actual population characteristic (a single value) that the estimator was supposed to estimate.
You can thus get an idea of whether your estimator (a formula) is biased, how large its variance is, etc. All of that is valuable information when it comes to assessing the properties of an estimator. Even better, the original sample should normally be fairly representative (contrary to your example) of the actual population of interest. Hence, you can extend your conclusions from the bootstrap experiment to the original problem of interest (by analogy).
For example, if you see that the estimator tends to underestimate the population characteristic by $b$ units ($b$ stands for bias), you would add $b$ to the estimate based on the original sample so as to remove the bias. That way you would get an unbiased estimate of the characteristic that you were after in the first place. Thus you would fruitfully utilize what you have learned about the estimator when bootstrapping.
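A small R sketch of the procedure above and of that bias correction (an illustration under my own assumptions; the deliberately biased estimator here is the maximum-likelihood variance, which divides by $n$ instead of $n-1$):

set.seed(1)
x <- rnorm(50, mean = 0, sd = 2)                    # the original sample, treated as "a population"
theta_hat <- function(s) mean((s - mean(s))^2)      # deliberately biased variance estimator

B <- 2000
boot_vals <- replicate(B, theta_hat(sample(x, replace = TRUE)))   # steps 2-4

bias_hat        <- mean(boot_vals) - theta_hat(x)   # estimated bias of the estimator
theta_corrected <- theta_hat(x) - bias_hat          # bias-corrected estimate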
But in the end I have to repeat that the bootstrap is not a remedy for an unrepresentative original sample, as your example illustrates.
52,612 | When does it make sense to use Cross Validation? | Here are 2 different scenarios where cross-validation can be used.
1) You want to approximate your model's generalization error (how well it will do on inputs it hasn't seen before). Cross-validation can tell you that because it trains on one set of data, and tests on the other set of data. The error on the test set is representative of the generalization error.
This is important because it would be trivial to get 100% on your training set - just create an n-way if-statement for your n training samples. That is called overfitting.
2) You want to choose 'hyperparameters'. In gradient descent, for example, the learning rate is a hyperparameter. You would do cross-validation for different settings of the learning rate, e.g. 0.01, 0.02, ..., and choose the one that gave you the lowest error. If you do this, the 'test' error you got won't be representative of your generalization error as in #1 - why? Because you were essentially 'training' your learning rate based on different train/test splits of your data (so in a way it was all training data). And thus another set of data would have to be used outside of that to determine the generalization error.
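A short caret sketch of scenario 2 (my own illustration, not from the answer; it assumes a data frame dat with outcome y, and glmnet is just an example model whose hyperparameters are tuned by CV):

library(caret)
set.seed(1)

in_train  <- createDataPartition(dat$y, p = 0.8, list = FALSE)
train_set <- dat[in_train, ]
test_set  <- dat[-in_train, ]                       # held out, never used for tuning

ctrl <- trainControl(method = "cv", number = 10)
fit  <- train(y ~ ., data = train_set, method = "glmnet",
              tuneLength = 10, trControl = ctrl)    # CV picks the hyperparameters

postResample(predict(fit, test_set), test_set$y)    # generalization estimate from untouched data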
1) You want to approximate your model's generalization error (how well it will do on inputs it hasn't seen before). Cross-validation | When does it makes sense to use Cross Validation?
Here are 2 different scenarios where cross-validation can be used.
1) You want to approximate your model's generalization error (how well it will do on inputs it hasn't seen before). Cross-validation can tell you that because it trains on one set of data, and tests on the other set of data. The error on the test set is representative of the generalization error.
This is important because it would be trivial to get 100% on your training set - just create an n-way if-statement for your n training samples. That is called overfitting.
2) You want to choose 'hyperparameters'. In gradient descent, for example, the learning rate is a hyperparameter. You would do cross-validation for different settings of the learning rate, i.e. 0.01, 0.02, ... and choose the one that gave you the lowest error. If you do happen to do this, the 'test' error you got won't be representative of your generalization error as in #1 - why? Because you were essentially 'training' your learning rate based on different train/test splits of your data (so in a way it was all training data). And thus another set of data would have to be used outside of that to determine the generalization error. | When does it makes sense to use Cross Validation?
Here are 2 different scenarios where cross-validation can be used.
1) You want to approximate your model's generalization error (how well it will do on inputs it hasn't seen before). Cross-validation |
52,613 | When does it make sense to use Cross Validation? | In addition to @justin_credible's points:
When does it make sense to use Cross Validation?
Whenever you cannot afford the independent test set you'd really want to have.
Iterated $k$-fold CV or out-of-bootstrap: in order to measure the stability of the predictions with respect to slight changes in the training set.
(Note that this may calculate more surrogate models than leave-one-out CV.)
52,614 | eliminating outliers in MARS regression | My suggestion is independent of the software used.
We need to clarify whether the outliers are outliers in Y, the dependent variable, or outliers in the predictor, X. Outliers in the predictor, X, are easily handled with the vast number of transformations available that would reshape the PDF (probability density function) of X.
While I agree with Eric Farng that deleting outliers in Y is not recommended, I disagree with him that they should be deleted only after "careful consideration." In my opinion, one should never delete outliers since they contain important and useful information, that is, unless one can determine that these values are somehow "bad," illegal or fraudulent, etc. The alternatives to outlier deletion in Y are to leverage modeling methods that are robust to outliers.
Why am I opposed to a priori outlier deletions in Y? Let me use an example: after "careful consideration" (Eric Farng's wording, however one chooses to define this) a first set of outliers is deleted. Does this mean that you are done with deleting outliers? Probably not, since a second analysis would almost certainly reveal a new set of outliers relative to the new mean and standard deviation. What does one do with this new information? And how many passes of the data are required to completely scrub it of outliers? Clearly, this is a potentially endless process of outlier deletion that makes little or no sense.
Most importantly and even before one gets into the almost philosophical question of whether or not to delete outliers, it should be noted that MARS is one of the robust, non-parametric alternatives that exploits nonlinear, quantile functions of the relationship between X and Y. From a purely applied and practical point of view this means that MARS is highly robust, almost immune, to the presence of outliers in Y: by definition, deleting outliers in Y is unnecessary, even moot, when leveraging MARS.
52,615 | eliminating outliers in MARS regression | From the R package documentation and from the original MARS paper, it looks like rsq and grsq are used for model selection within the package, and it looks like your code is removing outliers until the fit of the model is maximized. This is usually not recommended. There are statistical tools to help identify potential outliers, and outliers are removed only after careful consideration. Points that are mistakes (a value of -1 for count data) or that do not represent the population (a child's data point in a population of adults) can be removed. Other points may simply be unlucky outliers; they may also just happen to be a single point rather than several. So these are removed only after careful inspection.
In the earth package, residuals and leverage are easily available. As you say, not all points with high leverage are necessarily a problem. Cook's Distance tries to solve that issue by actually removing the point from the data and checking how the predicted values change. Unfortunately, it may be difficult to find a library that supports the earth package for computing this and other outlier metrics. However, the package does support the plot(model) command, which will also highlight some potential outliers.
For the log-log form, I believe you can skip the categorical variables if, for a variable with $n$ categories, they are represented as $n-1$ binary/dummy variables. For example, in a simple linear regression, transforming the categorical dummy variable [0, 1] into [0, 0.5] will only cause its coefficient to double. Here is an example using MARS regression. a is the dependent variable; b, d, e are the independent variables, where d and e are categorical variables. In the second example, d and e are changed from (0 and 1) to (0 and 0.5) and (-1, 1) respectively.
library(earth)

a <- rnorm(100)                       # dependent variable
b <- rexp(100) + a                    # continuous predictor
c <- rgamma(100, shape=1) + 2*a       # helper used to build the dummies
d <- sapply(sapply(round(b + c), function(x) min(1, x)), function(y) max(0, y))  # 0/1 dummy
e <- sapply(sapply(round(a - c), function(x) min(1, x)), function(y) max(0, y))  # 0/1 dummy
fit <- earth(a ~ b + d + e)

e2 <- e * 2
e2 <- e2 - 1                          # recode e from (0, 1) to (-1, 1)
d2 <- d * 0.5                         # recode d from (0, 1) to (0, 0.5)
fit2 <- earth(a ~ b + d2 + e2)

summary(fit)
summary(fit2)
sum(predict(fit) - predict(fit2))     # predictions agree, so this is (numerically) zero
You can see from the output that the coefficients for d, e and the intercept have changed, but none of the other coefficients have. In addition, the predictions are the same between the two models.
For reference:
$$
\text{rsq} = R^2\\
\text{grsq} = 1 - \frac{\text{gcv}}{\text{gcv.null}}\\
\text{gcv} = \frac{1}{N}\sum_{i=1}^{N}\frac{\big[y_i-\hat{f}_M(x_i)\big]^2}{\Big[1 - \frac{C(M)}{N}\Big]^2}\\
C(M) = \operatorname{trace}\big(B(B^TB)^{-1}B^T\big) + 1
$$
$B$ is the "data" matrix. $\text{gcv.null}$ is the Generalized Cross Validation of the intercept-only model.
52,616 | eliminating outliers in MARS regression | To elaborate slightly on Eric Farng's comment that removing outliers until the fit of the model is maximized is not recommended:
The fundamental problem with tweaking the data until you get a good GRSq is that although you will build a model that gives a good fit to your selected data, your model will not give good predictions for future data, because you aren't modelling the underlying distribution of the data. It's like a model that predicts stock market prices very accurately for historical data but is useless for predicting future stock prices.
52,617 | bootstrapping vs. "repeated cross validation" | Q. 1. Is this bootstrapping without replacement or "repeated 2-fold cross validation"?
Q. 2. If the answer is both, what exactly is the difference between bootstrapping without replacement and "repeated cross validation"?
It is neither. But the differences between sampling methods are subtle.
It is not classic bootstrap because your $B$ training samples are not drawn with replacement. (For the record: classic bootstrap also imposes the condition that a sample consists of exactly $N$ draws with replacement.)
It is not "bootstrap without replacement" because you hold the size of the training sets fixed at $N/2$. (In classic bootstrap the number of unique observations in a training sample is random with expected value $0.632N$ for sufficiently large $N$. More on that here.)
It is not repeated 2-fold cross-validation (CV) because CV imposes a constraint on the validation sets, $\{ V_i \}_{i=1}^B$, namely that $V_i \cap V_{i+1} = \emptyset$ for odd $i$. (See also this answer.)
So what is it then?
The Wikipedia reference you provided is correct. It is rightly called "repeated random subsampling (validation)", sometimes also "Monte Carlo subsampling" (with fixed subset size). The difference from the related repeated CV method is simply that it relaxes the disjointness condition on the validation sets. In repeated random subsampling each set drawn is independent of the others.
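To make the distinctions concrete, here is a small illustrative sketch in R (my own construction; N is arbitrary):

N   <- 100
idx <- seq_len(N)

boot_train <- sample(idx, size = N, replace = TRUE)   # classic bootstrap: N draws with replacement
sub_train  <- sample(idx, size = N / 2)               # repeated random subsampling: one random half
sub_val    <- setdiff(idx, sub_train)

fold <- sample(rep(1:2, length.out = N))              # one round of 2-fold CV:
cv_val_1 <- idx[fold == 1]                            # the two validation folds...
cv_val_2 <- idx[fold == 2]                            # ...are disjoint by construction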
Q. 2. If the answer is both, what exactly is the difference between bootstrapping without replacement and "repeat | bootstrapping vs. "repeated cross validation"
Q. 1. Is this bootstrapping without replacement or "repeated 2-fold cross validation"?
Q. 2. If the answer is both, what exactly is the difference between bootstrapping without replacement and "repeated cross validation"?
It is neither. But the differences between sampling methods are subtle.
It is not classic bootstrap because your $B$ training samples are not drawn with replacement. (For the record: classic bootstrap also imposes the condition that a sample consists of exactly $N$ draws with replacement.)
It is not "bootstrap without replacement" because you hold the size of the training sets fixed at $N/2$. (In classic bootstrap the number of unique observations in a training sample is random with expected value $0.632N$ for sufficiently large $N$. More on that here.)
It is not repeated 2-fold cross-validation (CV) because CV imposes a constraint on the validation sets, $\{ V_i \}_{i=1}^B$, namely that $V_i \cap V_{i+1} = \emptyset$ for odd $i$. (See also this answer.)
So what is it then?
The wikipedia reference you provided is correct. It is rightly called "repeated random subsampling (validation)". Sometimes also, "Monte Carlo subsampling" (with fixed subset size). The difference with the related repeated CV method, is simply that it relaxes the disjointness condition on the validation sets. In repeated random subsampling each set drawn is independent of the others. | bootstrapping vs. "repeated cross validation"
Q. 1. Is this bootstrapping without replacement or "repeated 2-fold cross validation"?
Q. 2. If the answer is both, what exactly is the difference between bootstrapping without replacement and "repeat |
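To make the distinctions concrete, here is a minimal base-R sketch (made-up sizes, not your data) of how the index sets are drawn under the schemes discussed above:
set.seed(1)
N <- 10; B <- 4
# classic bootstrap: exactly N draws *with* replacement, so duplicates are expected
boot <- replicate(B, sort(sample(N, N, replace = TRUE)), simplify = FALSE)
# repeated random subsampling: each training set is an independent draw of N/2 *without* replacement
subsamp <- replicate(B, sort(sample(N, N / 2)), simplify = FALSE)
# repeated 2-fold CV: within one repeat, the two validation halves are disjoint
cv2 <- replicate(B / 2, {
  perm <- sample(N)
  list(fold1 = sort(perm[1:(N / 2)]), fold2 = sort(perm[(N / 2 + 1):N]))
}, simplify = FALSE)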
52,618 | bootstrapping vs. "repeated cross validation" | Bootstrapping always means that from your set of n samples you draw n samples with replacement. This means you will almost certainly have duplicates in your data set.
In n-fold cross-validation you cleanly separate your data into n approximately equally large subsets, which do not overlap. What you are doing is indeed "repeated 2-fold cross-validation".
I don't think there is a consensus yet on which methodology is better, and it probably depends on your application. However, I would suggest using 10-fold cross-validation instead of two-fold. If you are still in a range where your model fit improves with additional training samples, then using just half the data will give estimates that are too pessimistic.
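For what it's worth, here is a minimal base-R sketch of one round of 10-fold cross-validation (made-up data and a placeholder model, just to show the fold bookkeeping):
set.seed(1)
n <- 150
x <- rnorm(n); y <- 1 + 2 * x + rnorm(n)      # made-up data
folds <- sample(rep(1:10, length.out = n))    # assignment to 10 disjoint folds
cv_err <- sapply(1:10, function(k) {
  fit <- lm(y ~ x, subset = folds != k)       # train on the other 9 folds
  pred <- predict(fit, newdata = data.frame(x = x[folds == k]))
  mean((y[folds == k] - pred)^2)              # error on the held-out fold
})
mean(cv_err)                                  # cross-validated estimate of prediction error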
52,619 | What is the name of this graph? | I didn't know before I saw your question, but I remember seeing such a graph in the d3.js gallery. It is called a chord diagram.
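If you want to draw one in R, the circlize package is one option; a minimal sketch with a made-up adjacency matrix:
library(circlize)
set.seed(1)
m <- matrix(sample(1:20, 16), nrow = 4,
            dimnames = list(LETTERS[1:4], LETTERS[1:4]))  # made-up flows between four groups
chordDiagram(m)  # each cell becomes a ribbon between its row sector and its column sector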
52,620 | What is the name of this graph? | I saw this graph in some biological papers. It is called Circos. Originally, it was proposed in this article:
Krzywinski, M. et al. Circos: an Information Aesthetic for Comparative Genomics. Genome Res (2009) 19:1639-1645.
52,621 | Distribution of the product of a gamma random variable and a beta random variable | Let random variable $X \sim \text{Gamma}(a,b)$ with pdf $f(x)$:
Let $Y \sim \text{Beta}(c, d)$ with pdf $g(y)$:
We seek the pdf of the product $Z = X*Y$, say $h(z)$, which is given by:
where I am using mathStatica's TransformProduct function to automate the nitty-gritties, and where Hypergeometric1F1 denotes the Kummer confluent hypergeometric function. All done. Note that this does not have the functional form of a Gamma rv.
PDF Plot
The pdf can take a range of possible shapes. Here is a plot to illustrate that it is plainly not Gamma:
PDF plot with $a = 3$, $b = 1$, $c = 0.7$ and $d = 0.1$:
Monte Carlo check
Here is a quick Monte Carlo check of the exact symbolic solution derived above, just to make sure no errors have crept in, for the same parameter values:
The blue line is the empirical Monte Carlo pdf, and the red dashed line is the theoretical pdf $h(z)$ above. Looks fine :)
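For readers without mathStatica, a rough base-R version of the same Monte Carlo idea (this assumes the shape–scale parameterization for the Gamma, since the pdfs themselves are not reproduced above; it only overlays a moment-matched Gamma for comparison, not the exact $h(z)$):
set.seed(1)
a <- 3; b <- 1; cc <- 0.7; d <- 0.1   # cc stands for the Beta parameter c (avoiding R's built-in c)
z <- rgamma(5e5, shape = a, scale = b) * rbeta(5e5, shape1 = cc, shape2 = d)
plot(density(z), main = "Empirical pdf of Z = X*Y")
# overlay a moment-matched Gamma density to see how far Z is from a Gamma shape
m <- mean(z); v <- var(z)
curve(dgamma(x, shape = m^2 / v, scale = v / m), add = TRUE, lty = 2, col = "red")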
52,622 | Distribution of the product of a gamma random variable and a beta random variable | Let $X$ and $Y$ be absolutely continuous, independent, and non-negative random variables such that $X$ has bounded support. Then any two of the following 3 conditions imply the third:
(i) $X\sim{}\text{Beta}(a,b)$
(ii) $Y\sim{}\text{Gamma}(a+b,c)$
(iii) $XY\sim{}\text{Gamma}(a,c)$
see Yeo and Milne (1991) at https://www.sciencedirect.com/science/article/pii/016771529190149L
This seems to imply that this is the most general such result.
Yeo, G.F. & Milne, R.K. (1991), "On characterizations of beta and gamma distributions", Statistics & Probability Letters, Vol. 11, No. 3 (March), pp. 239–242.
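A quick simulation check of the (i) + (ii) ⇒ (iii) direction in R (arbitrary parameter values; $c$ is treated as a rate parameter here, but the same holds with a scale parameter as long as you are consistent):
set.seed(1)
a <- 2; b <- 3; cc <- 1.5; n <- 1e5   # cc plays the role of c
x <- rbeta(n, a, b)                        # (i)  X ~ Beta(a, b)
y <- rgamma(n, shape = a + b, rate = cc)   # (ii) Y ~ Gamma(a + b, c)
qqplot(qgamma(ppoints(n), shape = a, rate = cc), x * y,
       xlab = "Gamma(a, c) quantiles", ylab = "Quantiles of XY")   # (iii) XY ~ Gamma(a, c)
abline(0, 1, col = "red")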
52,623 | Plotting distributions of variables across time | Something like these quartile bands may work. You can add more quantiles if more detail is needed, but sometimes less is more. The quartiles will be enough to get a sense of the spread and the skewness.
The downside is that the bands don't overlay well, so you lose direct comparison. I added a reference band around 0 to help compare the two frames.
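A minimal base-R sketch of such quartile bands (made-up data with one column per time point):
set.seed(1)
x <- sapply(1:100, function(t) rnorm(200, mean = sin(t / 10), sd = 1 + t / 100))  # made-up data
q <- apply(x, 2, quantile, probs = c(0.25, 0.50, 0.75))
tt <- seq_len(ncol(x))
plot(tt, q[2, ], type = "n", ylim = range(q), xlab = "time", ylab = "x")
polygon(c(tt, rev(tt)), c(q[1, ], rev(q[3, ])), col = "grey80", border = NA)  # interquartile band
lines(tt, q[2, ], lwd = 2)   # median
abline(h = 0, lty = 2)       # reference line at 0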
52,624 | Plotting distributions of variables across time | Let's take $x$ first. Use the quantiles to construct a smooth distribution $x_j(\cdot)$ at each timestep $j$. Pick a 1D grid of values $a + bi$. Now plot the greyscale image $x_j(a+bi)$, where $i$ is the row coordinate and $j$ is the column coordinate, and the $x$-values are normalized to 0-255.
Now do the same for $y$, and put the plots one above the other. If you'd prefer to combine them, then plot $x$ in the red channel and $y$ in the blue. If you do that though, do some reading on the perception of color first. Rainbow Color Map (Still) Considered Harmful is a nice summary of the problems you can encounter.
Edit: Actually, if you do the two plots independently, plot them in greens rather than greys. It's much easier to pick out a very light green than it is a very light grey.
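A rough base-R sketch of that idea, using kernel densities on a common grid (made-up data; the quantile-based smoothing described above would slot in where density() is used):
set.seed(1)
x <- sapply(1:100, function(t) rnorm(200, mean = sin(t / 10), sd = 1 + t / 100))  # made-up data
grid <- seq(min(x), max(x), length.out = 256)
dens <- apply(x, 2, function(col) density(col, from = min(grid), to = max(grid), n = 256)$y)
# columns of x = timesteps (j), rows of dens = grid values (i); darker = higher density
image(x = seq_len(ncol(x)), y = grid, z = t(dens),
      col = grey(seq(1, 0, length.out = 64)), xlab = "time", ylab = "x")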
52,625 | How to build a predictive model with a billion of sparse features? | An alternative to dimensionality reduction is to use the hashing trick to train a classifier on the entire feature set without reduction beforehand.* The Vowpal Wabbit pwoject--er, project--is an implementation of various learning algorithms using the hashing trick to speed up computation:
VW is the essence of speed in machine learning, able to learn from terafeature datasets with ease. Via parallel learning, it can exceed the throughput of any single machine network interface when doing linear learning, a first amongst learning algorithms.
I don't know if VW will end up being right for you (if you have billions of features, a lot of your choices may end up being dictated by software engineering considerations), but hopefully it's a pointer in the right direction!
* Well, the hashing trick is technically a kind of dimensionality reduction, but only in a very silly sense.
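To illustrate the idea itself (a toy sketch only, not how VW implements it), feature hashing maps an arbitrary number of named features into a fixed number of buckets, so the model only ever sees a D-dimensional vector:
hash_features <- function(features, D = 2^10) {
  x <- numeric(D)
  for (name in names(features)) {
    codes <- utf8ToInt(name)
    h <- sum(codes * seq_along(codes)) %% D + 1   # crude string hash, for illustration only
    x[h] <- x[h] + features[[name]]               # collisions simply add up
  }
  x
}
x <- hash_features(list(user_123 = 1, ad_987 = 1, "user_123^ad_987" = 1))  # hypothetical features
length(x)   # always D, no matter how many raw features exist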
52,626 | How to build a predictive model with a billion of sparse features? | Traditionally, principal components analysis (PCA) is used for dimensionality reduction (in a mathematical sense). However, if you care about latent constructs (factors) that your features (indicators or items, in factor analysis and latent variable modeling terminology) represent and measure, then exploratory factor analysis (EFA) and/or confirmatory factor analysis (CFA) are appropriate. For more on this, check my answer here on Cross Validated site: https://stats.stackexchange.com/a/96160/31372.
Modeling a phenomenon in terms of latent constructs (factors) has an additional benefit, since this approach allows further reduction in dimensionality and modeling at a higher level of abstraction (but you will still be able to get information on features/indicators, if this is needed).
In regard to the biased samples, as you called them, I think that you're talking about sparse data. If that is the case, read my answer here on Data Science site: https://datascience.stackexchange.com/a/918/2452.
UPDATE: Several people expressed their concerns about my suggestion of using PCA for a data set with such a very large (10^9) number of features. Despite not having direct experience of working with such types of data sets, I stand by my answer and provide a small subset of existing tools and research ("tip of the iceberg") supporting my suggestion (in particular, focusing on applying PCA in the cloud, including clusters, which should address the volume issues, as well as on parallelizing PCA, including using GPUs, which should address the timing issues):
mlpack - scalable ML library (C++; provides R bindings): http://www.mlpack.org/about.html (supports PCA functionality: http://www.mlpack.org/doxygen.php?doc=classmlpack_1_1pca_1_1PCA.html)
Distributed and scalable PCA in the cloud: http://www.reef-project.org/wp-content/uploads/2014/01/2013-NIPS-BigLearn-DistributedAndScalablePCAinTheCloud.pdf
Robust and scalable algorithms for big data analytics: http://www.dtc.umn.edu/resources/bd_giannakis.pdf
PCA for large data sets with parallel data summarization: http://www2.cs.uh.edu/~ordonez/w-2014-DAPD-udfsvd.pdf
Parallelization of PCA on scalable multi-core architecture: http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=5423039
Parallel GPU implementation of iterative PCA algorithms: http://arxiv.org/abs/0811.1081
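As one concrete starting point in R (assuming the Matrix and irlba packages are an option for your setup), a truncated SVD keeps the data sparse and only computes the leading components:
library(Matrix)
library(irlba)
set.seed(1)
X <- rsparsematrix(nrow = 1000, ncol = 1e5, density = 1e-4)  # toy matrix: very wide, very sparse
s <- irlba(X, nv = 20)     # partial SVD: only the top 20 singular triplets are computed
scores <- X %*% s$v        # 1000 x 20 reduced representation for downstream modelling
dim(scores)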
52,627 | Balanced datasets in Naive Bayes | There are two types of classification model: generative models and discriminative models.
Naive Bayes is a generative model: to train Naive Bayes, your training data should be generated by the same true process that will generate future data. Balancing the data is not part of that true process, so doing it distorts the fitted model (in particular, the class prior).
On the other hand, if you are training a discriminative model (e.g. logistic regression), then in some cases you might want to balance your data. One common reason is that the minority class is of more importance and, by balancing, you get better performance for that class. Manipulating the data is a dangerous practice and you should be very sure why you are doing it.
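To make the point about the prior concrete, here is a small sketch with hypothetical numbers showing how artificially balancing the classes rewrites the prior that Naive Bayes multiplies into its posterior:
prior_true     <- c(pos = 0.01, neg = 0.99)   # hypothetical true class prevalence
prior_balanced <- c(pos = 0.50, neg = 0.50)   # what a balanced training set implies
lik <- c(pos = 0.80, neg = 0.10)              # hypothetical P(feature present | class)
posterior <- function(prior) { p <- prior * lik; p / sum(p) }
posterior(prior_true)       # ~ 7% positive: the rare class stays unlikely
posterior(prior_balanced)   # ~ 89% positive: balancing has silently changed the model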
52,628 | Balanced datasets in Naive Bayes | Getting balanced data sets is not the only option you've got for the Naive Bayes classifier.
The paper Tackling the Poor Assumptions of Naive Bayes Text Classifiers contains a discussion of this point plus some ways to overcome that difficulty. The paper is focused on text classification though.
Concretely, it shows with a simple and general example (see Table 1) that a skewed class distribution will produce a bias in the estimates of the probabilities. So maybe a more interesting question is whether one can correct for that bias without having to throw away the prior information.
How well you can compensate for this bias depends on the concrete problem and your data at hand. For example, if you only have two classes and very little data from one of them, there is not really much you can do about it. Maybe in that case it is more advisable to perform some novelty detection.
If you have several classes, on the other hand, you could apply (as suggested in the referred paper) the "complement naive Bayes", which compensates for that bias with a clever yet simple trick: instead of estimating $P(F_{i}|C)$ from the (possibly scarce) examples of class $C$, it estimates the feature probabilities from the complement of $C$ (all other classes), which results in a more balanced estimate.
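A rough sketch of the complement estimate itself (Laplace smoothing; counts is a hypothetical class-by-feature count matrix, not data from the paper):
set.seed(1)
counts <- matrix(rpois(15, lambda = 5), nrow = 3,
                 dimnames = list(paste0("class", 1:3), paste0("f", 1:5)))
alpha <- 1   # Laplace smoothing
# standard estimate: each class uses only its own (possibly scarce) counts
p_nb <- (counts + alpha) / (rowSums(counts) + alpha * ncol(counts))
# complement estimate: each class uses the pooled counts of all *other* classes
comp <- matrix(colSums(counts), nrow = nrow(counts), ncol = ncol(counts), byrow = TRUE) - counts
p_cnb <- (comp + alpha) / (rowSums(comp) + alpha * ncol(comp))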
52,629 | How to implement model in R? | Here is one way to fit the model that you describe.
# sample series
x <- AirPassengers
# to illustrate a more general case,
# take a subsample that does not start in the first season
# and ends in the last season
x <- window(x, start=c(1949,2), end=c(1959,4))
Indicator variables for the seasonal intercepts can be created in several ways, for example:
# monthly intercepts
S <- frequency(x)
monthly.means <- do.call("rbind",
replicate(ceiling(length(x)/S), diag(S), simplify = FALSE))
Some arrangements for series that do not span complete years (this has no effect if the series starts in the first season and ends in the last season):
monthly.means <- ts(monthly.means, frequency = S, start = c(start(x)[1], 1))
monthly.means <- window(monthly.means, start = start(x), end = end(x))
Regardless of the month the first observation belongs to, the first column is related to January, the second to February and so on.
# column names
if (S == 12) {
colnames(monthly.means) <- month.abb
} else
colnames(monthly.means) <- paste("season", 1L:S)
First rows of the monthly.means object:
monthly.means[1:15,]
# Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec
# [1,] 0 1 0 0 0 0 0 0 0 0 0 0
# [2,] 0 0 1 0 0 0 0 0 0 0 0 0
# [3,] 0 0 0 1 0 0 0 0 0 0 0 0
# [4,] 0 0 0 0 1 0 0 0 0 0 0 0
# [5,] 0 0 0 0 0 1 0 0 0 0 0 0
# [6,] 0 0 0 0 0 0 1 0 0 0 0 0
# [7,] 0 0 0 0 0 0 0 1 0 0 0 0
# [8,] 0 0 0 0 0 0 0 0 1 0 0 0
# [9,] 0 0 0 0 0 0 0 0 0 1 0 0
# [10,] 0 0 0 0 0 0 0 0 0 0 1 0
# [11,] 0 0 0 0 0 0 0 0 0 0 0 1
# [12,] 1 0 0 0 0 0 0 0 0 0 0 0
# [13,] 0 1 0 0 0 0 0 0 0 0 0 0
# [14,] 0 0 1 0 0 0 0 0 0 0 0 0
# [15,] 0 0 0 1 0 0 0 0 0 0 0 0
For the monthly trends we can reuse `monthly.means':
monthly.trends <- monthly.means * seq_along(x) / S
round(monthly.trends[1:15,], 2)
# Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec
# [1,] 0 0.08 0.00 0.00 0.00 0.00 0.0 0.00 0.00 0.00 0.00 0.00
# [2,] 0 0.00 0.17 0.00 0.00 0.00 0.0 0.00 0.00 0.00 0.00 0.00
# [3,] 0 0.00 0.00 0.25 0.00 0.00 0.0 0.00 0.00 0.00 0.00 0.00
# [4,] 0 0.00 0.00 0.00 0.33 0.00 0.0 0.00 0.00 0.00 0.00 0.00
# [5,] 0 0.00 0.00 0.00 0.00 0.42 0.0 0.00 0.00 0.00 0.00 0.00
# [6,] 0 0.00 0.00 0.00 0.00 0.00 0.5 0.00 0.00 0.00 0.00 0.00
# [7,] 0 0.00 0.00 0.00 0.00 0.00 0.0 0.58 0.00 0.00 0.00 0.00
# [8,] 0 0.00 0.00 0.00 0.00 0.00 0.0 0.00 0.67 0.00 0.00 0.00
# [9,] 0 0.00 0.00 0.00 0.00 0.00 0.0 0.00 0.00 0.75 0.00 0.00
# [10,] 0 0.00 0.00 0.00 0.00 0.00 0.0 0.00 0.00 0.00 0.83 0.00
# [11,] 0 0.00 0.00 0.00 0.00 0.00 0.0 0.00 0.00 0.00 0.00 0.92
# [12,] 1 0.00 0.00 0.00 0.00 0.00 0.0 0.00 0.00 0.00 0.00 0.00
# [13,] 0 1.08 0.00 0.00 0.00 0.00 0.0 0.00 0.00 0.00 0.00 0.00
# [14,] 0 0.00 1.17 0.00 0.00 0.00 0.0 0.00 0.00 0.00 0.00 0.00
# [15,] 0 0.00 0.00 1.25 0.00 0.00 0.0 0.00 0.00 0.00 0.00 0.00
Some arbitrary regressors:
set.seed(123)
xreg1 <- runif(length(x), 100, 200)
xreg2 <- rnorm(length(x), mean = mean(x))
Edited
As I did in my first edit, the model could now be fitted using the function lm.
lm(x ~ 0 + monthly.means + monthly.trends + xreg1 + xreg2)
But I overlooked the AR(1) structure for the error term that is mentioned in the question. As @RobHyndman says in the comment below, we can use the function arima specifying an AR(1) process for the error term, order = c(1,0,0). An intercept should not be included in order to avoid exact multicollinearity with the dummies for the seasonal means. (The example is arbitrary, the output is printed just to show the coefficients that are estimated.)
xreg <- cbind(monthly.means, monthly.trends, xreg1, xreg2)
fit <- arima(x, order = c(1,0,0), xreg = xreg, include.mean = FALSE)
round(coef(fit), 2)
# ar1 monthly.means.Jan monthly.means.Feb monthly.means.Mar
# 0.84 69.87 82.45 93.10
# monthly.means.Apr monthly.means.May monthly.means.Jun monthly.means.Jul
# 85.17 76.39 77.48 83.31
# monthly.means.Aug monthly.means.Sep monthly.means.Oct monthly.means.Nov
# 81.01 81.23 68.06 57.34
# monthly.means.Dec monthly.trends.Jan monthly.trends.Feb monthly.trends.Mar
# 73.20 27.07 23.36 27.84
# monthly.trends.Apr monthly.trends.May monthly.trends.Jun monthly.trends.Jul
# 27.56 29.34 35.98 40.47
# monthly.trends.Aug monthly.trends.Sep monthly.trends.Oct monthly.trends.Nov
# 40.30 32.07 27.85 24.02
# monthly.trends.Dec xreg1 xreg2
# 25.60 0.00 0.08
We can do a simple check. In a model containing only the seasonal intercepts, the estimates of the coefficients match the values of the sample means.
fit2 <- arima(x, order = c(0,0,0), xreg = xreg[,1:12], include.mean = FALSE)
coef.seasonal.intercepts <- coef(fit2)
sample.means <- lapply(split(x, cycle(x)), mean)
all.equal(coef.seasonal.intercepts,
as.vector(unlist(sample.means)), check.attributes = FALSE)
# TRUE
Edited (Answer to a comment posted by the OP.)
If the disturbance term follows an AR(1) process, the Ordinary Least Squares estimator is unbiased but not efficient, i.e. on average it gives the true value, but the standard errors of the parameter estimates are higher than in the classical setting of independent errors (in other words, the estimates are not efficient).
As you say, extending the model with the AR(1) error term via arima will not change the estimates much (standard OLS is still unbiased), but their standard errors will be smaller due to the gain in efficiency.
When the disturbance term is autocorrelated and the AR term is omitted, the OLS estimator is less efficient and, more importantly, the usual OLS standard errors are no longer reliable, so the t-tests for the significance of the regressors can be distorted (in the simulation below they reject a true null more often than the nominal level). Using arima to specify the AR error term protects against this issue.
The code below is a small exercise to check these ideas: 10,000 series are generated from a model with intercept and AR(1) errors. If xregcoef is set to a value other than zero, then an external variable is added to the data. A model that includes the intercept and the external regressor is fitted by means of arima with order=c(1,0,0) (stored in fit1) and by means of lm (stored in fit2).
set.seed(123)
xreg <- runif(200, 2, 6)
xregcoef <- 0
res <- matrix(nrow = 10000, ncol = 6)
colnames(res) <- c("coef 1", "s.e. 1", "t-stat. 1", "coef 2", "s.e. 2", "t-stat. 2")
for (i in seq.int(nrow(res)))
{
x <- 2 + arima.sim(n=200, model=list(ar=0.7)) + xregcoef * xreg
fit1 <- arima(x, order=c(1,0,0), xreg=xreg, include.mean=TRUE)
res[i,1] <- coef(fit1)["xreg"]
res[i,2] <- sqrt(fit1$var.coef["xreg","xreg"])
res[i,3] <- res[i,1]/res[i,2]
fit2 <- summary(lm(x ~ 1 + xreg))
res[i,4:6] <- coef(fit2)["xreg", c("Estimate", "Std. Error", "t value")]
}
Setting for example xregcoef=3, we can see that the estimates from both regressions are very close to the true value (unbiased estimates), but the standard errors are slightly higher for lm, where the AR structure is omitted.
# results for xregcoef=3
t(apply(res, 2, summary))
# Min. 1st Qu. Median Mean 3rd Qu. Max.
# coef 1 2.81600 2.96300 3.00100 3.00000 3.03800 3.20100
# s.e. 1 0.04313 0.05286 0.05491 0.05492 0.05693 0.06677
# coef 2 2.65800 2.93300 3.00100 2.99900 3.06400 3.37000
# s.e. 2 0.05928 0.08393 0.08879 0.08905 0.09400 0.12280
Let's now consider xregcoef=0, that is, the regressor is not part of the data generating process but we fit a model including this regressor. On average the coefficient is estimated as zero in both cases; however, because the standard errors reported by lm understate the actual sampling variability of its estimates (compare "s.e. 2" with the spread of "coef 2"), the null of the t-test is rejected slightly more often than it should be, given the 5% significance level chosen below.
# results for xregcoef=0
t(apply(res, 2, summary))
# Min. 1st Qu. Median Mean 3rd Qu. Max.
# coef 1 -0.18440 -0.03698 0.0007062 0.0003874 0.03763 0.20080
# s.e. 1 0.04313 0.05286 0.0549100 0.0549200 0.05693 0.06677
# t-stat. 1 -3.68100 -0.67410 0.0129200 0.0072410 0.68500 3.53400
# coef 2 -0.34200 -0.06694 0.0012510 -0.0006191 0.06422 0.36970
# s.e. 2 0.05928 0.08393 0.0887900 0.0890500 0.09400 0.12280
# t-stat. 2 -3.96600 -0.75560 0.0138800 -0.0070140 0.72210 4.44500
#
# rejections of the null xregcoef=0 at the 5% significance level
sum(abs(res[,3]) > 1.96) / nrow(res)
# [1] 0.0516
sum(abs(res[,6]) > 1.96) / nrow(res)
# [1] 0.0749
52,630 | Gower distance with R functions; "gower.dist" and "daisy" | They in fact do give the same results. I am not sure how you are comparing them but here is an example:
# Create example data
set.seed(123)
# create nominal variable
nom <- factor(rep(letters[1:3], each=10))
# create numeric variables
vars <- as.matrix(replicate(17, rnorm(30)))
df <- data.frame(nom, vars)
library(cluster)
daisy.mat <- as.matrix(daisy(df, metric="gower"))
library(StatMatch)
gower.mat <- gower.dist(df)
# you can look directly to see the numbers are the same
head(daisy.mat, 3)
head(gower.mat, 3)
# now identical will return FALSE, why?
identical(daisy.mat, gower.mat)
> identical(daisy.mat, gower.mat)
[1] FALSE
# This is because there are extremely small differences
# in the numbers returned by the different functions
max(abs(daisy.mat - gower.mat))
> max(abs(daisy.mat - gower.mat))
[1] 5.551115e-17
# Using all.equal has a higher tolerance threshold
all.equal(daisy.mat, gower.mat, check.attributes = F)
> all.equal(daisy.mat, gower.mat, check.attributes = F)
[1] TRUE
Now that I understand you are adding an extra component to the daisy function there is still a solution. It lies in the documentation for gower.dist. The key part is in the first part of the documentation, namely that columns of mode logical will be considered as binary asymmetric variables. So you want to make sure your data structure is appropriate.
set.seed(123)
# create nominal variable
nom <- factor(rep(letters[1:3], each=10))
# create binary variables
bin <- as.matrix(replicate(5, rep(sample(c(0,1), 30, replace=T))))
# create numeric variables
vars <- as.matrix(replicate(9, rnorm(30)))
df <- data.frame(nom, bin, vars)
# You can see that the columns are not 'logical' types
# We need to change this
str(df)
> str(df)
'data.frame': 30 obs. of 15 variables:
$ nom : Factor w/ 3 levels "a","b","c": 1 1 1 1 1 1 1 1 1 1 ...
$ X1 : num 0 1 0 1 1 0 1 1 1 0 ...
$ X2 : num 1 1 1 1 0 0 1 0 0 0 ...
$ X3 : num 1 0 0 0 1 0 1 1 1 0 ...
$ X4 : num 0 1 0 1 0 0 1 0 0 1 ...
$ X5 : num 1 0 0 0 0 1 0 0 0 1 ...
$ X1.1: num 1.026 -0.285 -1.221 0.181 -0.139 ...
$ X2.1: num -0.045 -0.785 -1.668 -0.38 0.919 ...
$ X3.1: num 1.13 -1.46 0.74 1.91 -1.44 ...
$ X4.1: num 0.298 0.637 -0.484 0.517 0.369 ...
$ X5.1: num 1.997 0.601 -1.251 -0.611 -1.185 ...
$ X6 : num 0.0597 -0.7046 -0.7172 0.8847 -1.0156 ...
$ X7 : num -0.0886 1.0808 0.6308 -0.1136 -1.5329 ...
$ X8 : num 0.134 0.221 1.641 -0.219 0.168 ...
$ X9 : num 0.704 -0.106 -1.259 1.684 0.911 ...
# make columns logical
df[,2:6] <- sapply(df[,2:6], FUN=function(x) ifelse(x==1, TRUE, FALSE))
# now the columns are the correct types
> str(df)
'data.frame': 30 obs. of 15 variables:
$ nom : Factor w/ 3 levels "a","b","c": 1 1 1 1 1 1 1 1 1 1 ...
$ X1 : logi FALSE TRUE FALSE TRUE TRUE FALSE ...
$ X2 : logi TRUE TRUE TRUE TRUE FALSE FALSE ...
$ X3 : logi TRUE FALSE FALSE FALSE TRUE FALSE ...
$ X4 : logi FALSE TRUE FALSE TRUE FALSE FALSE ...
$ X5 : logi TRUE FALSE FALSE FALSE FALSE TRUE ...
$ X1.1: num 1.026 -0.285 -1.221 0.181 -0.139 ...
$ X2.1: num -0.045 -0.785 -1.668 -0.38 0.919 ...
$ X3.1: num 1.13 -1.46 0.74 1.91 -1.44 ...
$ X4.1: num 0.298 0.637 -0.484 0.517 0.369 ...
$ X5.1: num 1.997 0.601 -1.251 -0.611 -1.185 ...
$ X6 : num 0.0597 -0.7046 -0.7172 0.8847 -1.0156 ...
$ X7 : num -0.0886 1.0808 0.6308 -0.1136 -1.5329 ...
$ X8 : num 0.134 0.221 1.641 -0.219 0.168 ...
$ X9 : num 0.704 -0.106 -1.259 1.684 0.911 ...
# now you can do your calls
daisy.mat <- as.matrix(daisy(df, metric="gower", type=list(asymm=c(2,3,4,5,6))))
gower.mat <- gower.dist(df)
# and you can see that the results are the same
all.equal(as.matrix(daisy.mat), gower.mat, check.attributes = F)
[1] TRUE
52,631 | Gower distance with R functions; "gower.dist" and "daisy" | Yes, they give the same result, just as shown by cdeterman.
One difference I want to mention here is that "gower.dist" actually uses a kind of equal-weights method (what its documentation calls weights can only be 0 or 1), whereas "daisy" allows you to pass your own weight vector via the 'weights' argument.
Conclusion:
If you want a more flexible way to calculate Gower Dissimilarity, I prefer using "daisy" from package "cluster".
If your main interest is building a synthetic dataset, use "gower.dist"; it will save you a lot of time by directly using "NND.hotdeck".
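For example, a small sketch of daisy's weights argument on a made-up mixed data frame (one weight per column of the data):
library(cluster)
set.seed(123)
df <- data.frame(nom = factor(rep(letters[1:3], each = 10)),
                 x1 = rnorm(30), x2 = rnorm(30))                 # made-up mixed data
d_equal    <- daisy(df, metric = "gower")                        # default: equal weights
d_weighted <- daisy(df, metric = "gower", weights = c(2, 1, 1))  # nominal variable counts double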
52,632 | Should we remove axis and use direct label on scientific chart? | It depends on your goals.
Who is reading the graph? In print or on screen? Web or PDF? Are the numbers important, or just the relative magnitudes?
I'd say that, in this kind of plot, the number labels are fine and the axis label is probably unnecessary. That's because there are only a few bars and the scale is linear.
But in general these questions need to be decided on a case-by-case basis, and numerical labels can get ugly quickly. Having too much information is almost as bad as having too little. For example, it's totally counterproductive to report regression coefficients to the hundredths place unless your measurements are that precise and that precision actually matters. The same is true for plots, where the spatial variation along an axis is more than sufficient for making comparisons.
The reason I like the numbers here is that comparing distances along an axis takes eye movement and therefore effort: you need to constantly "flip" back and forth between the plot area and the axis. Dropping the axis and labeling the bars directly saves the reader that effort.
Dogmatic adherence to Tufte and Cleveland is a sign that you either didn't read them or didn't understand them.
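If you do go with direct labels, here is a hedged ggplot2 sketch (made-up numbers) of labeled bars with the now-redundant y axis suppressed:
library(ggplot2)
df <- data.frame(group = LETTERS[1:4], value = c(23, 41, 17, 35))  # made-up numbers
ggplot(df, aes(group, value)) +
  geom_col() +
  geom_text(aes(label = value), vjust = -0.4) +   # label each bar directly
  theme_minimal() +
  theme(axis.text.y = element_blank(),            # drop the redundant y axis
        axis.ticks.y = element_blank(),
        axis.title.y = element_blank(),
        panel.grid = element_blank())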
52,633 | Should we remove axis and use direct label on scientific chart? | Here's some sage advice from Jane Miller:
Data labels are typed numeric values adjacent to the pertinent slice, point, or bar in a chart ... To keep charts simple and readable, use data labels sparingly. Again, the main advantage of a chart is that it can illustrate general levels or patterns, which will be evident without data labels if your chart has adequate titles. Complement the general depiction in the chart with your text description, reporting exact values of selected numbers to document the patterns ... If your audience requires exact values for all numbers in the chart, replace the chart with a table or include an appendix table rather than putting data labels on each point. Reserve data labels for
- reference points; and
- reporting absolute level associated with a pie or stacked bar chart, such as total number of cases or total value of the contents of the pie
I don't like the direct labeling without the y-axis because many times the y axis gets truncated.
For example, compare two plots that show the same data. In the first one, if there was no y axis, at first glance, I would say that there was quite a bit of variation across the five groups. To make the correct inference, I would have to read and compare five labels, which is more cognitively demanding. At that point, you might as well use a table. When I see that the y-axis doesn't start at zero, it makes my Spider sense tingle. Removing the axis forecloses using that heuristic and creates more for my brain to do.
The second graph is more honest, but the labels would add clutter. One possible reason to use them is that someone might want to do some calculations based on those numbers. I view gridlines as a nice compromise between the labeling and less is more approach. In the digital realm, labels that appear when you click or hover over the bars are another option.
Another example of axes that you don't want removed is logarithmic one, where you want labels on the original scale. | Should we remove axis and use direct label on scientific chart? | Here's some sage advice from Jane Miller:
Data labels are typed numeric values adjacent to the pertinent slice,
point, or bar in a chart ... To keep charts simple and readable, use data labels spar | Should we remove axis and use direct label on scientific chart?
Here's some sage advice from Jane Miller:
Data labels are typed numeric values adjacent to the pertinent slice,
point, or bar in a chart ... To keep charts simple and readable, use data labels sparingly. Again, the main advantage of a chart is that it can illustrate general
levels or patterns, which will be evident without data labels if your
chart has adequate titles. Complement the general depiction in the
chart with your text description, reporting exact values of selected
numbers to document the patterns ... If your
audience requires exact values for all numbers in the chart, replace
the chart with a table or include an appendix table rather than
putting data labels on each point. Reserve data labels for
- reference points; and
- reporting absolute level associated with a pie or stacked bar chart, such as total number of cases or total value of the contents of the pie
I don't like the direct labeling without the y-axis because many times the y axis gets truncated.
For example, compare two plots that show the same data. In the first one, if there was no y axis, at first glance, I would say that there was quite a bit of variation across the five groups. To make the correct inference, I would have to read and compare five labels, which is more cognitively demanding. At that point, you might as well use a table. When I see that the y-axis doesn't start at zero, it makes my Spider sense tingle. Removing the axis forecloses using that heuristic and creates more for my brain to do.
The second graph is more honest, but the labels would add clutter. One possible reason to use them is that someone might want to do some calculations based on those numbers. I view gridlines as a nice compromise between the labeling and less is more approach. In the digital realm, labels that appear when you click or hover over the bars are another option.
Another example of an axis that you don't want removed is a logarithmic one, where you want labels on the original scale. | Should we remove axis and use direct label on scientific chart?
Here's some sage advice from Jane Miller:
Data labels are typed numeric values adjacent to the pertinent slice,
point, or bar in a chart ... To keep charts simple and readable, use data labels spar |
52,634 | Why use upper triangular Cholesky? | Traditionally, and in most of the "world" (literature), the convention that the Cholesky factor is lower triangular is the most common, i.e., $LL^T$.
In MATLAB and Octave, among others (R's chol), Cholesky factor is defined to be upper triangular, i.e., $R^TR$. This convention was inherited by MATLAB from LINPACK, because MATLAB was originally a front end for LINPACK and EISPACK.
LINPACK chose the then (1970s) unusual convention of defining the Cholesky factor to be upper triangular. This was due to its consistency with the QR decomposition, in which R is upper triangular (see the footnote on p. 28 of http://www.netlib.org/utk/people/JackDongarra/PAPERS/Chapter2-LINPACK.pdf ).
LINPACK's successor, LAPACK, does not have a default for upper vs. lower triangular, and makes the user specify which convention to use.
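For a quick feel for the two conventions, here is a small Python check (a NumPy/SciPy illustration of the same point; the matrix is arbitrary). NumPy's cholesky returns the lower factor, while SciPy's defaults to the upper one, mirroring the LINPACK/MATLAB choice.

    import numpy as np
    from scipy.linalg import cholesky as sp_cholesky

    rng = np.random.default_rng(0)
    B = rng.standard_normal((4, 4))
    A = B @ B.T + 4 * np.eye(4)          # an arbitrary symmetric positive definite matrix

    L = np.linalg.cholesky(A)            # NumPy: lower triangular, A = L @ L.T
    R = sp_cholesky(A)                   # SciPy: upper triangular by default, A = R.T @ R

    print(np.allclose(A, L @ L.T))       # True
    print(np.allclose(A, R.T @ R))       # True
    print(np.allclose(L, R.T))           # True: the two factors are transposes of each other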
Either way works. Note that $L^T = R$. The important thing is to use the Cholesky factor in a manner commensurate with the convention. | Why use upper triangular Cholesky? | Traditionally, and in most of the "world" (literature), the convention that the Cholesky factor is lower triangular is the most common, i.e., $LL^T$.
In MATLAB and Octave, among others (R's chol), Cho | Why use upper triangular Cholesky?
Traditionally, and in most of the "world" (literature), the convention that the Cholesky factor is lower triangular is the most common, i.e., $LL^T$.
In MATLAB and Octave, among others (R's chol), Cholesky factor is defined to be upper triangular, i.e., $R^TR$. This convention was inherited by MATLAB from LINPACK, because MATLAB was originally a front end for LINPACK and EISPACK.
LINPACK chose the then (1970s) unusual convention of defining the Cholesky factor to be upper triangular. This was due to its consistency with the QR decomposition, in which R is upper triangular (see the footnote on p. 28 of http://www.netlib.org/utk/people/JackDongarra/PAPERS/Chapter2-LINPACK.pdf ).
LINPACK's successor, LAPACK, does not have a default for upper vs. lower triangular, and makes the user specify which convention to use.
Either way works. Note that $L^T = R$. The important thing is to use the Cholesky factor in a manner commensurate with the convention. | Why use upper triangular Cholesky?
Traditionally, and in most of the "world" (literature), the convention that the Cholesky factor is lower triangular is the most common, i.e., $LL^T$.
In MATLAB and Octave, among others (R's chol), Cho |
52,635 | Why use upper triangular Cholesky? | It's really a matter of preference. Also $U'z$ will give you the same sample from a multivariate normal. Why? I'll leave this as an exercise for you.
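If you want to check that exercise numerically, here is a small NumPy/SciPy sketch (the matrix and sample size are arbitrary): both $Lz$ and $U'z$ turn iid standard normal draws into draws with covariance $A$.

    import numpy as np
    from scipy.linalg import cholesky

    rng = np.random.default_rng(1)
    B = rng.standard_normal((3, 3))
    A = B @ B.T + 3 * np.eye(3)             # an arbitrary covariance matrix

    U = cholesky(A, lower=False)            # upper factor: A = U.T @ U
    L = cholesky(A, lower=True)             # lower factor: A = L @ L.T

    Z = rng.standard_normal((3, 200_000))   # columns are iid N(0, I) draws
    print(np.round(np.cov(U.T @ Z) - A, 2)) # ~ 0: U'z has covariance A
    print(np.round(np.cov(L @ Z) - A, 2))   # ~ 0: Lz has covariance A too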
For me the upper Cholesky factorization $A = U'U$ is more natural, as I am more used to the `jik' algorithm, which takes its name from the ordering of the indices in the nested loops.
Remember that the function is implementing an algorithm. So, if they coded the Cholesky `jik' algorithm, they will first return the upper factor as this algorithm returns that.
If a lower factor is explicitly asked for, then they will do an extra computation (transposing the upper factor).
If you prefer the lower, go for it! | Why use upper triangular Cholesky? | It's really a matter of preference. Also $U'z$ will give you the same sample from a multivariate normal. Why? I'll leave this as an exercise for you.
For me it is more natural the upper Cholesky facto | Why use upper triangular Cholesky?
It's really a matter of preference. Also $U'z$ will give you the same sample from a multivariate normal. Why? I'll leave this as an exercise for you.
For me the upper Cholesky factorization $A = U'U$ is more natural, as I am more used to the `jik' algorithm, which takes its name from the ordering of the indices in the nested loops.
Remember that the function is implementing an algorithm. So, if they coded the Cholesky `jik' algorithm, they will first return the upper factor as this algorithm returns that.
If a lower factor is explicitly asked for, then they will do an extra computation (transposing the upper factor).
If you prefer the lower, go for it! | Why use upper triangular Cholesky?
It's really a matter of preference. Also $U'z$ will give you the same sample from a multivariate normal. Why? I'll leave this as an exercise for you.
For me it is more natural the upper Cholesky facto |
52,636 | Calculate $\mathbb{E}[Z_i]$ where $Z_i = \min(X_i, Y_{i-1})$, $X \sim Beta(\alpha,1)$ | Explicit computation of these expectations appears to be out of the question once the index exceeds two or three, so I will focus on issues that had been emphasized in earlier versions of the question:
What happens (asymptotically) as $i$ increases?
What happens as $\alpha$ increases?
The answers turn out to be interesting, unusual, and perhaps counterintuitive. I was surprised by the results and therefore felt it worthwhile to post such a lengthy answer for those who appreciate the subtle interplay of intuition, simulation, and analysis and to expose any flaws in my analysis to critical examination.
Exploration and Intuition
Let's get a handle on this process by describing it in words. A sequence of random values $(X_i)$ generates sequences $(Y_i)$ and $(Z_i)$. $Z_i$ is the smaller of (a) the current value of $X$ (namely, $X_i$) and (b) the average of all its previous values (which is called $Y_{i-1}$). This has two effects:
Because $Z_i$ cannot be any larger than $Y_{i-1}$, it follows (by inducting on $i$) that averaging $Z_i$ with all the previous $Y_j$ cannot increase $Y$. Thus, $(Y_i)$ is a non-increasing sequence. It decreases only when a value of $X$ falls below the running average of the $Z_i$.
The further along we go (that is, the larger the index $i$ is), the smaller is the possible change from $Y_{i-1}$ to $Y_i$, because the weight of $Z_i$ in the average is just $1/i$.
These effects show that any single realization of the process $(Y_i)$ must decrease more and more slowly, leveling off to a horizontal asymptote (because the $X_i$ are bounded below by $0$). Furthermore, one's intuition might suggest that when smaller values of $X_i$ are rare, then this asymptote ought to be positive. That is precisely what simulations suggest, as in the left hand plot in the figure which shows one realization of $(X_i)$ (as gray dots) and the corresponding $(Y_i)$ (as a dark graph) and $(Z_i)$ (as a faint red graph bouncing between the graph of $(Y_i)$ and the lowest values of the $(X_i)$):
The right hand plot displays, in red, $50$ independent realizations of $(Y_i)$, again for $\alpha=2$. (The black curve will be explained later.) Indeed, all these realizations seem to level off quickly to asymptotic values. The striking thing, though, is that these values differ quite a bit. The differences are induced by the large changes occurring very early on in the processes: when the very first one or two of the $X_i$ are small, all subsequent values of $(Y_i)$ must be even smaller. Here we have an example of a stochastic process with an extremely high degree of autocorrelation.
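Here is a minimal simulation sketch of that exploration (the start-up convention $Z_1 = X_1$ is an assumption on my part, since the first step has no previous average to compare against):

    import numpy as np

    def simulate_Y(alpha, n, rng):
        x = rng.beta(alpha, 1.0, size=n)
        y = np.empty(n)
        z = np.empty(n)
        z[0] = y[0] = x[0]                   # assumed start-up: Z_1 = X_1
        running_sum = z[0]
        for i in range(1, n):
            z[i] = min(x[i], y[i - 1])       # Z_i = min(X_i, Y_{i-1})
            running_sum += z[i]
            y[i] = running_sum / (i + 1)     # Y_i = mean(Z_1, ..., Z_i)
        return y

    rng = np.random.default_rng(2)
    paths = np.array([simulate_Y(alpha=2.0, n=10_000, rng=rng) for _ in range(50)])
    print(paths[:, -1].round(3))             # the 50 apparent "asymptotes" differ a lot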
If this intuition is correct, then the expectations $\mathbb{E}(Y_i)$ ought to be a decreasing function of $i$ and level off to some nonzero value depending on $\alpha$.
Most of this intuition is good--except that the probability that $(Y_i)$ levels off to a nonzero value is nil. That is, despite all appearances, essentially all realizations of $(Y_i)$ eventually go to zero! One of the more amazing results concerns how long this will take. First, though, let's see why a nonzero asymptotic value is so unlikely.
Analysis of $(Y_i)$
Consider one realization of $(X_i)$, which can be denoted $(x_i) = x_1, x_2, \ldots, x_i, \ldots$. Associated with it are realizations of $(Y_i)$ and $(Z_i)$, similarly denoted with lower case letters. Suppose that $(y_i)$ reaches some value $y \gt 0$ asymptotically. Since $(y_i)$ is non-increasing, this means that for any $\epsilon\gt 0$ there exists an integer $n$ such that
$$y + \epsilon > y_i \ge y$$
for all $i \ge n$.
Consider what happens when $x_i$ has a value smaller than $y_{i-1}$. The change in the running average $(y_i)$ is
$$y_{i} - y_{i-1} = \frac{(i-1)y_{i-1} + x_i}{i} - y_{i-1} = \frac{iy_{i-1} + x_i-y_{i-1}}{i}- y_{i-1} = \frac{x_i-y_{i-1}}{i}.$$
I am going to underestimate the size of that change by replacing it by $0$ if $z_i \ge y$ and, otherwise, by $(z_i - y)/i$. In other words, let's only count the amount by which $x_i$ is less than the asymptotic value $y$.
I will further underestimate the rate at which such decreases occur. They happen exactly when $x_i \le y_{i-1}$, which is more often than when $x_i \le y$. Writing $F$ for the CDF of $X$, this rate is $F(y)$, allowing us to express the expected value of the change at step $i$ as being an amount more negative than
$$\frac{F(y) (\mathbb{E}(X\, |\, X \lt y) - y)}{i}.$$
Consider, now, what happens to $(y_i)$ starting at $y_n$ and continuing for a huge number of steps $m \gg n$ to $y_{m+n}$. Accumulating these conservative estimates of the decreases causes $y_n$ to drop to a value expected to be less than the sum of the preceding fractions. I will underestimate the size of that sum by replacing the denominators $n+1, n+2, \ldots, n+m$ by the largest denominator $m+n$. Because there are now $m$ identical terms in the sum, this underestimate equals
$$\frac{m F(y) (\mathbb{E}(X\, |\, X \lt y) - y)}{m+n}.$$
Finally, we need to compare this expectation to what really is happening with the realization $(y_i)$. The Weak Law of Large Numbers says that for sufficiently large $m$, this realization is almost certain to exhibit a net decrease that is extremely close to the expected decrease. Let's accommodate this sense of "extremely close" by (a) taking $m$ much larger than $n$ but (b) underestimating $m/(m+n)$ as $1/2$. Thus,
It is almost certain that the change from $y_n$ to $y_{m+n}$ is greater in magnitude than
$$\beta(y) = \frac{F(y) (\mathbb{E}(X\, |\, X \lt y) - y)}{2} \le 0.$$
It is of logical importance that $m$ did not really depend on $n$: the only relationship among them is that $m$ should be much larger than $n$. Return, then, to the original setting where we supposed that $y$ was a nonzero horizontal asymptote of $(y_i)$. Choose $\epsilon = -\beta(y)$. Provided this was nonzero, it determined the value of $n$ (at which the sequence $(y_i)$ finally approaches within $\epsilon$ of its asymptote). By taking a sufficiently large $m$, we have concluded that $(y_i)$ eventually must decrease by more than $\epsilon$. That is,
$$y = y + \epsilon + \beta(y) > y_{m+n}.$$
Therefore we must have been wrong: either $y$ is not an asymptote of $(y_i)$ or else $\beta(y) = 0$. However, $\beta(y) = 0$ only when $y$ is smaller than all numbers in the support of $X$. In the case of a Beta distribution (any Beta distribution), the support is always the full unit interval $[0,1]$. Consequently, the only possible value at which almost all realizations $y_i$ can level off to is $0$. It follows immediately that
$$\lim_{i\to \infty} \mathbb{E}(Y_i) = 0.$$
Conclusions
For $X \sim $ Beta$(\alpha, 1)$, $F(x) = x^\alpha$ concentrates more and more of the probability near $1$ as $\alpha$ increases. Consequently it is obvious (and easily proven) that
$$\lim_{\alpha \to \infty} \mathbb{E}(Y_i) = 1.$$
Since $Z_i \le Y_i$ by definition, its expectation is squeezed between the expectation of $Y_i$ and zero, whence
$$\lim_{i\to \infty} \mathbb{E}(Z_i) = 0.$$
Comments
Notice that the results for the limiting values with respect to $i$ did not require that the $X_i$ have Beta distributions. Upon reviewing the argument it becomes clear that indeed the $X_i$ do not need to have identical distributions, nor do they need to be independent: the key idea behind the definition of $\beta(y)$ is that there needs to be a nonzero chance of seeing values of the $X_i$ that are appreciably less than $y$. This prevents most realizations $(y_i)$ from leveling off to any value $y$ for which $\beta(y)\lt 0$.
The rates at which realizations reach zero, however, can be astonishingly slow. Consider the problem setting again, in which the $X_i$ are iid with common CDF $F(x) = x^\alpha$. According to the previous estimates the expected rate of change is approximately
$$\frac{F(y) (\mathbb{E}(X\, |\, X \lt y) - y)}{i} = \frac{y^\alpha (\alpha/(\alpha+1)y - y)}{i}.$$
The solution can be closely approximated by taking these differences to be derivatives of the expectation $f(i) = \mathbb{E}(Y_i)$ and integrating the resulting differential equation, yielding (for $\alpha \gt 1$)
$$f(i) \approx \left(\frac{\alpha}{\alpha+1}\left(\log(i)+C\right)\right)^{-1/\alpha}$$
for some constant of integration $C$ (which we may ignore when studying the asymptotics for $i\to\infty$). Remember, this was obtained by consistently underestimating the rate of decrease of $f$. Therefore, $f$ approximates an upper bound of the realizations of $(Y_i)$ with probability $1$. Its graph (for $C=0$) is the thick black curve shown in the right-hand plot of the figure. It has the right shape and actually seems to be a pretty good approximation to the upper envelope of these realizations.
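For completeness, here is the integration sketched (treating the expected decrement above as the derivative $df/di$, as described):
$$\frac{df}{di} \approx \frac{f^\alpha\left(\alpha/(\alpha+1)f - f\right)}{i} = -\frac{f^{\alpha+1}}{(\alpha+1)i},$$
so that, separating variables,
$$-\int f^{-(\alpha+1)}\,df = \int \frac{di}{(\alpha+1)i} \quad\Longrightarrow\quad \frac{f^{-\alpha}}{\alpha} = \frac{\log(i)+C}{\alpha+1} \quad\Longrightarrow\quad f(i) = \left(\frac{\alpha}{\alpha+1}\left(\log(i)+C\right)\right)^{-1/\alpha}.$$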
This is a very slowly decreasing function. For instance, we might inquire how long it would take for the realizations to draw close to $0$: say, down to $y$. The general solution (ignoring $C$, which makes relatively little difference) is
$$i \approx \exp\left(\frac{\alpha+1}{\alpha}y^{-\alpha}\right).$$
Even with $\alpha=2$ (where each $X$ has an appreciable chance of being close to $0$), and $y=0.1$ (which isn't even terribly close to $0$), the solution is $i\approx 1.4\times 10^{65}$. For $\alpha=100$ and $y=1/2$, $i$ is near $10^{10^{30}}$. I am not going to wait around for that simulation to finish! The moral here is that simulations can sometimes deceive. Their correct interpretation must be informed by an analysis of the underlying phenomenon being simulated.
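Those two magnitudes are easy to reproduce on the log scale (a small Python check; the numbers overflow ordinary floats otherwise):

    import math

    def log10_steps(alpha, y):
        # log10 of i = exp(((alpha + 1) / alpha) * y**(-alpha))
        return ((alpha + 1) / alpha) * y ** (-alpha) / math.log(10)

    print(log10_steps(2, 0.1))      # about 65.1, i.e. i is roughly 1.4e65
    print(log10_steps(100, 0.5))    # about 5.6e29, i.e. i is roughly 10**(10**29.7)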
These asymptotic approximations appear to be pretty good, bringing us at least partway back from the purely limiting results obtained and the request in the question for information about the individual expectations $\mathbb{E}{Z_i}$, which will be less than but close to $\mathbb{E}{Y_i}$. | Calculate $\mathbb{E}[Z_i]$ where $Z_i = \min(X_i, Y_{i-1})$, $X \sim Beta(\alpha,1)$ | Explicit computation of these expectations appears to be out of the question once the index exceeds two or three, so I will focus on issues that had been emphasized in earlier versions of the question | Calculate $\mathbb{E}[Z_i]$ where $Z_i = \min(X_i, Y_{i-1})$, $X \sim Beta(\alpha,1)$
Explicit computation of these expectations appears to be out of the question once the index exceeds two or three, so I will focus on issues that had been emphasized in earlier versions of the question:
What happens (asymptotically) as $i$ increases?
What happens as $\alpha$ increases?
The answers turn out to be interesting, unusual, and perhaps counterintuitive. I was surprised by the results and therefore felt it worthwhile to post such a lengthy answer for those who appreciate the subtle interplay of intuition, simulation, and analysis and to expose any flaws in my analysis to critical examination.
Exploration and Intuition
Let's get a handle on this process by describing it in words. A sequence of random values $(X_i)$ generates sequences $(Y_i)$ and $(Z_i)$. $Z_i$ is the smaller of (a) the current value of $X$ (namely, $X_i$) and (b) the average of all its previous values (which is called $Y_{i-1}$). This has two effects:
Because $Z_i$ cannot be any larger than $Y_{i-1}$, it follows (by inducting on $i$) that averaging $Z_i$ with all the previous $Y_j$ cannot increase $Y$. Thus, $(Y_i)$ is a non-increasing sequence. It decreases only when a value of $X$ falls below the running average of the $Z_i$.
The further along we go (that is, the larger the index $i$ is), the smaller is the possible change from $Y_{i-1}$ to $Y_i$, because the weight of $Z_i$ in the average is just $1/i$.
These effects show that any single realization of the process $(Y_i)$ must decrease more and more slowly, leveling off to a horizontal asymptote (because the $X_i$ are bounded below by $0$). Furthermore, one's intuition might suggest that when smaller values of $X_i$ are rare, then this asymptote ought to be positive. That is precisely what simulations suggest, as in the left hand plot in the figure which shows one realization of $(X_i)$ (as gray dots) and the corresponding $(Y_i)$ (as a dark graph) and $(Z_i)$ (as a faint red graph bouncing between the graph of $(Y_i)$ and the lowest values of the $(X_i)$):
The right hand plot displays, in red, $50$ independent realizations of $(Y_i)$, again for $\alpha=2$. (The black curve will be explained later.) Indeed, all these realizations seem to level off quickly to asymptotic values. The striking thing, though, is that these values differ quite a bit. The differences are induced by the large changes occurring very early on in the processes: when the very first one or two of the $X_i$ are small, all subsequent values of $(Y_i)$ must be even smaller. Here we have an example of a stochastic process with an extremely high degree of autocorrelation.
If this intuition is correct, then the expectations $\mathbb{E}(Y_i)$ ought to be a decreasing function of $i$ and level off to some nonzero value depending on $\alpha$.
Most of this intuition is good--except that the probability that $(Y_i)$ levels off to a nonzero value is nil. That is, despite all appearances, essentially all realizations of $(Y_i)$ eventually go to zero! One of the more amazing results concerns how long this will take. First, though, let's see why a nonzero asymptotic value is so unlikely.
Analysis of $(Y_i)$
Consider one realization of $(X_i)$, which can be denoted $(x_i) = x_1, x_2, \ldots, x_i, \ldots$. Associated with it are realizations of $(Y_i)$ and $(Z_i)$, similarly denoted with lower case letters. Suppose that $(y_i)$ reaches some value $y \gt 0$ asymptotically. Since $(y_i)$ is non-increasing, this means that for any $\epsilon\gt 0$ there exists an integer $n$ such that
$$y + \epsilon > y_i \ge y$$
for all $i \ge n$.
Consider what happens when $x_i$ has a value smaller than $y_{i-1}$. The change in the running average $(y_i)$ is
$$y_{i} - y_{i-1} = \frac{(i-1)y_{i-1} + x_i}{i} - y_{i-1} = \frac{iy_{i-1} + x_i-y_{i-1}}{i}- y_{i-1} = \frac{x_i-y_{i-1}}{i}.$$
I am going to underestimate the size of that change by replacing it by $0$ if $z_i \ge y$ and, otherwise, by $(z_i - y)/i$. In other words, let's only count the amount by which $x_i$ is less than the asymptotic value $y$.
I will further underestimate the rate at which such decreases occur. They happen exactly when $x_i \le y_{i-1}$, which is more often than when $x_i \le y$. Writing $F$ for the CDF of $X$, this rate is $F(y)$, allowing us to express the expected value of the change at step $i$ as being an amount more negative than
$$\frac{F(y) (\mathbb{E}(X\, |\, X \lt y) - y)}{i}.$$
Consider, now, what happens to $(y_i)$ starting at $y_n$ and continuing for a huge number of steps $m \gg n$ to $y_{m+n}$. Accumulating these conservative estimates of the decreases causes $y_n$ to drop to a value expected to be less than the sum of the preceding fractions. I will underestimate the size of that sum by replacing the denominators $n+1, n+2, \ldots, n+m$ by the largest denominator $m+n$. Because there are now $m$ identical terms in the sum, this underestimate equals
$$\frac{m F(y) (\mathbb{E}(X\, |\, X \lt y) - y)}{m+n}.$$
Finally, we need to compare this expectation to what really is happening with the realization $(y_i)$. The Weak Law of Large Numbers says that for sufficiently large $m$, this realization is almost certain to exhibit a net decrease that is extremely close to the expected decrease. Let's accommodate this sense of "extremely close" by (a) taking $m$ much larger than $n$ but (b) underestimating $m/(m+n)$ as $1/2$. Thus,
It is almost certain that the change from $y_n$ to $y_{m+n}$ is greater in magnitude than
$$\beta(y) = \frac{F(y) (\mathbb{E}(X\, |\, X \lt y) - y)}{2} \le 0.$$
It is of logical importance that $m$ did not really depend on $n$: the only relationship among them is that $m$ should be much larger than $n$. Return, then, to the original setting where we supposed that $y$ was a nonzero horizontal asymptote of $(y_i)$. Choose $\epsilon = -\beta(y)$. Provided this was nonzero, it determined the value of $n$ (at which the sequence $(y_i)$ finally approaches within $\epsilon$ of its asymptote). By taking a sufficiently large $m$, we have concluded that $(y_i)$ eventually must decrease by more than $\epsilon$. That is,
$$y = y + \epsilon + \beta(y) > y_{m+n}.$$
Therefore we must have been wrong: either $y$ is not an asymptote of $(y_i)$ or else $\beta(y) = 0$. However, $\beta(y) = 0$ only when $y$ is smaller than all numbers in the support of $X$. In the case of a Beta distribution (any Beta distribution), the support is always the full unit interval $[0,1]$. Consequently, the only possible value at which almost all realizations $y_i$ can level off to is $0$. It follows immediately that
$$\lim_{i\to \infty} \mathbb{E}(Y_i) = 0.$$
Conclusions
For $X \sim $ Beta$(\alpha, 1)$, $F(x) = x^\alpha$ concentrates more and more of the probability near $1$ as $\alpha$ increases. Consequently it is obvious (and easily proven) that
$$\lim_{\alpha \to \infty} \mathbb{E}(Y_i) = 1.$$
Since $Z_i \le Y_i$ by definition, its expectation is squeezed between the expectation of $Y_i$ and zero, whence
$$\lim_{i\to \infty} \mathbb{E}(Z_i) = 0.$$
Comments
Notice that the results for the limiting values with respect to $i$ did not require that the $X_i$ have Beta distributions. Upon reviewing the argument it becomes clear that indeed the $X_i$ do not need to have identical distributions, nor do they need to be independent: the key idea behind the definition of $\beta(y)$ is that there needs to be a nonzero chance of seeing values of the $X_i$ that are appreciably less than $y$. This prevents most realizations $(y_i)$ from leveling off to any value $y$ for which $\beta(y)\lt 0$.
The rates at which realizations reach zero, however, can be astonishingly slow. Consider the problem setting again, in which the $X_i$ are iid with common CDF $F(x) = x^\alpha$. According to the previous estimates the expected rate of change is approximately
$$\frac{F(y) (\mathbb{E}(X\, |\, X \lt y) - y)}{i} = \frac{y^\alpha (\alpha/(\alpha+1)y - y)}{i}.$$
The solution can be closely approximated by taking these differences to be derivatives of the expectation $f(i) = \mathbb{E}(Y_i)$ and integrating the resulting differential equation, yielding (for $\alpha \gt 1$)
$$f(i) \approx \left(\frac{\alpha}{\alpha+1}\left(\log(i)+C\right)\right)^{-1/\alpha}$$
for some constant of integration $C$ (which we may ignore when studying the asymptotics for $i\to\infty$). Remember, this was obtained by consistently underestimating the rate of decrease of $f$. Therefore, $f$ approximates an upper bound of the realizations of $(Y_i)$ with probability $1$. Its graph (for $C=0$) is the thick black curve shown in the right-hand plot of the figure. It has the right shape and actually seems to be a pretty good approximation to the upper envelope of these realizations.
This is a very slowly decreasing function. For instance, we might inquire how long it would take for the realizations to draw close to $0$: say, down to $y$. The general solution (ignoring $C$, which makes relatively little difference) is
$$i \approx \exp\left(\frac{\alpha+1}{\alpha}y^{-\alpha}\right).$$
Even with $\alpha=2$ (where each $X$ has an appreciable chance of being close to $0$), and $y=0.1$ (which isn't even terribly close to $0$), the solution is $i\approx 1.4\times 10^{65}$. For $\alpha=100$ and $y=1/2$, $i$ is near $10^{10^{30}}$. I am not going to wait around for that simulation to finish! The moral here is that simulations can sometimes deceive. Their correct interpretation must be informed by an analysis of the underlying phenomenon being simulated.
These asymptotic approximations appear to be pretty good, bringing us at least partway back from the purely limiting results obtained and the request in the question for information about the individual expectations $\mathbb{E}{Z_i}$, which will be less than but close to $\mathbb{E}{Y_i}$. | Calculate $\mathbb{E}[Z_i]$ where $Z_i = \min(X_i, Y_{i-1})$, $X \sim Beta(\alpha,1)$
Explicit computation of these expectations appears to be out of the question once the index exceeds two or three, so I will focus on issues that had been emphasized in earlier versions of the question |
52,637 | Calculate $\mathbb{E}[Z_i]$ where $Z_i = \min(X_i, Y_{i-1})$, $X \sim Beta(\alpha,1)$ | A conditional approach appears possibly helpful here.
The variable $Z_{i+1}$ can be written using indicator functions,
$$Z_{i+1} = X_{i+1}\cdot I_{\{X_{i+1} \leq Y_i\}}+Y_i\cdot [1-I_{\{X_{i+1} \leq Y_i\}}] $$
Denote by $\mathcal F_i$ the sigma-algebra at time $i$, which includes $Z_i$ and hence $Y_i$, and consider the conditional expected value
$$E[Z_{i+1}\mid \mathcal F_i] = E[X_{i+1}\cdot I_{\{X_{i+1} \leq Y_i\}}\mid \mathcal F_i]+Y_i\cdot \Big(1-E[I_{\{X_{i+1} \leq Y_i\}}\mid\mathcal F_i]\Big) $$
$$=\int_0^{Y_i}xf_X(x)dx + Y_i\cdot \Big(1-P[X_{i+1} \leq Y_i]\Big)$$
where we have used the fact that the $X$'s are i.i.d. Using the $\text{Beta}(\alpha,1)$ distributional assumption on $X$ we have
$$E[Z_{i+1}\mid \mathcal F_i] = \frac{\alpha}{1+\alpha}x^{1+\alpha}\Big|^{Y_i}_0+Y_i\cdot \Big(1-Y_i^\alpha\Big) =Y_i -\frac 1{1+\alpha}Y_i^{1+\alpha}$$
Therefore
$$E(Z_{i+1}) = E\Big(E[Z_{i+1}\mid \mathcal F_i]\Big) = E(Y_i) -\frac 1{1+\alpha}E(Y_i^{1+\alpha}) \tag{1}$$
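A quick Monte Carlo check of the conditional-expectation formula above (with arbitrary illustration values of $\alpha$ and $y$) may be reassuring:

    import numpy as np

    rng = np.random.default_rng(3)
    alpha, y = 2.0, 0.6                         # arbitrary illustration values
    x = rng.beta(alpha, 1.0, size=1_000_000)

    mc = np.minimum(x, y).mean()                # E[min(X, y)] by simulation
    exact = y - y ** (1 + alpha) / (1 + alpha)  # the formula derived above
    print(mc, exact)                            # the two agree to about 3 decimals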
We can write
$$(1+i)Y_{i+1} = \sum \limits_{t=1}^{i+1} Z_t = \sum \limits_{t=1}^{i} Z_t +Z_{i+1}$$
$$\Rightarrow \frac {1+i}{i}E[Y_{i+1}] = E(Y_i) +\frac 1i E(Z_{i+1}) \Rightarrow E(Y_i) = \frac {1+i}{i}E[Y_{i+1}] - \frac 1i E(Z_{i+1})$$
Inserting into $(1)$ and re-arranging we get
$$E(Z_{i+1}) = \frac {1+i}{i}E[Y_{i+1}] - \frac 1i E(Z_{i+1}) -\frac 1{1+\alpha}E[Y_i^{1+\alpha}]$$
$$\Rightarrow E(Z_{i+1}) = E[Y_{i+1}]-\frac i{1+i}\frac 1{1+\alpha}E[Y_i^{1+\alpha}] \tag{2}$$
which links (partly) the concurrent expected values of $Z$ and $Y$.
Moreover, equating the right-hand-sides of $(1)$ and $(2)$ and re-arranging we get
$$E(Y_{i+1}-Y_i) = -\frac{1}{1+i}\frac 1{1+\alpha}E[Y_i^{1+\alpha}] \tag{3}$$
which perhaps can make us "suspect" that the approach to zero may be slow. | Calculate $\mathbb{E}[Z_i]$ where $Z_i = \min(X_i, Y_{i-1})$, $X \sim Beta(\alpha,1)$ | A conditional approach appears possibly helpful here.
The variable $Z_{i+1}$ can be written using indicator functions,
$$Z_{i+1} = X_{i+1}\cdot I_{\{X_{i+1} \leq Y_i\}}+Y_i\cdot [1-I_{\{X_{i+1} \leq | Calculate $\mathbb{E}[Z_i]$ where $Z_i = \min(X_i, Y_{i-1})$, $X \sim Beta(\alpha,1)$
A conditional approach appears possibly helpful here.
The variable $Z_{i+1}$ can be written using indicator functions,
$$Z_{i+1} = X_{i+1}\cdot I_{\{X_{i+1} \leq Y_i\}}+Y_i\cdot [1-I_{\{X_{i+1} \leq Y_i\}}] $$
Denote by $\mathcal F_i$ the sigma-algebra at time $i$, which includes $Z_i$ and hence $Y_i$, and consider the conditional expected value
$$E[Z_{i+1}\mid \mathcal F_i] = E[X_{i+1}\cdot I_{\{X_{i+1} \leq Y_i\}}\mid \mathcal F_i]+Y_i\cdot \Big(1-E[I_{\{X_{i+1} \leq Y_i\}}\mid\mathcal F_i]\Big) $$
$$=\int_0^{Y_i}xf_X(x)dx + Y_i\cdot \Big(1-P[X_{i+1} \leq Y_i]\Big)$$
where we have used the fact that the $X$'s are i.i.d. Using the $\text{Beta}(\alpha,1)$ distributional assumption on $X$ we have
$$E[Z_{i+1}\mid \mathcal F_i] = \frac{\alpha}{1+\alpha}x^{1+\alpha}\Big|^{Y_i}_0+Y_i\cdot \Big(1-Y_i^\alpha\Big) =Y_i -\frac 1{1+\alpha}Y_i^{1+\alpha}$$
Therefore
$$E(Z_{i+1}) = E\Big(E[Z_{i+1}\mid \mathcal F_i]\Big) = E(Y_i) -\frac 1{1+\alpha}E(Y_i^{1+\alpha}) \tag{1}$$
We can write
$$(1+i)Y_{i+1} = \sum \limits_{t=1}^{i+1} Z_t = \sum \limits_{t=1}^{i} Z_t +Z_{i+1}$$
$$\Rightarrow \frac {1+i}{i}E[Y_{i+1}] = E(Y_i) +\frac 1i E(Z_{i+1}) \Rightarrow E(Y_i) = \frac {1+i}{i}E[Y_{i+1}] - \frac 1i E(Z_{i+1})$$
Inserting into $(1)$ and re-arranging we get
$$E(Z_{i+1}) = \frac {1+i}{i}E[Y_{i+1}] - \frac 1i E(Z_{i+1}) -\frac 1{1+\alpha}E[Y_i^{1+\alpha}]$$
$$\Rightarrow E(Z_{i+1}) = E[Y_{i+1}]-\frac i{1+i}\frac 1{1+\alpha}E[Y_i^{1+\alpha}] \tag{2}$$
which links (partly) the concurrent expected values of $Z$ and $Y$.
Moreover, equating the right-hand-sides of $(1)$ and $(2)$ and re-arranging we get
$$E(Y_{i+1}-Y_i) = -\frac{1}{1+i}\frac 1{1+\alpha}E[Y_i^{1+\alpha}] \tag{3}$$
which perhaps can make us "suspect" that the approach to zero may be slow. | Calculate $\mathbb{E}[Z_i]$ where $Z_i = \min(X_i, Y_{i-1})$, $X \sim Beta(\alpha,1)$
A conditional approach appears possibly helpful here.
The variable $Z_{i+1}$ can be written using indicator functions,
$$Z_{i+1} = X_{i+1}\cdot I_{\{X_{i+1} \leq Y_i\}}+Y_i\cdot [1-I_{\{X_{i+1} \leq |
52,638 | Common name for distributions that are bounded on one side | I would say no, as there are quite a few distributions in addition to log-normal that have support on $[ 0, \infty)$, such as the $\chi^2$ or gamma. (Wikipedia even has a list devoted to this criterion.)
In practice there are different circumstances in which such distributions are useful for approximating actually observed and measured phenomena. For one example, in some situations you may use the term censored at the boundary to describe how observations can go above and below the boundary, but they can only be measured within a certain support (and when they go outside this support they are recorded as being at the end point and/or beyond). This typically arises when you have an instrument that cannot measure the numerical value outside of the bounds; it only knows that the value is at or below the boundary. For the log-normal, an example is the measurement of $\log(\text{wages})$. I believe for the US census they censor the distribution at $0$, although people who own their own businesses can be in the negative. (Some macroeconomic variables are well approximated by a log-normal distribution, but the support of the actual micro-level units is partly in the negative.)
Another example (as user41315 mentioned) is truncated distributions. Truncated means "chopped off". Sometimes we only observe/record the measurement if it exceeds the boundary. For another economic example with wages, let's say you only had to file taxes if your wages were above $0$. So, unlike the census, which just records $0$ even if you have less than $0$, you simply don't observe the individuals with less than $0$ wages. As whuber stated in the comment, you can take any distribution and re-express it as a truncated one.
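A tiny sketch may make the distinction concrete (the numbers are made up): censoring records boundary cases at the boundary, truncation never records them at all.

    import numpy as np

    rng = np.random.default_rng(4)
    latent = rng.normal(loc=0.5, scale=1.0, size=10_000)   # hypothetical latent log-wages

    censored = np.maximum(latent, 0.0)     # values below 0 are recorded as 0
    truncated = latent[latent > 0.0]       # values below 0 are never observed

    print(np.mean(censored == 0.0))        # a spike of probability mass sits at 0
    print(truncated.size, latent.size)     # truncation shrinks the sample instead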
The descriptions of censored or truncated refer to how the data are measured, and not to particular distributions. Not all measurements that are bounded on $[ 0, \infty)$ are necessarily truncated or censored, though. For example, we know that distances or squares of values cannot go below $0$.
I would say no, as there are quite a few distributions in addition to log-normal that have support on $[ 0, \infty)$, such as the $\chi^2$ or gamma. (Wikipedia even has a list devoted to this criterion.)
In practice there are different circumstances in which such distributions are useful for approximating actually observed and measured phenomena. For one example, in some situations you may use the term censored at the boundary to describe how observations can go above and below the boundary, but they can only be measured within a certain support (and when they go outside this support they are recorded as being at the end point and/or beyond). This typically arises when you have an instrument that cannot measure the numerical value outside of the bounds; it only knows that the value is at or below the boundary. For the log-normal, an example is the measurement of $\log(\text{wages})$. I believe for the US census they censor the distribution at $0$, although people who own their own businesses can be in the negative. (Some macroeconomic variables are well approximated by a log-normal distribution, but the support of the actual micro-level units is partly in the negative.)
Another example (as user41315 mentioned) is truncated distributions. Truncated means "chopped off". Sometimes we only observe/record the measurement if it exceeds the boundary. For another economic example with wages, let's say you only had to file taxes if your wages were above $0$. So, unlike the census, which just records $0$ even if you have less than $0$, you simply don't observe the individuals with less than $0$ wages. As whuber stated in the comment, you can take any distribution and re-express it as a truncated one.
The descriptions of censored or truncated refer to how the data are measured, and not to particular distributions. Not all measurements that are bounded on $[ 0, \infty)$ are necessarily truncated or censored, though. For example, we know that distances or squares of values cannot go below $0$.
I would say no, as there are quite a few distributions in addition to log-normal that have support on $[ 0, \infty)$, such as the $\chi^2$ or gamma. (Wikipedia even has a list devoted to this criterio |
52,639 | Common name for distributions that are bounded on one side | There seems to be no standard term, based on my experience. Some people refer to them as "one-sided", or "of one-sided support". I'd say "supported on [the positive half-line]", even though that is clunky.
Edit: related to Andy W's comment: "truncated" rather than "censored" is more appropriate if the values outside a certain range are ignored (without adding mass to the observation window's endpoints). [I don't have enough reputation to comment directly on your answer].
There seems to be no standard term, based on my experience. Some people refer to them as "one-sided", or "of one-sided support". I'd say "supported on [the positive half-line]", even though that is clunky.
Edit: related to Andy W's comment: "truncated" rather than "censored" is more appropriate if the values outside a certain range are ignored (without adding mass to the observation window's endpoints). [I don't have enough reputation to comment directly on your answer].
There seems to be no standard term, based on my experience. Some people refer to them as "one-sided", or "of one-sided support". I'd say "supported on [the positive half-line]", even though that is |
52,640 | Automatic identification of distribution of data | Identifying the distribution of data is essentially impossible.
The class of distribution functions is very large; it must be at least as large as the cardinality of $\mathbb{R}$ (e.g. consider only the unit step functions, corresponding to a constant value at some $x$ - there are as many of those as the cardinality of $\mathbb{R}$, so it must be at least that large).
Further, any cdf has an infinity of "near neighbors" that at a given sample size are hard to tell from any given distribution. (e.g. if we consider the KS-statistic to tell them apart, there's an infinite number of distributions close enough to the true distribution that a test at some sample size won't be able to tell the difference).
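To see the "near neighbor" point in action, here is a small sketch (the choice of distributions and sample size is arbitrary): data from a $t_{30}$ distribution are essentially never distinguished from a standard normal by a KS test at this sample size.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)
    rejections = 0
    for _ in range(500):
        sample = stats.t.rvs(df=30, size=200, random_state=rng)
        if stats.kstest(sample, "norm").pvalue < 0.05:
            rejections += 1
    print(rejections / 500)   # close to the nominal 5%, i.e. essentially no power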
So the idea that we can say, on the basis of a sample, that this must be the distribution is hopeless.
If we restrict ourselves to some small list of candidates, then at some large sample size we might hope to rule out almost all the list (which sounds useful) ... but then we may actually end up ruling out the entire list (and indeed as sample sizes become large, this becomes essentially a certainty, because the chances our list includes the actual distribution of the data will be essentially zero).
[Further, tools which have a very large list of distribution families to choose from often "overfit", which can be counterproductive. One hopes that they might eventually catch up with some of the ideas which help avoid this problem, but even then, in general domain knowledge is going to be a better tool for good model choice than some arbitrary list of distributions.]
Indeed the entire approach seems pointless, because not only are real distributions typically going to be more complex than we can ever hope to identify (e.g. we might conceive of them consisting of arbitrary mixtures), knowing the true distribution class would be effectively useless as a model (with more parameters than observations, for example).
Probability models are in general little more than (hopefully) useful approximations. We should treat them as such.
So our interest should not be on identifying the distribution, but a distribution - one which adequately describes the situation well enough for our purposes, but which is simple enough for us to do something with at the sample sizes we can get. | Automatic identification of distribution of data | Identifying the distribution of data is essentially impossible.
The class of distribution functions is very large; it must be at least as large as the cardinality of $\mathbb{R}$ (e.g. consider only t | Automatic identification of distribution of data
Identifying the distribution of data is essentially impossible.
The class of distribution functions is very large; it must be at least as large as the cardinality of $\mathbb{R}$ (e.g. consider only the unit step functions, corresponding to a constant value at some $x$ - there are as many of those as the cardinality of $\mathbb{R}$, so it must be at least that large).
Further, any cdf has an infinity of "near neighbors" that at a given sample size are hard to tell from any given distribution. (e.g. if we consider the KS-statistic to tell them apart, there's an infinite number of distributions close enough to the true distribution that a test at some sample size won't be able to tell the difference).
So the idea that we can say, on the basis of a sample, that this must be the distribution is hopeless.
If we restrict ourselves to some small list of candidates, then at some large sample size we might hope to rule out almost all the list (which sounds useful) ... but then we may actually end up ruling out the entire list (and indeed as sample sizes become large, this becomes essentially a certainty, because the chances our list includes the actual distribution of the data will be essentially zero).
[Further, tools which have a very large list of distribution families to choose from often "overfit", which can be counterproductive. One hopes that they might eventually catch up with some of the ideas which help avoid this problem, but even then, in general domain knowledge is going to be a better tool for good model choice than some arbitrary list of distributions.]
Indeed the entire approach seems pointless, because not only are real distributions typically going to be more complex than we can ever hope to identify (e.g. we might conceive of them consisting of arbitrary mixtures), knowing the true distribution class would be effectively useless as a model (with more parameters than observations, for example).
Probability models are in general little more than (hopefully) useful approximations. We should treat them as such.
So our interest should not be on identifying the distribution, but a distribution - one which adequately describes the situation well enough for our purposes, but which is simple enough for us to do something with at the sample sizes we can get. | Automatic identification of distribution of data
Identifying the distribution of data is essentially impossible.
The class of distribution functions is very large; it must be at least as large as the cardinality of $\mathbb{R}$ (e.g. consider only t |
52,641 | Automatic identification of distribution of data | The problem with this approach in practice is that most data sets are small enough that many distributions will adequately fit the data. If you arbitrarily pick a distribution that happens to fit the data and then proceed to do calculations or a simulation under this assumption, you can be badly misled.
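As a quick sketch of that point (with made-up data), several families will usually fit a small sample about equally well, and a KS check typically flags none of them (p-values computed with estimated parameters are only indicative, which is beside the point here):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(6)
    data = stats.lognorm.rvs(s=0.5, size=60, random_state=rng)     # the "true" model

    for dist in (stats.lognorm, stats.gamma, stats.weibull_min):
        params = dist.fit(data, floc=0)                            # location fixed at 0
        pval = stats.kstest(data, dist.name, args=params).pvalue
        print(dist.name, round(pval, 2))                           # typically all comfortably large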
This problem occurs frequently in discrete event simulation modeling. One practical approach is to run the simulation model using a variety of distributions to see whether the results are sensitive to the distributional assumptions.
If you're doing statistical analysis, then nonparametric statistics can often be used to analyze your data without making distributional assumptions. | Automatic identification of distribution of data | The problem with this approach in practice is that most data sets are small enough that many distributions will adequately fit the data. If you arbitrarily pick a distribution that happens to fit the | Automatic identification of distribution of data
The problem with this approach in practice is that most data sets are small enough that many distributions will adequately fit the data. If you arbitrarily pick a distribution that happens to fit the data and then proceed to do calculations or a simulation under this assumption, you can be badly misled.
This problem occurs frequently in discrete event simulation modeling. One practical approach is to run the simulation model using a variety of distributions to see whether the results are sensitive to the distributional assumptions.
If you're doing statistical analysis, then nonparametric statistics can often be used to analyze your data without making distributional assumptions. | Automatic identification of distribution of data
The problem with this approach in practice is that most data sets are small enough that many distributions will adequately fit the data. If you arbitrarily pick a distribution that happens to fit the |
52,642 | Why is the true (test) error rate of any classifier 50%? | That's not a general statement about classifiers. In this particular case where the class frequencies are half & half, & none of the predictors are any use, the true error rate, of any classifier, is 50%. Imagine trying to predict the result of coin tosses from denomination, year of issue, metal content, &c.—in the long run you won't do better than 50% error rate. The point of the quoted passage is that cross-validation that ignores a model selection step gives an optimistic estimate of performance. | Why is the true (test) error rate of any classifier 50%? | That's not a general statement about classifiers. In this particular case where the class frequencies are half & half, & none of the predictors are any use, the true error rate, of any classifier, is | Why is the true (test) error rate of any classifier 50%?
That's not a general statement about classifiers. In this particular case where the class frequencies are half & half, & none of the predictors are any use, the true error rate, of any classifier, is 50%. Imagine trying to predict the result of coin tosses from denomination, year of issue, metal content, &c.—in the long run you won't do better than 50% error rate. The point of the quoted passage is that cross-validation that ignores a model selection step gives an optimistic estimate of performance. | Why is the true (test) error rate of any classifier 50%?
That's not a general statement about classifiers. In this particular case where the class frequencies are half & half, & none of the predictors are any use, the true error rate, of any classifier, is |
52,643 | Why is the true (test) error rate of any classifier 50%? | To expand on the answer above, the key point is that the predictors are independent of the class labels (of no use) i.e. any forecast using these predictors is equivalent to a random draw from the class labels. | Why is the true (test) error rate of any classifier 50%? | To expand on the answer above, the key point is that the predictors are independent of the class labels (of no use) i.e. any forecast using these predictors is equivalent to a random draw from the cla | Why is the true (test) error rate of any classifier 50%?
To expand on the answer above, the key point is that the predictors are independent of the class labels (of no use) i.e. any forecast using these predictors is equivalent to a random draw from the class labels. | Why is the true (test) error rate of any classifier 50%?
To expand on the answer above, the key point is that the predictors are independent of the class labels (of no use) i.e. any forecast using these predictors is equivalent to a random draw from the cla |
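A small simulation sketch of the point in the two answers above (using scikit-learn; the sizes are arbitrary): with 50/50 labels and pure-noise predictors, honest cross-validation sits near 50% accuracy, while screening the predictors on the full data first makes the same classifier look far better than it is.

    import numpy as np
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(7)
    X = rng.standard_normal((50, 5000))          # pure noise predictors
    y = np.repeat([0, 1], 25)                    # half-and-half class labels

    # Wrong: screen features on the full data, then cross-validate the classifier.
    keep = SelectKBest(f_classif, k=20).fit(X, y).get_support()
    wrong = cross_val_score(LogisticRegression(), X[:, keep], y, cv=5).mean()

    # Right: screening happens inside the pipeline, so it is redone in every fold.
    pipe = make_pipeline(SelectKBest(f_classif, k=20), LogisticRegression())
    right = cross_val_score(pipe, X, y, cv=5).mean()

    print(round(wrong, 2), round(right, 2))      # clearly optimistic versus close to 0.5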
52,644 | Should there be an "i" in a regression equation? | The $i$'s usually index the observations in the sample used to fit the model, so if you simply want to present the predictive equation for a single new observation, there's no need for them. Also be careful not to confuse the random variable, its observed values, & the fitted values: if you've previously defined $Y_i$ as the $i$th observed value of the dependent variable then
$$Y_i= .432 + .320 Age_i + .520 WE_i + .300 JP1_i + .210 JP2_i$$
is wrong because it omits the residual term. A common notational scheme is $Y$ for the random variable, $y$ for its observed values, & $\hat{y}$ for the fits.
I'd suggest you write the equation like this
$$\hat y = 0.432 + 0.320 x_1 + 0.520 x_2 + 0.300 x_3 + 0.210 x_4$$
(defining the terms appropriately) or like this
$$\mathrm{Salary} = 0.432 + (0.320 \times \mathrm{Age}) + (0.520 \times \mathrm{WE}) + (0.300 \times \mathrm{JP1}) + (0.210 \times \mathrm{JP2})$$
rather than mixing up formal mathematical notation with a word equation. NB:
Italics emphasize single-letter place-holders for numbers, functions, or operators that you define. Don't use them in word equations. (In any case, in the LaTeX math environment fluff gives the product $fluff$; \mathit{fluff} gives the italicized word $\mathit{fluff}$.)
S.I. (& I'd wager any other convention written down as such) mandates showing leading zeroes before decimal points.
The brackets in the word equation aren't necessary, but your readers may not know that.
With many terms a table of coefficients is more convenient. | Should there be an "i" in a regression equation? | The $i$'s usually index the observations in the sample used to fit the model, so if you simply want to present the predictive equation for a single new observation, there's no need for them. Also be c | Should there be an "i" in a regression equation?
The $i$'s usually index the observations in the sample used to fit the model, so if you simply want to present the predictive equation for a single new observation, there's no need for them. Also be careful not to confuse the random variable, its observed values, & the fitted values: if you've previously defined $Y_i$ as the $i$th observed value of the dependent variable then
$$Y_i= .432 + .320 Age_i + .520 WE_i + .300 JP1_i + .210 JP2_i$$
is wrong because it omits the residual term. A common notational scheme is $Y$ for the random variable, $y$ for its observed values, & $\hat{y}$ for the fits.
I'd suggest you write the equation like this
$$\hat y = 0.432 + 0.320 x_1 + 0.520 x_2 + 0.300 x_3 + 0.210 x_4$$
(defining the terms appropriately) or like this
$$\mathrm{Salary} = 0.432 + (0.320 \times \mathrm{Age}) + (0.520 \times \mathrm{WE}) + (0.300 \times \mathrm{JP1}) + (0.210 \times \mathrm{JP2})$$
rather than mixing up formal mathematical notation with a word equation. NB:
Italics emphasize single-letter place-holders for numbers, functions, or operators that you define. Don't use them in word equations. (In any case, in the LaTeX math environment fluff gives the product $fluff$; \mathit{fluff} gives the italicized word $\mathit{fluff}$.)
S.I. (& I'd wager any other convention written down as such) mandates showing leading zeroes before decimal points.
The brackets in the word equation aren't necessary, but your readers may not know that.
With many terms a table of coefficients is more convenient. | Should there be an "i" in a regression equation?
The $i$'s usually index the observations in the sample used to fit the model, so if you simply want to present the predictive equation for a single new observation, there's no need for them. Also be c |
52,645 | Should there be an "i" in a regression equation? | Both are correct, but if you use the i 's they should preferably be subscripts:
$Y_i = .432 + .320Age_i + .520WE_i + .300JP1_i + .210JP2_i$
If you don't use the i's then the equation is about vectors. | Should there be an "i" in a regression equation? | Both are correct, but if you use the i 's they should preferably be subscripts:
$Y_i = .432 + .320Age_i + .520WE_i + .300JP1_i + .210JP2_i$
If you don't use the i's then the equation is about vectors. | Should there be an "i" in a regression equation?
Both are correct, but if you use the i 's they should preferably be subscripts:
$Y_i = .432 + .320Age_i + .520WE_i + .300JP1_i + .210JP2_i$
If you don't use the i's then the equation is about vectors. | Should there be an "i" in a regression equation?
Both are correct, but if you use the i 's they should preferably be subscripts:
$Y_i = .432 + .320Age_i + .520WE_i + .300JP1_i + .210JP2_i$
If you don't use the i's then the equation is about vectors. |
52,646 | Unequal variances t-test or U Mann-Whitney test? | The Mann-Whitney doesn't require equal variances unless you're specifically looking for location-shift alternatives.
In particular, it is able to test whether values in the first group tend to be larger than the values in the second group, which is quite a general alternative that sounds like it's related to your original question.
Not only can the Mann-Whitney deal with transformed-location shifts very well (e.g. a scale-shift is a location-shift in the logs), it has power against any alternative that makes $P(X>Y)$ differ from $\frac{1}{2}$.
The Mann-Whitney U-statistic counts the number of times a value in one sample exceeds a value in the other. That's a scaled estimate of the probability that a random value from one population exceeds the other.
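In Python that looks like this (a small sketch with arbitrary samples; with a recent SciPy the returned statistic is U for the first sample):

    import numpy as np
    from scipy.stats import mannwhitneyu

    rng = np.random.default_rng(8)
    x = rng.normal(loc=0.5, scale=1.0, size=40)
    y = rng.normal(loc=0.0, scale=2.0, size=60)     # unequal variances are fine

    u, p = mannwhitneyu(x, y, alternative="two-sided")
    print(u / (len(x) * len(y)))                    # scaled U: estimate of P(X > Y)
    print(np.mean(x[:, None] > y[None, :]))         # the same quantity computed directly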
There's more detail here.
Also see the discussion here.
As for which is better, well, that really depends on a number of things. If the data are even a little more heavy-tailed than normal, you may be better with the Mann-Whitney, but it depends on the situation - discreteness and skewness can both complicate that situation, and it also depends on the precise alternatives of interest. | Unequal variances t-test or U Mann-Whitney test? | The Mann-Whitney doesn't require equal variances unless you're specifically looking for location-shift alternatives.
In particular, it is able to test whether the probability of values in the first g | Unequal variances t-test or U Mann-Whitney test?
The Mann-Whitney doesn't require equal variances unless you're specifically looking for location-shift alternatives.
In particular, it is able to test whether values in the first group tend to be larger than the values in the second group, which is quite a general alternative that sounds like it's related to your original question.
Not only can the Mann-Whitney deal with transformed-location shifts very well (e.g. a scale-shift is a location-shift in the logs), it has power against any alternative that makes $P(X>Y)$ differ from $\frac{1}{2}$.
The Mann-Whitney U-statistic counts the number of times a value in one sample exceeds a value in the other. That's a scaled estimate of the probability that a random value from one population exceeds the other.
There's more detail here.
Also see the discussion here.
As for which is better, well, that really depends on a number of things. If the data are even a little more heavy-tailed than normal, you may be better with the Mann-Whitney, but it depends on the situation - discreteness and skewness can both complicate that situation, and it also depends on the precise alternatives of interest. | Unequal variances t-test or U Mann-Whitney test?
The Mann-Whitney doesn't require equal variances unless you're specifically looking for location-shift alternatives.
In particular, it is able to test whether the probability of values in the first g |
52,647 | Unequal variances t-test or U Mann-Whitney test? | If the sample sizes are unequal, we should use the unpooled variances t-test. If they're equal, use the pooled. Here's an excerpt from "Understanding and Using Statistics in Psychology" (which I co-authored, with Phil Banyard).
"There are a number of different ways to decide if your variances (or standard deviations) are the same. One of the most common is Levene’s test. If Levene’s test is statistically significant, this means that your variances are different from one another and you should use the unpooled variances t-test, which does not assume homogeneity of variance. If Levene’s test is not statistically significant, you have no evidence that your variances are different. In which case you may be able to use the pooled variances t-test (that’s the normal t-test).
OK, now we’ve got that clear, we tell you the second reason that this is a bit tricky. (Were you still waiting for the second reason that it was tricky?) The problem is that a non-significant result does not mean that the variances are the same. It just means that we haven’t found that the variances are different. And the problem with the tests, such as the Levene’s test, is that they are dependent on the sample size, so when the sample size is small the Levene’s test is not very good at detecting differences in the variances. When the sample size is large, the Levene’s test is very good at detecting differences in the variances.
When do you think that it matters the most that the variances are the same? Yep, that’s right, when the sample size is small. So, when the Levene’s test is good at telling us when the variances are different is precisely when we don’t really care. And when the Levene’s test is not very good is precisely when we do care. It’s a bit like having a solar powered torch – it only works when the sun is shining. (We’ve nearly finished, and if you didn’t quite understand the last part, you are really going to like the next part.)
It turns out that homogeneity of variance doesn’t really matter, when the sample sizes are about equal. So if we have equal (or approximately equal) sample sizes we can ignore the assumption of homogeneity of variance, and use the pooled variances t-test.
When the sample sizes are unequal, homogeneity of variance matters a lot more. Given that we only have tests of homogeneity of variance that can tell us if we definitely have it, not if we definitely don’t have it, we should not rely on these, and if the sample sizes are unequal, we should use the unpooled variances t-test. (If you are really gripped by this, there is an excellent article by Zimmerman (2004), that you should read)."
Here's the Zimmerman ref: http://onlinelibrary.wiley.com/doi/10.1348/000711004849222/full . It might be behind a paywall, but the abstract tells you everything you need to know.
Here's the publisher info on the book this is from: http://www.sagepub.com/books/Book226292?siteId=sage-us&prodTypes=any&q=jeremy+miles&fs=1
Also, "equal variances not assumed" is not the "SPSS version of the test"; it's Welch's t-test. The reference is Welch, B. L. (1938). The significance of the difference between two means when the population variances are unequal. Biometrika 34: 29-35, which is quite a while before SPSS was produced.
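A minimal sketch of the choice discussed above, using scipy on made-up data (the group sizes and SDs are assumptions for illustration): `equal_var=True` is the pooled t-test and `equal_var=False` is Welch's test.

```python
import numpy as np
from scipy.stats import ttest_ind, levene

rng = np.random.default_rng(1)
a = rng.normal(10, 11, size=25)   # hypothetical small group with a large SD
b = rng.normal(14, 3, size=60)    # hypothetical larger group with a small SD

print(levene(a, b))                       # Levene's test of equal variances (see caveats above)
print(ttest_ind(a, b, equal_var=True))    # pooled-variances ("normal") t-test
print(ttest_ind(a, b, equal_var=False))   # Welch's unpooled-variances t-test
```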
52,648 | Unequal variances t-test or U Mann-Whitney test? | You haven't said exactly what you are measuring. If it is count data then this paper is very helpful. I am curious about the nature of your measurements, since if you have a mean of 10 and a SD of 11, a negative number for a value one standard deviation below the mean may or may not be meaningful.
Review: analysis of parasite and other skewed counts
Neal Alexander
DOI: 10.1111/j.1365-3156.2012.02987.x
It is free at:
http://onlinelibrary.wiley.com/doi/10.1111/j.1365-3156.2012.02987.x/abstract;jsessionid=75360AF6882780119FBE6D905DFECFF7.f01t04
52,649 | Unequal variances t-test or U Mann-Whitney test? | Before using any test, see if a log transform will make the variances more similar, and if so then apply the test to the transformed values. Any conclusion that you might draw from log-transformed data will be equally applicable to the raw untransformed data. See my answer to this question and the comments elicited for more thoughts: Comparing smoke and hormones level in two groups of people. Which test?
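A small sketch of that suggestion, assuming positive, right-skewed data (the lognormal samples below are purely illustrative): the two groups differ by a scale shift, so their variances are far apart on the raw scale but similar after taking logs.

```python
import numpy as np
from scipy.stats import levene, ttest_ind

rng = np.random.default_rng(2)
g1 = rng.lognormal(mean=1.0, sigma=0.6, size=30)   # hypothetical group 1
g2 = rng.lognormal(mean=2.0, sigma=0.6, size=30)   # hypothetical group 2 (scale-shifted)

print(np.var(g1), np.var(g2))              # very different on the raw scale
print(levene(np.log(g1), np.log(g2)))      # much more similar after the log
print(ttest_ind(np.log(g1), np.log(g2)))   # test applied to the transformed values
```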
52,650 | statsmodels: error in kde on a list of repeated values | I've never used the Python statsmodels package, and I'm not familiar with it, but based upon the error messages, I think I have a pretty good guess as to what is probably going on, and it's not a bug--the problem is with your input. According to Wikipedia, a key step in Kernel Density Estimation is bandwidth estimation. As described in the link, for Gaussian basis functions (which, based upon your error message referencing gaussian_kde(), appears to be what you are in fact using), one common choice for estimating the bandwidth, in the special case of one dimensional KDE, requires the sample standard deviation $\hat{\sigma}$ as an input.
You mention that the error arises, specifically, in cases where your input is a list of repeated instances of the same number. Imagine calculating the sample standard deviation for a list of identical numbers: what do you get? Well, essentially what you are simulating in that case is a Dirac delta function, so effectively your sample standard deviation is $\hat{\sigma} = 0$. Based upon the fact that the next error down in the stack trace, below the exception caught by gaussian_kde(), occurs within a method called set_bandwidth(), I would say that what appears to be happening is that you are feeding the code a distribution whose standard deviation is zero, and the code is attempting to use this value to calculate an initial guess for the KDE bandwidth parameter, and it's choking because zero isn't really a valid value.
"O.K.", you reply, "but your explanation doesn't mention anything about linear algebra or singular matrices--why does the error manifest itself within a linear algebra routine, specifically?" Good question. I'm not absolutely certain, but here's what I suspect is happening. The concept of standard deviation, or its square, the variance, is really an inherently one dimensional concept. The more general concept, valid for multi-variate distributions, is the covariance matrix. The code that you are using is likely designed to be as general as possible, in order to be able to handle a case where the user feeds it a multivariate distribution. In fact, you'll notice as you work your way further down the stack trace, that the next method down, below set_bandwidth(), is one called compute_covariance(). If you know much about covariance matrices, it turns out that a popular way of analyzing and thinking about them is to reduce them to what are known as principal components. The effect of principal component analysis, after it is performed, is to diagonalize the initial covariance matrix, creating an effectively equivalent new matrix which has been transformed in such a way that it consists exclusively of a set of one-dimensional variances lined up only along the diagonal. These variances may be identified as the eigenvalues of the original, non-diagonal matrix, and it also turns out that, in linear algebra, one of the properties of matrices is that those which are related by having identical eigenvalues also inevitably have identical determinants.
So, what I suspect is happening in your case is that, by giving the code repeated instances of the same values as input, you are creating a covariance matrix which has at least one eigenvalue equal to zero, and this condition means that the determinant is zero also, since in the special case of a diagonalized matrix, the determinant will be simply the product of all the values along the diagonal. So, what do we call it when a matrix has a determinant of zero? According to the definition of an invertible matrix, "A square matrix that is not invertible is called singular or degenerate. A square matrix is singular if and only if its determinant is 0." And that's why you are getting the error at the bottom of the stack trace--at some point, the code needs to invert the covariance matrix for whatever reason (you can see that the final method at the bottom of the trace is called _data_inv_cov()) but it can't do it because the matrix is singular and therefore non-invertible.
Bottom line, what's effectively happening is that, by giving the code repeated instances of the same number to use as input, you are basically generating the linear algebra equivalent of a divide-by-zero error.
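A short sketch that reproduces the situation described above with scipy's gaussian_kde (the fifty repeated values are an arbitrary example): the sample standard deviation is zero, so building the bandwidth and inverting the data covariance fails with a linear-algebra error about a singular matrix.

```python
import numpy as np
from scipy.stats import gaussian_kde

data = np.full(50, 3.7)     # fifty repeats of the same value: a delta-like "distribution"
print(np.std(data))         # 0.0, so there is no spread for the bandwidth rule to work with

try:
    gaussian_kde(data)      # needs an invertible data covariance
except Exception as err:    # typically a LinAlgError complaining about a singular matrix
    print(type(err).__name__, err)
```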
52,651 | statsmodels: error in kde on a list of repeated values | Float32 only has about 1e-6 precision in numpy, therefore, if you are manipulating small numbers, similar instances could become identical (or very close), thereby producing singular or badly scaled matrices. This issue is particularly tricky as there is no algebraic reason for the desired inversion not to be possible. Two easy ways to solve this issue are to scale your dataset by a power of 10 (e.g. 10e6) or to use float64 (double precision), which has about 1e-15 precision in numpy.
dataset.astype('float64') is an easy way to go about the change.
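A quick illustration with numpy (the example values are arbitrary): np.finfo shows the relative precision of each dtype, and the astype call is the conversion suggested above; building the array in float64 from the start avoids losing the distinctions in the first place.

```python
import numpy as np

print(np.finfo(np.float32).eps)   # about 1.2e-07 relative precision
print(np.finfo(np.float64).eps)   # about 2.2e-16 relative precision

dataset = np.array([1.000001, 1.000002, 1.000003], dtype=np.float32)
print(np.diff(dataset))                  # spacings barely above float32 precision
dataset64 = dataset.astype('float64')    # safer for downstream matrix inversions
```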
52,652 | What is the purpose of precision variables? | A regression coefficient is often characterized as a partial coefficient, which means it shows the effect of a particular variable X on the outcome variable Y after the effects of the other variables, Z, are controlled.
What happens when you omit Z and leave only X? Does the coefficient for X change?
If variables Z and X are orthogonal, which rarely happens outside experimental data, these coefficients will not change; but in other situations you cannot say that the coefficient for X measures just the effect of variation in X on Y.
In econometrics the omission of important variables is called omitted variable bias, and it means that the marginal effect of X on Y will no longer be estimated without bias.
The goal of modeling is to find all relevant variables and to check that the residual variation behaves well.
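A small simulation of the omitted variable bias described above (all numbers are made up): the coefficient on X is roughly 1.0 when Z is included and clearly biased when Z is dropped, because X is correlated with Z.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500
z = rng.normal(size=n)
x = 0.8 * z + rng.normal(size=n)            # X correlated with Z
y = 1.0 * x + 2.0 * z + rng.normal(size=n)  # true effect of X on Y is 1.0

X_full  = np.column_stack([np.ones(n), x, z])
X_short = np.column_stack([np.ones(n), x])
b_full,  *_ = np.linalg.lstsq(X_full,  y, rcond=None)
b_short, *_ = np.linalg.lstsq(X_short, y, rcond=None)
print(b_full[1])    # close to 1.0
print(b_short[1])   # biased upward: x is partly standing in for the omitted z
```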
52,653 | What is the purpose of precision variables? | Here is my contribution to Anscombe's Quartet.
The image below shows a regression line on some imaginary data. The estimated regression line is $\hat{y}_{i} = 3 + .5x$, the p-value for the t-test of $H_{0}: \beta=0$ equals 0.002, and the $R^{2}$ for my regression model is 0.67 (just as for the four graphs in Anscombe, 1973).
This regression/regression line says that as $x$ increases by two units, $y$ increases by one unit. But there's something wrong: I have omitted another variable that also explains $y$. This second variable is dichotomous, and indicates which group one is in (the above analysis combined data from both groups). We can model it as $y_{i} = \beta_{0} + \beta_{x}x_{i} + \beta_{z}z_{i} + \varepsilon_{i}$, where $\varepsilon\sim \mathcal{N}\left(0,\sigma_{\varepsilon}\right)$.
Now we see that, as $x$ increases, $y$ actually decreases! This means our first model was not only incorrect, but badly so, because no matter whether one is purple or orange, $y$ has an inverse relation to $x$. In the first model we violated the assumption of no missing variables, and this is why it is important to include covariates (what you label "precision variables") in our model.
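A simulation in the same spirit (the group offsets and slope are invented for illustration): the pooled slope is positive, but once the dichotomous group variable is included the slope on $x$ comes out negative, as in the figures described above.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200
group = rng.integers(0, 2, size=n)               # 0 = "purple", 1 = "orange"
x = rng.normal(size=n) + 4 * group               # groups sit at different x levels
y = -1.0 * x + 8 * group + rng.normal(size=n)    # within each group, y falls with x

pooled_slope = np.polyfit(x, y, 1)[0]
X = np.column_stack([np.ones(n), x, group])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(pooled_slope)   # positive: the naive, group-blind model
print(beta[1])        # close to -1 once group is controlled for
```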
52,654 | What is the purpose of precision variables? | Precision variables help to reduce the standard errors and hence shrink the confidence intervals on the coefficients that you are interested in, which makes it easier to find significant effects of the variables you are interested in.
52,655 | Regression for really small data with high degree of multicollinearity and outliers | My advice is "don't try to do this".
25 observations with 15 variables is very overfit, even if it satisfies all the assumptions of linear regression. Collinearity will mess up your standard errors and make the output highly sensitive to small changes in input. Outliers may well be influential points (although they may not be).
If you want, you can run several simple regressions, with just one IV in each.
But if you need to run some model with all these IVs, you need a lot more data. Perhaps 10 times as much, maybe more. Then you might try regression trees. But don't use them on such a small data set, as anything they yield will also be overfit (although many tree methods will simply return no model with this sort of data).
52,656 | Regression for really small data with high degree of multicollinearity and outliers | To begin in terms of an analytical framework...I would say you have 15 independent variables to choose from. You don't have 15 independent variables you have to include in your model. Given that, I have a couple of ideas. Hopefully one of them will be helpful. Before trying any of the following ideas, I would scrutinize the outliers. I would not hesitate to throw out a few of those (or use a dummy variable for the respective time period).
The first idea is to try what I would call manual stepwise regression. You first do a simple linear regression with the one independent variable that has the highest absolute correlation with your dependent variable. Next, you calculate the residual of this regression. And you look for the independent variable among the remaining ones that has the highest correlation with the residual of your regression. Next, you rerun your regression with those two independent variables. You can add a 3rd or 4th variable, repeating this process until you can tell that none of the remaining variables are correlated enough to the residual of your last regression. Typically, after selecting 3 or 4 independent variables this way, you are done. The model typically breaks apart when you add any more than that. This process is typically robust, and usually does not result in "overfit" models because it selects few variables.
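Here is a rough numpy sketch of that manual procedure (the function name, thresholds and stopping rule are my own assumptions, not part of the answer); the usual cautions about stepwise selection still apply.

```python
import numpy as np

def manual_stepwise(X, y, max_vars=4, min_abs_corr=0.2):
    """Forward selection on residual correlations, as described above (hypothetical helper)."""
    selected = []
    beta = np.array([y.mean()])
    resid = y - y.mean()
    for _ in range(max_vars):
        corrs = [0.0 if j in selected else abs(np.corrcoef(X[:, j], resid)[0, 1])
                 for j in range(X.shape[1])]
        j = int(np.argmax(corrs))
        if corrs[j] < min_abs_corr:            # nothing left that tracks the residual
            break
        selected.append(j)
        Xs = np.column_stack([np.ones(len(y))] + [X[:, k] for k in selected])
        beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)   # refit with the chosen variables
        resid = y - Xs @ beta
    return selected, beta
```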
My second idea is to try principal component analysis (PCA) which is suited to dealing with multicollinear variables. For it to work, you may have to reduce your number of variables anyway. After doing the stepwise regression exploration, it will become clear what variables are superfluous. The challenge of PCA is that it is rather difficult. Unless you have the appropriate statistical software, it is rather inaccessible. It also often creates a bit of a black box. The principal components are essentially indices of independent variable combinations. Sometimes those combinations of variables may have an explanatory narrative (the S&P 500 is a pretty good index capturing the performance of 500 different stocks). Unfortunately, most principal components do not have as clear an interpretation as the S&P 500.
In any case, I hope those ideas help.
52,657 | Regression for really small data with high degree of multicollinearity and outliers | If this is a survey analysis, then you might want to try something called "structural equation modeling" (SEM). It's an entire field of analysis in its own right, but you can get it working quickly with software like Stata.
The fact that you say your data is collinear is exactly the problem that SEM deals with.
52,658 | How can I interpret this scatterplot? | Assuming the dots are your observations, the vectors seem to express upper and lower bounds on the range of $y$ as a function of $x$. The correlation appears weakly negative at a glance, but I wouldn't really trust my eyeballs to estimate Pearson's $r$. What does seem clear enough is that $y$ exhibits heteroscedasticity across $x$. Wikipedia's Consequences section may be of further interest to you. See also for comparison
Wikipedia's plot of random data showing heteroscedasticity (image by Q9; not reproduced here).
Like most distributional characteristics, heteroscedasticity is a matter of degree, and few real datasets are truly, absolutely homoscedastic. Despite my own behavior here, you may not want to "eyeball" a scatterplot for heteroscedasticity any more than for a correlation. Many tests of homoscedasticity exist, including Levene's test. Fixes are available, but the best may be robust methods that tolerate heteroscedasticity. This is not to say heteroscedasticity is only a data-analytic nuisance – it may also be of focal theoretical interest. An example from Wikipedia describes wealth and diet variability:
A classic example of heteroscedasticity is that of income versus expenditure on meals...A poorer person will spend a rather constant amount by always eating inexpensive food; a wealthier person may occasionally buy inexpensive food and at other times eat expensive meals. Those with higher incomes display a greater variability of food consumption.
Interested readers may wish to check the tag wiki for heteroscedasticity; I've edited in more info like this.
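A tiny simulation of the pattern being described (the numbers are invented): the noise standard deviation grows with $x$, and the residual spread in the upper half of $x$ is clearly larger than in the lower half.

```python
import numpy as np
from scipy.stats import levene

rng = np.random.default_rng(5)
x = rng.uniform(0, 10, size=200)
y = 2 + 0.5 * x + rng.normal(scale=0.3 * x)   # noise scale grows with x

slope, intercept = np.polyfit(x, y, 1)
resid = y - (intercept + slope * x)
low, high = resid[x < 5], resid[x >= 5]
print(np.std(low), np.std(high))   # residual spread increases with x
print(levene(low, high))           # a small p-value flags the unequal spread
```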
52,659 | Can nonstationarity be told from the autocorrelation function? | It is possible to get a general formula for stationary ARMA(p,q) autocovariance function. Suppose $X_t$ is a (zero mean) stationary solution of an ARMA(p,q) equation:
$$\phi(B)X_t=\theta(B)Z_t$$
Multiply this equation by $X_{t-h}$, $h>q$, take expectations and you will get
$$r(h)-\phi_1r(h-1)-...-\phi_pr(h-p)=0$$
This is a recursive equation, which has a general solution. If all the roots $\lambda_i$ of polynomial $\phi(z)=1-\phi_1z-...-\phi_pz^p$ are different,
$$r(h)=\sum_{i=1}^pC_i\lambda_i^{-h}$$
where $C_i$ are constants which can be derived from the initial conditions. Since $|\lambda_i|>1$ is required to ensure stationarity, it is very clear why the autocorrelation function (which is the autocovariance function scaled by a constant) decays rapidly (if the $\lambda_i$ are not close to one).
I've covered the case of unique real roots of the polynomial $\phi(z)$, all other cases are covered in general theory, but formulas are a bit messier. Nevertheless the terms $\lambda^{-h}$ remain.
Answers to questions 2 and 3 more or less follow from this formula. For an $AR(1)$ process $r(h)=c\phi_1^h$, and when $\phi_1$ is close to one, i.e. close to non-stationarity, you get the behaviour you describe. The same goes for the general formula: if the process is nearly unit-root, one of the roots $\lambda_i$ is close to 1 and it dominates the other terms, producing the slow decay.
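A quick numerical check of that last point (the AR(1) coefficients and sample size are arbitrary): with $\phi_1=0.5$ the sample ACF dies out after a few lags, while with $\phi_1=0.98$, nearly unit-root, it decays very slowly.

```python
import numpy as np
from statsmodels.tsa.stattools import acf

rng = np.random.default_rng(6)

def ar1(phi, n=2000):
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.standard_normal()
    return x

print(np.round(acf(ar1(0.50), nlags=10), 2))   # roughly 0.5**h: rapid decay
print(np.round(acf(ar1(0.98), nlags=10), 2))   # near unit root: very slow decay
```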
52,660 | Can nonstationarity be told from the autocorrelation function? | (1) A stationary ARMA process has a "rapidly" decaying acf because the acf shows the extent to which previous observations predict the current observation. (Strictly, it is the partial ACF that cuts off - one significant lag for an AR(1), two for an AR(2), and so on - while the ACF itself tails off.)
Strictly speaking, an ACF cannot tell you whether the time series is non-stationary. You can no more discern whether or not a series is non-stationary by looking at the ACF than you can by just graphing the series. That is to say, it's just sort of obvious. GDP, for example, obviously exhibits a trend...but you should still use tests - ADF, KPSS, etc. - to properly diagnose the non-stationarity. The slow decaying ACF works the same way. It is not a proper diagnosis per-se, but it is a pretty obvious clue.
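For completeness, a minimal sketch of those tests with statsmodels on simulated series (white noise versus a random walk; the sample size and seed are arbitrary):

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller, kpss

rng = np.random.default_rng(7)
stationary = rng.standard_normal(500)
random_walk = np.cumsum(rng.standard_normal(500))

for name, series in [("white noise", stationary), ("random walk", random_walk)]:
    adf_p = adfuller(series)[1]                              # H0: unit root (non-stationary)
    kpss_p = kpss(series, regression="c", nlags="auto")[1]   # H0: level-stationary
    print(name, round(adf_p, 3), round(kpss_p, 3))
```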
(2) Yes, a stationary ARMA process should have a rapidly decaying ACF. How rapidly depends on the process. Do not assume the series is non-stationary just because the ACF hasn't decayed after one or two lags. You may also have a fractionally integrated series - something you should also test for.
(3) The ACF looks very slow because the effects of previous innovations on a stationary series die out over time. This is why the AR / MA coefficients of less than one are important. You can think of these coefficients as "discount factors" for the influence that previous innovations are having on the current period. They decay because the structure of the series is such that last period matters the most, two periods ago matters a little less, three periods ago matters less than two periods but is still significant, until you go far enough back that innovations from several periods ago no longer influence current observations. In a non-stationary series, there is no decay.
(4) Slowly decaying is answered above. Seasonality doesn't necessarily imply non-stationarity. It just means that there is some periodicity in the observations that you must take into account. If you have a monthly consumer spending time series, you might see some seasonality. Every twelve months there is a huge spike in consumer spending around the holidays. That periodicity needs to be taken into account when modeling the time series.
52,661 | Can nonstationarity be told from the autocorrelation function? | As @Clayton pointed out, the sample autocorrelation function cannot tell you whether a series is non-stationary.
The problem is that there are various levels of non-stationarity; deterministic or stochastic trends are just one way for a time series to be non-stationary.
Non-constant variance can also be a problem even when the expected value is independent of time, since confidence intervals based on simple sample statistics are then invalid.
52,662 | Cholesky update for removing row | The usual approach involves interchanging the position of the column you want removed and the last one, and returning the Cholesky to triangular form via either Givens rotations or Householder transforms, whereupon you can simply drop the last one.
Similarly you can update an additional column into any position by adding to the end and then swapping into the desired position followed by the Givens/Householder transformations to fix it back up to triangular form.
When you do the swap of columns, you destroy the triangular form you need it to have. Each of the Givens rotations (or Householder transforms) then fixes up one of the values that are 'sticking out', until they are all fixed. This is quite a lot faster than recomputing from scratch.
I'll talk about rotations (but the Householder transforms do the same job, so much of the discussion carries over directly). For each non-zero value that's 'sticking out', you apply a Givens rotation (a 2x2 matrix of $\sin\theta$ and $\cos\theta$ values representing a rotation through $\theta$), so that it 'zeroes' that one value. It changes other values, but retains the $A=RR'$ property. By starting with the value that's sticking out the most and working in toward the diagonal of the triangle, you end up not losing the values you choose to zero out - that is, if you do things in the right order, the zeros you make at one step are still zeros the next step.
The Householder approach is a bit more efficient.
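Here is a small numpy sketch of the Givens-rotation delete being described, for the convention $A=R^\top R$ with $R$ upper triangular (the function and test matrix are my own illustration, not LINPACK's SCHEX): dropping column $k$ of $R$ leaves a subdiagonal "spike", and a sequence of rotations working toward the diagonal restores triangular form so the last row can be dropped.

```python
import numpy as np

def chol_delete(R, k):
    """Given upper-triangular R with A = R.T @ R, return the factor of A with
    row/column k removed, using Givens rotations (hypothetical helper)."""
    R = np.delete(R, k, axis=1)            # drop column k; leaves entries "sticking out"
    m = R.shape[1]
    for i in range(k, m):
        a, b = R[i, i], R[i + 1, i]        # keep a, zero out b
        r = np.hypot(a, b)
        c, s = a / r, b / r
        G = np.array([[c, s], [-s, c]])    # 2x2 rotation acting on rows i and i+1
        R[i:i + 2, i:] = G @ R[i:i + 2, i:]
    return R[:-1, :]                       # last row is now (numerically) zero

A = np.array([[4.0, 2.0, 1.0], [2.0, 3.0, 0.5], [1.0, 0.5, 2.0]])  # made-up SPD matrix
R = np.linalg.cholesky(A).T                # upper-triangular factor, A = R.T @ R
R2 = chol_delete(R, 1)
A2 = np.delete(np.delete(A, 1, 0), 1, 1)
print(np.allclose(R2.T @ R2, A2))          # expect True
```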
References. Hmm.
Matrix Computations (Golub and Van Loan) has a section on updating calculations - 12.5.2 "Adding or deleting a column" - it mostly talks about QR but much of the approaches and ideas carry over.
The documentation of the LINPACK routine SCHEX (*) and some of the other routines was actually where I first saw (and eventually understood, many, many years ago now) how to do it (if you have access to a library an old users guide may be available). I found its explanation relatively straightforward at the time.
* that function simply permutes the order of the variables in a Cholesky; this uses (I think) Givens rotations, which is relatively easy to find information on.
In fact, the documentation right in the code you can see online** is fairly informative. I am not sure the documentation said very much more than that. The description is clearly using Givens rotations.
** I mean the block of comments at the top
I'm sure I have something else on it somewhere; I'll have to see what other references I have.
52,663 | Is it possible to train a one-class SVM to have zero training error? | As @Joe already mentioned, $\nu$:
sets an upper bound on the fraction of outliers (training examples regarded out-of-class) and,
serves as a lower bound on the fraction of training examples used as support vectors.
Mathematically, the quadratic programming minimization problem (in the standard Schölkopf et al. one-class formulation) is:
$$\min_{w,\,\xi,\,\rho}\ \frac{1}{2}\|w\|^2 + \frac{1}{\nu n}\sum_{i=1}^{n}\xi_i - \rho \quad \text{subject to} \quad \langle w,\Phi(x_i)\rangle \ge \rho-\xi_i,\ \ \xi_i \ge 0$$
So if $\nu$ is too small, the problem becomes a hard-margin algorithm (the second term blows up to infinity). The algorithm will then find the unique supporting hyperplane with the properties that it separates all the data from the origin and its distance to the origin is maximal among all such hyperplanes, which, as you said, gives 100% training accuracy. You can try to set nu to a small value rather than 0. Perhaps the package doesn't allow the -Inf occurrence in the cost function.
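A minimal illustration with scikit-learn's OneClassSVM (the data and the choice nu=0.01 are assumptions for the sketch): nu is kept small but strictly positive, and the fraction of training points flagged as outliers stays roughly below it.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(8)
X = rng.normal(size=(500, 2))       # hypothetical "normal" training data only

clf = OneClassSVM(kernel="rbf", gamma="scale", nu=0.01)   # small but non-zero nu
clf.fit(X)
pred = clf.predict(X)               # +1 = inlier, -1 = flagged as outlier
print((pred == -1).mean())          # roughly bounded above by nu on the training set
```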
52,664 | Is it possible to train a one-class SVM to have zero training error? | Can I achieve zero training error in an SVM?
Yes, but only if the data is separable. The separability of a dataset might depend on the kernel function you're using (e.g., if you're using the dot product, then "separable" = "linearly separable"), but some data sets aren't separable under any kernel function, for example, the following data set in $\mathbb{R}^2$:
positive_examples = [(0,0), (1,1), (2,2)]
negative_examples = [(0,0), (2,1), (3,2)] # (0,0) is in both categories
What do we do if it's not separable?
The "hard margin" SVM seeks to perfectly separate the data with a (hyper)plane (possibly in some wacky space implied by the kernel function) and then maximize the margin (the space on either side of that plane). Maximizing the margin controls the generalization error.
A "soft margin" SVM tries to do the same thing, but it allows for a small amount of misclassification. The tradeoff between minimizing mis-classification and maximizing the margin is controlled by a hyperparameter called $C$. The parameter you mentioned ($\nu$) is a reparameterization of $C$ that's easier to pick and interpret than $C$ is.
Ok, so what is the interpretation of $\nu$?
Quoting from this StackOverflow post on Hard margin vs Soft margin SVM (which I recommend you read):
The parameter nu is an upper bound on the fraction of margin errors and a lower bound of the fraction of support vectors relative to the total number of training examples. For example, if you set it to 0.05 you are guaranteed to find at most 5% of your training examples being misclassified (at the cost of a small margin, though) and at least 5% of your training examples being support vectors.
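Those quoted bounds are easy to check empirically with scikit-learn's NuSVC (the two-blob data below is an arbitrary assumption): the fraction of support vectors comes out at or above nu, and the training error stays small.

```python
import numpy as np
from sklearn.svm import NuSVC

rng = np.random.default_rng(9)
X = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(3, 1, (200, 2))])
y = np.array([0] * 200 + [1] * 200)

clf = NuSVC(nu=0.05, kernel="rbf", gamma="scale").fit(X, y)
print(len(clf.support_) / len(y))        # fraction of support vectors: at least about nu
print((clf.predict(X) != y).mean())      # training error; margin errors are bounded by nu
```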
Why the hell do I want to allow errors?
You say "it is more important for me to make sure I don't label something as an anomaly rather than to miss an actual anomaly." That seems like a great argument, but we're talking about training error not generalization error here. Fitting the training data absolutely perfectly is a recipe for disaster overfitting. (Also, as a side note, it sounds like you might want to be optimizing for precision rather than accuracy, or at least weighting false positives as being worse errors than false negatives.)
Whenever we fit complex models to data, it's important that we understand the tradeoff of model complexity and generalization error. Simple models generalize well from very little data, but they may miss real complexities in the data. Complex models can match complexities in the data, but they also match noise in the data (which makes them generalize poorly). All machine learning models attempt to build in some sort of "regularization" which penalizes some measure of model complexity (which can be traded-off with accuracy, the parameter usually being chosen by cross validation). | Is it possible to train a one-class SVM to have zero training error? | Can I achieve zero training error in an SVM?
Yes, but only if the data is separable. The separability of a dataset might depend on the kernel function you're using (e.g., if you're using the dot produ | Is it possible to train a one-class SVM to have zero training error?
Can I achieve zero training error in an SVM?
Yes, but only if the data is separable. The separability of a dataset might depend on the kernel function you're using (e.g., if you're using the dot product, then "separable" = "linearly separable"), but some data sets aren't separable under any kernel function, for example, the following data set in $\mathbb{R}^2$:
positive_examples = [(0,0), (1,1), (2,2)]
negative_examples = [(0,0), (2,1), (3,2)] # (0,0) is in both categories
What do we do if it's not separable?
The "hard margin" SVM seeks to perfectly separate the data with a (hyper)plane (possibly in some wacky space implied by the kernel function) and then maximize the margin (the space on either side of that plane). Maximizing the margin controls the generalization error.
A "soft margin" SVM tries to do the same thing, but it allows for a small amount of misclassification. The tradeoff between minimizing mis-classification and maximizing the margin is controlled by a hyperparameter called $C$. The parameter you mentioned ($\nu$) is a reparameterization of $C$ that's easier to pick and interpret than $C$ is.
Ok, so what is the interpretation of $\nu$?
Quoting from this StackOverflow post on Hard margin vs Soft margin SVM (which I recommend you read):
The parameter nu is an upper bound on the fraction of margin errors and a lower bound of the fraction of support vectors relative to the total number of training examples. For example, if you set it to 0.05 you are guaranteed to find at most 5% of your training examples being misclassified (at the cost of a small margin, though) and at least 5% of your training examples being support vectors.
Why the hell do I want to allow errors?
You say "it is more important for me to make sure I don't label something as an anomaly rather than to miss an actual anomaly." That seems like a great argument, but we're talking about training error not generalization error here. Fitting the training data absolutely perfectly is a recipe for disaster overfitting. (Also, as a side note, it sounds like you might want to be optimizing for precision rather than accuracy, or at least weighting false positives as being worse errors than false negatives.)
Whenever we fit complex models to data, it's important that we understand the tradeoff of model complexity and generalization error. Simple models generalize well from very little data, but they may miss real complexities in the data. Complex models can match complexities in the data, but they also match noise in the data (which makes them generalize poorly). All machine learning models attempt to build in some sort of "regularization" which penalizes some measure of model complexity (which can be traded-off with accuracy, the parameter usually being chosen by cross validation). | Is it possible to train a one-class SVM to have zero training error?
Can I achieve zero training error in an SVM?
Yes, but only if the data is separable. The separability of a dataset might depend on the kernel function you're using (e.g., if you're using the dot produ |
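As a rough illustration of the $\nu$ guarantees quoted above, here is a minimal R sketch (assuming the e1071 package, which exposes nu for one-class SVMs; the data are made up):
library(e1071)
set.seed(1)
x <- matrix(rnorm(400), ncol = 2)      # 200 "normal" training points
fit <- svm(x, type = "one-classification", nu = 0.05, kernel = "radial")
mean(!predict(fit, x))                 # fraction of training points flagged as outliers, roughly at most 0.05
nrow(fit$SV) / nrow(x)                 # fraction of support vectors, roughly at least 0.05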
52,665 | Non-Stationary Time Series Forecasting | I don't know what "non-stationary limited data" means. So I will assume you mean "non-stationary data".
Exponential smoothing methods including Holt-Winters methods are appropriate for (some kinds of) non-stationary data. In fact, they are only really appropriate if the data are non-stationary. Using an exponential smoothing method on stationary data is not wrong but is sub-optimal.
If by "moving averages", you mean forecasting using a moving average of recent observations, then that is also ok for some kinds of non-stationary data. But it obviously won't work well with trends or seasonality.
If by "moving averages", you mean a moving average model (i.e. a model consisting of a linear combination of past error terms), then you do need a stationary time series. | Non-Stationary Time Series Forecasting | I don't know what "non-stationary limited data" means. So I will assume you mean "non-stationary data".
Exponential smoothing methods including Holt-Winters methods are appropriate for (some kinds of) | Non-Stationary Time Series Forecasting
I don't know what "non-stationary limited data" means. So I will assume you mean "non-stationary data".
Exponential smoothing methods including Holt-Winters methods are appropriate for (some kinds of) non-stationary data. In fact, they are only really appropriate if the data are non-stationary. Using an exponential smoothing method on stationary data is not wrong but is sub-optimal.
If by "moving averages", you mean forecasting using a moving average of recent observations, then that is also ok for some kinds of non-stationary data. But it obviously won't work well with trends or seasonality.
If by "moving averages", you mean a moving average model (i.e. a model consisting of a linear combination of past error terms), then you do need a stationary time series. | Non-Stationary Time Series Forecasting
I don't know what "non-stationary limited data" means. So I will assume you mean "non-stationary data".
Exponential smoothing methods including Holt-Winters methods are appropriate for (some kinds of) |
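A minimal R sketch of the first point, assuming the forecast package: exponential smoothing is applied directly to a clearly non-stationary (trending, seasonal) series, with no differencing.
library(forecast)
fit <- ets(AirPassengers)      # exponential smoothing state space model, handles trend and seasonality
fc  <- forecast(fit, h = 12)   # 12-step-ahead forecasts
plot(fc)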
52,666 | Non-Stationary Time Series Forecasting | There is no problem with forecasting nonstationary data directly. Take the random walk
$$Y_t=Y_{t-1}+\epsilon_t$$
The best predictor of $Y_{t+1}$ given the information set $\{Y_t,Y_{t-1},...\}$ simply is
$$E(Y_{t+1}|Y_t)=E(Y_t|Y_{t})+E(\epsilon_{t+1}|Y_t)=Y_t$$ | Non-Stationary Time Series Forecasting | There is no problem with forecasting nonstationary data directly. Take the random walk
$$Y_t=Y_{t-1}+\epsilon_t$$
The best predictor of $Y_{t+1}$ given the information set $\{Y_t,Y_{t-1},...\}$ simply | Non-Stationary Time Series Forecasting
There is no problem with forecasting nonstationary data directly. Take the random walk
$$Y_t=Y_{t-1}+\epsilon_t$$
The best predictor of $Y_{t+1}$ given the information set $\{Y_t,Y_{t-1},...\}$ simply is
$$E(Y_{t+1}|Y_t)=E(Y_t|Y_{t})+E(\epsilon_{t+1}|Y_t)=Y_t$$ | Non-Stationary Time Series Forecasting
There is no problem with forecasting nonstationary data directly. Take the random walk
$$Y_t=Y_{t-1}+\epsilon_t$$
The best predictor of $Y_{t+1}$ given the information set $\{Y_t,Y_{t-1},...\}$ simply |
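A quick R check of this, assuming the forecast package: the random-walk (naive) forecast of a simulated non-stationary series is just its last observed value.
library(forecast)
set.seed(42)
y  <- ts(cumsum(rnorm(200)))   # simulated random walk
fc <- rwf(y, h = 5)            # random-walk forecast
fc$mean                        # all five point forecasts equal tail(y, 1)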
52,667 | Non-Stationary Time Series Forecasting | Stationarity refers to uniformity in the properties of the data. If you know that the data is non-stationary, it means that the useful properties of the data cannot be assumed to be the same for the entire series. Under such an assumption, why do you want to apply the same filter or model to the entire series ?
My suggestion is to look for properties that stay the same for a stretch of data, then change, but again stay the same for another stretch. Then look for a criterion to transition between the two different stretches of data.
Alternatively, search for locally stationary series.
Also if smoothing is what you want, then I would suggest some non-parametric smoothing methods like kernel smoothing.
Edit after first comment : if you know the precise form of non-stationarity, or can approximate a functional form to the series, then use the properties of the form for your prediction. | Non-Stationary Time Series Forecasting | Stationarity refers to uniformity in the properties of the data. If you know that the data is non-stationary, it means that the useful properties of the data cannot be assumed to be the same for the e | Non-Stationary Time Series Forecasting
Stationarity refers to uniformity in the properties of the data. If you know that the data is non-stationary, it means that the useful properties of the data cannot be assumed to be the same for the entire series. Under such an assumption, why do you want to apply the same filter or model to the entire series ?
My suggestion is to look for properties that stay the same for a stretch of data, then change, but again stay the same for another stretch. Then look for a criterion to transition between the two different stretches of data.
Alternatively, search for locally stationary series.
Also if smoothing is what you want, then I would suggest some non-parametric smoothing methods like kernel smoothing.
Edit after first comment : if you know the precise form of non-stationarity, or can approximate a functional form to the series, then use the properties of the form for your prediction. | Non-Stationary Time Series Forecasting
Stationarity refers to uniformity in the properties of the data. If you know that the data is non-stationary, it means that the useful properties of the data cannot be assumed to be the same for the e |
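For the kernel smoothing suggestion, a base-R sketch on a toy series whose level drifts over time:
set.seed(1)
x  <- 1:300
y  <- sin(x / 25) + rnorm(300, sd = 0.3)        # locally varying signal plus noise
sm <- ksmooth(x, y, kernel = "normal", bandwidth = 20)
plot(x, y, col = "grey"); lines(sm, lwd = 2)    # smooth local-level estimate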
52,668 | Updating variance of a dataset [duplicate] | In the absence of information to the contrary I assume you want univariate calculations and you want the $n-1$-denominator version of variance.
One useful way is to update the sums of squares of deviations from the mean, which I'll call SSE.
Let's say that at time $t$ you have $\bar x$, $\text{SSE}$ and $s^2$, and an observation, $x_{t+1}$ and you want to have those three computed quantities at time $t+1$.
Here's how you can update them:
$t=t+1\\
e_t = x_t-\bar x\quad\text{ # note that the } \bar x \text{ term there is the previous mean} \\
\bar x = \bar x + e_t/t \\
\text{SSE} = \text{SSE} + e_t\cdot (x_t-\bar x)\quad\text{ # note that the term there is NOT } e_t^2\\
s^2 = SSE/(t-1)
$
This calculation is much more stable than the raw calculation of $\sum_i^t x_i^2$ version (which you'll find in many old books, and which is not too bad when working by hand, where you can see when you're losing precision). It is not the most stable possible calculation, but is sufficient for almost all purposes. If stability does become an issue for you there are some other things that can be done.
As @January rightly points out, I've given an on-line algorithm and you seem to only want the values at the end. In that case, the above algorithm can be sped up a little, for example, by not calculating $s^2$ each iteration.
Alternatively, the algorithm suggested by @January below can be made much more stable by subtracting an estimate of the mean from each observation before taking the sums and sums of squares. Algebraically it doesn't change the mean, but it makes it substantially more stable. It doesn't need a very good estimate to make a big difference; often the first observation is sufficient (since usually the situations where, say, the first observation is a very bad estimate of the mean are the situations where it really doesn't matter).
See also https://en.wikipedia.org/wiki/Algorithms_for_calculating_variance | Updating variance of a dataset [duplicate] | In the absence of information to the contrary I assume you want univariate calculations and you want the $n-1$-denominator version of variance.
One useful way is to update the sums of squares of devia | Updating variance of a dataset [duplicate]
In the absence of information to the contrary I assume you want univariate calculations and you want the $n-1$-denominator version of variance.
One useful way is to update the sums of squares of deviations from the mean, which I'll call SSE.
Let's say that at time $t$ you have $\bar x$, $\text{SSE}$ and $s^2$, and an observation, $x_{t+1}$ and you want to have those three computed quantities at time $t+1$.
Here's how you can update them:
$t=t+1\\
e_t = x_t-\bar x\quad\text{ # note that the } \bar x \text{ term there is the previous mean} \\
\bar x = \bar x + e_t/t \\
\text{SSE} = \text{SSE} + e_t\cdot (x_t-\bar x)\quad\text{ # note that the term there is NOT } e_t^2\\
s^2 = SSE/(t-1)
$
This calculation is much more stable than the raw calculation of $\sum_i^t x_i^2$ version (which you'll find in many old books, and which is not too bad when working by hand, where you can see when you're losing precision). It is not the most stable possible calculation, but is sufficient for almost all purposes. If stability does become an issue for you there are some other things that can be done.
As @January rightly points out, I've given an on-line algorithm and you seem to only want the values at the end. In that case, the above algorithm can be sped up a little, for example, by not calculating $s^2$ each iteration.
Alternatively, the algorithm suggested by @January below can be made much more stable by subtracting an estimate of the mean from each observation before taking the sums and sums of squares. Algebraically it doesn't change the mean, but it makes it substantially more stable. It doesn't need a very good estimate to make a big difference; often the first observation is sufficient (since usually the situations where, say, the first observation is a very bad estimate of the mean are the situations where it really doesn't matter).
See also https://en.wikipedia.org/wiki/Algorithms_for_calculating_variance | Updating variance of a dataset [duplicate]
In the absence of information to the contrary I assume you want univariate calculations and you want the $n-1$-denominator version of variance.
One useful way is to update the sums of squares of devia |
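A direct R translation of the update recursions above (a sketch; the variable and function names are mine):
update_stats <- function(state, x_new) {
  t   <- state$t + 1
  e   <- x_new - state$mean            # deviation from the previous mean
  m   <- state$mean + e / t
  sse <- state$sse + e * (x_new - m)   # e times (x minus the *new* mean), not e^2
  list(t = t, mean = m, sse = sse, var = if (t > 1) sse / (t - 1) else NA)
}
state <- list(t = 0, mean = 0, sse = 0)
set.seed(1); x <- rnorm(1000)
for (xi in x) state <- update_stats(state, xi)
c(state$mean, state$var) - c(mean(x), var(x))   # both differences are essentially zero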
52,669 | Updating variance of a dataset [duplicate] | That is not really a question for CrossValidated, but here it comes: you only need to store three values:
current number of values (observations),
current sum of squares of the values
current sum of the values.
That's because variance can be expressed in terms of $\sum x_i^2$ and $(\sum x_i)^2$.
$var(X) = \frac{\sum (\bar{x} - x_i)^2}{n}$
and
$\sum( \bar{x} - x_i)^2 = \sum \bar{x}^2 - 2\cdot \bar{x} \sum x_i + \sum x_i^2 = $
$= n\cdot\bar{x}^2 - 2\cdot \bar{x} \cdot n\cdot \bar{x} + \sum x_i^2 =$
$=\sum x_i^2 - n\cdot\bar{x}^2 $
The main disadvantage of this method is that the sum of squares tends to grow very fast and might be problematic to store; Glenn's variant (in the other answer) is better in this respect, but computationally (slightly) more intensive. | Updating variance of a dataset [duplicate] | That is not really a question for CrossValidated, but here it comes: you only need to store three values:
current number of variables,
current sum of squares of the variables
current sum of the var | Updating variance of a dataset [duplicate]
That is not really a question for CrossValidated, but here it comes: you only need to store three values:
current number of values (observations),
current sum of squares of the values
current sum of the values.
That's because variance can be expressed in terms of $\sum x_i^2$ and $(\sum x_i)^2$.
$var(X) = \frac{\sum (\bar{x} - x_i)^2}{n}$
and
$\sum( \bar{x} - x_i)^2 = \sum \bar{x}^2 - 2\cdot \bar{x} \sum x_i + \sum x_i^2 = $
$= n\cdot\bar{x}^2 - 2\cdot \bar{x} \cdot n\cdot \bar{x} + \sum x_i^2 =$
$=\sum x_i^2 - n\cdot\bar{x}^2 $
The main disadvantage of this method is that the sum of squares tends to grow very fast and might be problematic to store; Glenn's variant (in the other answer) is better in this respect, but computationally (slightly) more intensive. | Updating variance of a dataset [duplicate]
That is not really a question for CrossValidated, but here it comes: you only need to store three values:
current number of variables,
current sum of squares of the variables
current sum of the var |
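The same idea in a couple of lines of R (population, i.e. n-denominator, variance; the function name is illustrative):
var_from_sums <- function(n, s, ss) (ss - s^2 / n) / n
set.seed(2); x <- rnorm(1e4)
var_from_sums(length(x), sum(x), sum(x^2))   # agrees with mean((x - mean(x))^2)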
52,670 | Bernoulli random variable parameter estimation | Instead of tedious derivations, simply invoke the invariance property of MLEs. Then solve for $x$ using basic algebra. However, note that this approach will lead to the same estimator you derived, i.e., $\hat{x}=-\log(2m/n -1)$.
So what to do? First, ignore estimation for the moment. Look at what the true value of $x$ would be if you knew the true value of $\theta$.
Suppose $\theta = 0$. Then, $e^{-x} = -1$, which implies that $x = - \log (-1)$. As you noted, this is not a real value.
So what about $\theta = 1/2$? We have that $e^{-x} = 0$, which is a problem for you yet again.
The point here is that with a couple of special cases based on known values, we can see that $x$ is undefined in a large number of cases. Your estimation is not wrong per se. The value of $x$ is simply undefined (not real) for certain values of $\theta$. | Bernoulli random variable parameter estimation | Instead of tedious derivations, simply invoke the invariance property of MLEs. Then solve for $x$ using basic algebra. However, note that this approach will lead to the same estimator you derived, i.e | Bernoulli random variable parameter estimation
Instead of tedious derivations, simply invoke the invariance property of MLEs. Then solve for $x$ using basic algebra. However, note that this approach will lead to the same estimator you derived, i.e., $\hat{x}=-\log(2m/n -1)$.
So what to do? First, ignore estimation for the moment. Look at what the true value of $x$ would be if you knew the true value of $\theta$.
Suppose $\theta = 0$. Then, $e^{-x} = -1$, which implies that $x = - \log (-1)$. As you noted, this is not a real value.
So what about $\theta = 1/2$? We have that $e^{-x} = 0$, which is a problem for you yet again.
The point here is that with a couple of special cases based on known values, we can see that $x$ is undefined in a large number of cases. Your estimation is not wrong per se. The value of $x$ is simply undefined (not real) for certain values of $\theta$. | Bernoulli random variable parameter estimation
Instead of tedious derivations, simply invoke the invariance property of MLEs. Then solve for $x$ using basic algebra. However, note that this approach will lead to the same estimator you derived, i.e |
52,671 | Bernoulli random variable parameter estimation | Since your knowledge about $\theta$ restricts the parameter space to $\Theta = (\tfrac{1}{2}, 1)$, you need to respect that when solving for the maximum likelihood. In other words, the maximum likelihood estimator is the solution to the constrained maximization problem, not the unconstrained maximization problem you've solved.
The log-likelihood you're interested in is
$$
\ln \mathcal{L}(\theta)
=
(n - m)\ln(1 - \theta) + m\ln\theta
$$
and it turns out that it's concave almost everywhere, since
$$
\frac{d^2}{d\theta^2} \ln\mathcal{L}(\theta)
=
-\frac{n - m}{(1 - \theta)^2} - \frac{m}{\theta^2}
<
0 \text{ for all } \theta \not\in \{0, 1\}.
$$
As you've already pointed out, the unconstrained maximum is at $\theta_0 = m / n$. When this falls within your constraint $\Theta$, it is the maximum. When $\theta_0 \leq \tfrac{1}{2}$, the log-likelihood increases as $\theta$ moves towards $\theta_0$. However, the constraint requires that $\theta > \tfrac{1}{2}$, so the constrained maximum does not exist, and consequently, neither does the MLE. A similar situation occurs when $\theta_0 = 1$. Regardless of the actual value of $\theta_0$, the MLE does not exist because these situations are possible.
If you are willing to compromise and allow $x \in [0, \infty]$, so that $\Theta = [\tfrac{1}{2}, 1]$, then by the above it follows that the MLE of $\theta$ is
$$
\hat{\theta} = \max(\tfrac{1}{2}, \tfrac{m}{n})
$$
From a practical perspective, if you really need $x \in (0, \infty)$, you might instead assume $x \in [\epsilon, -\ln\epsilon]$, where $\epsilon$ is a very small number. Alternatively, you could draw more observations; if your model is correct, eventually $\hat{\theta}$ should fall in $(\tfrac{1}{2}, 1)$. If it does not, you've either been hit by very bad luck, or you need to reconsider the validity of the model.
In situations where the log-likelihood is not so amenable to direct analysis, you could also solve for the MLE using constrained optimization techniques such as Lagrange multipliers.
Once you've found the MLE of $\theta$, then as John A. Ramey pointed out in his answer, you can invoke the invariance property of MLEs.
Edit: after thinking about this more, I've added a few more details. | Bernoulli random variable parameter estimation | Since your knowledge about $\theta$ restricts the parameter space to $\Theta = (\tfrac{1}{2}, 1)$, you need to respect that when solving for the maximum likelihood. In other words, the maximum likelih | Bernoulli random variable parameter estimation
Since your knowledge about $\theta$ restricts the parameter space to $\Theta = (\tfrac{1}{2}, 1)$, you need to respect that when solving for the maximum likelihood. In other words, the maximum likelihood estimator is the solution to the constrained maximization problem, not the unconstrained maximization problem you've solved.
The log-likelihood you're interested in is
$$
\ln \mathcal{L}(\theta)
=
(n - m)\ln(1 - \theta) + m\ln\theta
$$
and it turns out that it's concave almost everywhere, since
$$
\frac{d^2}{d\theta^2} \ln\mathcal{L}(\theta)
=
-\frac{n - m}{(1 - \theta)^2} - \frac{m}{\theta^2}
<
0 \text{ for all } \theta \not\in \{0, 1\}.
$$
As you've already pointed out, the unconstrained maximum is at $\theta_0 = m / n$. When this falls within your constraint $\Theta$, it is the maximum. When $\theta_0 \leq \tfrac{1}{2}$, the log-likelihood increases as $\theta$ moves towards $\theta_0$. However, the constraint requires that $\theta > \tfrac{1}{2}$, so the constrained maximum does not exist, and consequently, neither does the MLE. A similar situation occurs when $\theta_0 = 1$. Regardless of the actual value of $\theta_0$, the MLE does not exist because these situations are possible.
If you are willing to compromise and allow $x \in [0, \infty]$, so that $\Theta = [\tfrac{1}{2}, 1]$, then by the above it follows that the MLE of $\theta$ is
$$
\hat{\theta} = \max(\tfrac{1}{2}, \tfrac{m}{n})
$$
From a practical perspective, if you really need $x \in (0, \infty)$, you might instead assume $x \in [\epsilon, -\ln\epsilon]$, where $\epsilon$ is a very small number. Alternatively, you could draw more observations; if your model is correct, eventually $\hat{\theta}$ should fall in $(\tfrac{1}{2}, 1)$. If it does not, you've either been hit by very bad luck, or you need to reconsider the validity of the model.
In situations where the log-likelihood is not so amenable to direct analysis, you could also solve for the MLE using constrained optimization techniques such as Lagrange multipliers.
Once you've found the MLE of $\theta$, then as John A. Ramey pointed out in his answer, you can invoke the invariance property of MLEs.
Edit: after thinking about this more, I've added a few more details. | Bernoulli random variable parameter estimation
Since your knowledge about $\theta$ restricts the parameter space to $\Theta = (\tfrac{1}{2}, 1)$, you need to respect that when solving for the maximum likelihood. In other words, the maximum likelih |
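A small R sketch of the closed-form result, taking $\theta = (1 + e^{-x})/2$ from the question as quoted in these answers (the counts m and n are made up):
n <- 50; m <- 22
theta_hat <- max(1/2, m/n)                 # MLE on the closed parameter space [1/2, 1]
x_hat <- if (theta_hat > 1/2 && theta_hat < 1) -log(2 * theta_hat - 1) else NA
c(theta_hat, x_hat)                        # here m/n <= 1/2, so x_hat is undefined (NA)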
52,672 | Is the model wrong if a coefficient changes from minus in correlation table to plus in OLS? | No, this doesn't imply 'the model is wrong' in the least. It's telling you that you should be wary of interpreting raw correlations when other important variables exist.
Here's a set of data I just generated (in R). The sample correlation between y and x1 is negative:
print(cor(cbind(y,x1,x2)),d=3)
y x1 x2
y 1.0000 -0.0772 -0.830
x1 -0.0772 1.0000 0.196
x2 -0.8299 0.1961 1.000
Yet the coefficient in the regression is positive:
summary(lm(y~x1+x2))
... [snip]
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 11.8231 2.6183 4.516 9.73e-05 ***
x1 0.1203 0.1412 0.852 0.401
x2 -5.8462 0.7201 -8.119 5.94e-09 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 4.466 on 29 degrees of freedom
Multiple R-squared: 0.6963, Adjusted R-squared: 0.6753
F-statistic: 33.24 on 2 and 29 DF, p-value: 3.132e-08
Is the 'model' wrong? No, I fitted the same model I used to create the data, one that satisfies all the regression assumptions,
$y = 9 + 0.2 x_1 - 5 x_2 + e $, where $e_i \sim N(0,4^2)$,
or in R: y= 9 + 0.2*x1 -5*x2 + rnorm(length(x2),0,4)
So how does this happen?
Look at two things. First, look at the plot of $y$ vs $x_1$:
And we see a (very slight in this case) negative correlation.
Now look at the same plot, but with the values at a particular value of $x_2$ ($x_2=4$) marked in red:
... at a given value of $x_2$, the relationship with $x_1$ is increasing, not decreasing. The same happens at the other values of $x_2$. For each value of $x_2$, the relationship between $y$ and $x_1$ is positive. So why is the correlation negative? Because $x_1$ and $x_2$ are related.
If we want to look at correlation and have it correspond to the regression, the partial correlation rather than the raw correlation is the relevant quantity; here's the table of partial correlations (using package ppcor):
print(pcor(cbind(y,x1,x2))$estimate,d=3)
y x1 x2
y 1.000 0.156 -0.833
x1 0.156 1.000 0.237
x2 -0.833 0.237 1.000
We see the partial correlation between $y$ and $x_1$ controlling for $x_2$ is positive.
It wasn't the regression results that one had to beware of, it was the misleading impression from looking at the raw correlation.
Incidentally, it's also quite possible to make it so both the correlation and regression coefficient are significantly different from zero and of opposite sign ... and there's still nothing wrong with the model. | Is the model wrong if a coefficient changes from minus in correlation table to plus in OLS? | No, this doesn't imply 'the model is wrong' in the least. It's telling you that you should be wary of interpreting raw correlations when other important variables exist.
Here's a set of data I just ge | Is the model wrong if a coefficient changes from minus in correlation table to plus in OLS?
No, this doesn't imply 'the model is wrong' in the least. It's telling you that you should be wary of interpreting raw correlations when other important variables exist.
Here's a set of data I just generated (in R). The sample correlation between y and x1 is negative:
print(cor(cbind(y,x1,x2)),d=3)
y x1 x2
y 1.0000 -0.0772 -0.830
x1 -0.0772 1.0000 0.196
x2 -0.8299 0.1961 1.000
Yet the coefficient in the regression is positive:
summary(lm(y~x1+x2))
... [snip]
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 11.8231 2.6183 4.516 9.73e-05 ***
x1 0.1203 0.1412 0.852 0.401
x2 -5.8462 0.7201 -8.119 5.94e-09 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 4.466 on 29 degrees of freedom
Multiple R-squared: 0.6963, Adjusted R-squared: 0.6753
F-statistic: 33.24 on 2 and 29 DF, p-value: 3.132e-08
Is the 'model' wrong? No, I fitted the same model I used to create the data, one that satisfies all the regression assumptions,
$y = 9 + 0.2 x_1 - 5 x_2 + e $, where $e_i \sim N(0,4^2)$,
or in R: y= 9 + 0.2*x1 -5*x2 + rnorm(length(x2),0,4)
So how does this happen?
Look at two things. First, look at the plot of $y$ vs $x_1$:
And we see a (very slight in this case) negative correlation.
Now look at the same plot, but with the values at a particular value of $x_2$ ($x_2=4$) marked in red:
... at a given value of $x_2$, the relationship with $x_1$ is increasing, not decreasing. The same happens at the other values of $x_2$. For each value of $x_2$, the relationship between $y$ and $x_1$ is positive. So why is the correlation negative? Because $x_1$ and $x_2$ are related.
If we want to look at correlation and have it correspond to the regression, the partial correlation rather than the raw correlation is the relevant quantity; here's the table of partial correlations (using package ppcor):
print(pcor(cbind(y,x1,x2))$estimate,d=3)
y x1 x2
y 1.000 0.156 -0.833
x1 0.156 1.000 0.237
x2 -0.833 0.237 1.000
We see the partial correlation between $y$ and $x_1$ controlling for $x_2$ is positive.
It wasn't the regression results that one had to beware of, it was the misleading impression from looking at the raw correlation.
Incidentally, it's also quite possible to make it so both the correlation and regression coefficient are significantly different from zero and of opposite sign ... and there's still nothing wrong with the model. | Is the model wrong if a coefficient changes from minus in correlation table to plus in OLS?
No, this doesn't imply 'the model is wrong' in the least. It's telling you that you should be wary of interpreting raw correlations when other important variables exist.
Here's a set of data I just ge |
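The answer does not show how x1 and x2 were generated; a hypothetical data-generating step that reproduces the same qualitative sign flip is:
set.seed(123)
x2 <- runif(32, 1, 8)
x1 <- 2 + 0.5 * x2 + rnorm(32, sd = 2)      # x1 positively related to x2
y  <- 9 + 0.2 * x1 - 5 * x2 + rnorm(32, 0, 4)
cor(y, x1)                                  # marginal correlation: negative, because x1 carries x2
coef(lm(y ~ x1 + x2))["x1"]                 # partial slope: estimates the true +0.2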
52,673 | Is the model wrong if a coefficient changes from minus in correlation table to plus in OLS? | In addition to looking at the coefficients, you should also look at their confidence intervals. If the interval is quite wide then a change from $0.351$ to $-0.150$ could be explained by random chance. Even if the intervals are narrow (and show significant difference) a change in sign is not uncommon.
Remember that the interpretation of a single slope is the effect of changing that variable while holding all others constant. Even with moderate correlation (well below the level that would cause a VIF to be interesting) this can be an unreasonable assumption that does not help with interpretation and can lead to the reversal. Think of what it means to increase someone's height while keeping their weight constant.
Another example. $Y$ is the value of all the coins in a person's pocket, $X_1$ is the total number of coins, $X_2$ is the number of coins in the pocket that are not quarters (or the highest common denomination of coin for the region). We would expect positive correlation between all 3 variables, but if we hold $X_1$ constant and increase $X_2$ then $Y$ would decrease. | Is the model wrong if a coefficient changes from minus in correlation table to plus in OLS? | In addition to looking at the coefficients, you should also look at their confidence intervals. If the interval is quite wide then a change from $0.351$ to $-0.150$ could be explained by random chanc | Is the model wrong if a coefficient changes from minus in correlation table to plus in OLS?
In addition to looking at the coefficients, you should also look at their confidence intervals. If the interval is quite wide then a change from $0.351$ to $-0.150$ could be explained by random chance. Even if the intervals are narrow (and show significant difference) a change in sign is not uncommon.
Remember that the interpretation of a single slope is the effect of changing that variable while holding all others constant. Even with moderate correlation (well below the level that would cause a VIF to be interesting) this can be an unreasonable assumption that does not help with interpretation and can lead to the reversal. Think of what it means to increase someone's height while keeping their weight constant.
Another example. $Y$ is the value of all the coins in a person's pocket, $X_1$ is the total number of coins, $X_2$ is the number of coins in the pocket that are not quarters (or the highest common denomination of coin for the region). We would expect positive correlation between all 3 variables, but if we hold $X_1$ constant and increase $X_2$ then $Y$ would decrease. | Is the model wrong if a coefficient changes from minus in correlation table to plus in OLS?
In addition to looking at the coefficients, you should also look at their confidence intervals. If the interval is quite wide then a change from $0.351$ to $-0.150$ could be explained by random chanc |
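The coins example is easy to simulate in R (treating the non-quarters as nickels is an illustrative assumption):
set.seed(7)
quarters <- rpois(500, 4); others <- rpois(500, 6)
Y  <- 25 * quarters + 5 * others            # total value in cents
X1 <- quarters + others                     # total number of coins
X2 <- others                                # number of non-quarter coins
cor(cbind(Y, X1, X2))                       # all pairwise correlations positive
coef(lm(Y ~ X1 + X2))                       # X2 coefficient negative once X1 is held fixed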
52,674 | Strange outcome when performing nonlinear least squares fit to a power law | It would be fairly unusual to find power law data for which the variance is actually constant across $x$ values. However, let's take that constant variance as given and proceed with the analysis as a least squares problem.
The easy way to fit this particular model is actually to use GLMs, not NLS! No need for starting values.
I don't have Matlab on this machine, but in R:
x <- seq(4,6.5,0.5)
y <- 1.0e-04 * c(0.011929047861902, 0.026053683604530, 0.057223759612162,
0.117413370572612, 0.242357468772138, 0.462327165034928)
powfit <- glm(y~log(x),family=gaussian(link=log))
summary(powfit)
[snip]
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -25.05254 0.16275 -153.93 1.07e-08 ***
log(x) 8.05095 0.08824 91.24 8.65e-08 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
(Dispersion parameter for gaussian family taken to be 6.861052e-14)
Null deviance: 1.5012e-09 on 5 degrees of freedom
Residual deviance: 2.2273e-13 on 4 degrees of freedom
AIC: -162.52
Number of Fisher Scoring iterations: 1
> plot(x,y,log="xy")
> lines(x,fitted(powfit))
I highly recommend using Gaussian GLMs with the relevant link function whenever your NLS model can be cast in that form (it can for exponential and power functions, for example).
As to why you're having trouble with it in Matlab, I'm not sure - could it be to do with your convergence criteria? There's no issue with the data at all, but I suggest you start it off at the linear regression coefficients:
> lm(log(y)~log(x))
Call:
lm(formula = log(y) ~ log(x))
Coefficients:
(Intercept) log(x)
-24.203 7.568
So start your parameters at $\exp(-24.2)$ and $7.57$ and see how that works. Or even better, do it the easy way and you can forget about starting values altogether.
Here's my results in R using nls, first, starting with $b$ at the value from the linear log-log fit:
nls(y~a*x^b,start=list(a=exp(-24.2),b=7.57))
Nonlinear regression model
model: y ~ a * x^b
data: parent.frame()
a b
1.290e-11 8.063e+00
residual sum-of-squares: 2.215e-13
Number of iterations to convergence: 10
Achieved convergence tolerance: 1.237e-07
And then shifting to starting at b=10:
nls(y~a*x^b,start=list(a=exp(-24.2),b=10))
Nonlinear regression model
model: y ~ a * x^b
data: parent.frame()
a b
1.290e-11 8.063e+00
residual sum-of-squares: 2.215e-13
Number of iterations to convergence: 19
Achieved convergence tolerance: 9.267e-08
As you see, they're very close.
The GLM didn't quite match the estimates of the parameters - though a stricter convergence criterion should improve that (glm is only taking one step before concluding it's converged above). It doesn't make much difference to the fit, though. With a bit of fiddling, it's possible to find a closer fit with the GLM (better than the nls), but you start getting down to very tiny deviance (around $10^{-14}$), and I worry if we're really just shifting around numerical error down that far. | Strange outcome when performing nonlinear least squares fit to a power law | It would be fairly unusual to find power law data for which the variance is actually constant across $x$ values. However, let's take that constant variance as given and proceed with the analysis as a | Strange outcome when performing nonlinear least squares fit to a power law
It would be fairly unusual to find power law data for which the variance is actually constant across $x$ values. However, let's take that constant variance as given and proceed with the analysis as a least squares problem.
The easy way to fit this particular model is actually to use GLMs, not NLS! No need for starting values.
I don't have Matlab on this machine, but in R:
x <- seq(4,6.5,0.5)
y <- 1.0e-04 * c(0.011929047861902, 0.026053683604530, 0.057223759612162,
0.117413370572612, 0.242357468772138, 0.462327165034928)
powfit <- glm(y~log(x),family=gaussian(link=log))
summary(powfit)
[snip]
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -25.05254 0.16275 -153.93 1.07e-08 ***
log(x) 8.05095 0.08824 91.24 8.65e-08 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
(Dispersion parameter for gaussian family taken to be 6.861052e-14)
Null deviance: 1.5012e-09 on 5 degrees of freedom
Residual deviance: 2.2273e-13 on 4 degrees of freedom
AIC: -162.52
Number of Fisher Scoring iterations: 1
> plot(x,y,log="xy")
> lines(x,fitted(powfit))
I highly recommend using Gaussian GLMs with the relevant link function whenever your NLS model can be cast in that form (it can for exponential and power functions, for example).
As to why you're having trouble with it in Matlab, I'm not sure - could it be to do with your convergence criteria? There's no issue with the data at all, but I suggest you start it off at the linear regression coefficients:
> lm(log(y)~log(x))
Call:
lm(formula = log(y) ~ log(x))
Coefficients:
(Intercept) log(x)
-24.203 7.568
So start your parameters at $\exp(-24.2)$ and $7.57$ and see how that works. Or even better, do it the easy way and you can forget about starting values altogether.
Here's my results in R using nls, first, starting with $b$ at the value from the linear log-log fit:
nls(y~a*x^b,start=list(a=exp(-24.2),b=7.57))
Nonlinear regression model
model: y ~ a * x^b
data: parent.frame()
a b
1.290e-11 8.063e+00
residual sum-of-squares: 2.215e-13
Number of iterations to convergence: 10
Achieved convergence tolerance: 1.237e-07
And then shifting to starting at b=10:
nls(y~a*x^b,start=list(a=exp(-24.2),b=10))
Nonlinear regression model
model: y ~ a * x^b
data: parent.frame()
a b
1.290e-11 8.063e+00
residual sum-of-squares: 2.215e-13
Number of iterations to convergence: 19
Achieved convergence tolerance: 9.267e-08
As you see, they're very close.
The GLM didn't quite match the estimates of the parameters - though a stricter convergence criterion should improve that (glm is only taking one step before concluding it's converged above). It doesn't make much difference to the fit, though. With a bit of fiddling, it's possible to find a closer fit with the GLM (better than the nls), but you start getting down to very tiny deviance (around $10^{-14}$), and I worry if we're really just shifting around numerical error down that far. | Strange outcome when performing nonlinear least squares fit to a power law
It would be fairly unusual to find power law data for which the variance is actually constant across $x$ values. However, let's take that constant variance as given and proceed with the analysis as a |
52,675 | Strange outcome when performing nonlinear least squares fit to a power law | Now that you have explored a couple ways of fitting a power-law, why not try out the recommended way. For the continuous case, the MLE for the scaling parameter is:
$$\hat \alpha = 1 + n \left[ \sum_{i=1}^n \ln \frac{x_i}{x_{min}} \right]^{-1}$$
One of the authors offers additional (somewhat acerbic) advice in his blog; he sort of has a thing about power-laws and their misuse:
Abusing linear regression makes the baby Gauss cry. Fitting a line to your log-log plot by least squares is a bad idea. It generally doesn't even give you a probability distribution, and even if your data do follow a power-law distribution, it gives you a bad estimate of the parameters. You cannot use the error estimates your regression software gives you, because those formulas incorporate assumptions which directly contradict the idea that you are seeing samples from a power law. And no, you cannot claim that because the line "explains" (really, describes) a lot of the variance that you must have a power law, because you can get a very high R^2 from other distributions (that test has no "power"). And this is without getting into the additional errors caused by trying to fit a line to binned histograms.
It's true that fitting lines on log-log graphs is what Pareto did back in the day when he started this whole power-law business, but "the day" was the 1890s. There's a time and a place for being old school; this isn't it.
TL;DR If this is a distribution, use the MLE. Check for goodness of fit and compare against alternative models. | Strange outcome when performing nonlinear least squares fit to a power law | Now that you have explored a couple ways of fitting a power-law, why not try out the recommended way. For the continuous case, the MLE for the scaling parameter is:
$$\hat \alpha = 1 + n \left[ \sum_{ | Strange outcome when performing nonlinear least squares fit to a power law
Now that you have explored a couple ways of fitting a power-law, why not try out the recommended way. For the continuous case, the MLE for the scaling parameter is:
$$\hat \alpha = 1 + n \left[ \sum_{i=1}^n \ln \frac{x_i}{x_{min}} \right]^{-1}$$
One of the authors offers additional (somewhat acerbic) advice in his blog; he sort of has a thing about power-laws and their misuse:
Abusing linear regression makes the baby Gauss cry. Fitting a line to your log-log plot by least squares is a bad idea. It generally doesn't even give you a probability distribution, and even if your data do follow a power-law distribution, it gives you a bad estimate of the parameters. You cannot use the error estimates your regression software gives you, because those formulas incorporate assumptions which directly contradict the idea that you are seeing samples from a power law. And no, you cannot claim that because the line "explains" (really, describes) a lot of the variance that you must have a power law, because you can get a very high R^2 from other distributions (that test has no "power"). And this is without getting into the additional errors caused by trying to fit a line to binned histograms.
It's true that fitting lines on log-log graphs is what Pareto did back in the day when he started this whole power-law business, but "the day" was the 1890s. There's a time and a place for being old school; this isn't it.
TL;DR If this is a distribution, use the MLE. Check for goodness of fit and compare against alternative models. | Strange outcome when performing nonlinear least squares fit to a power law
Now that you have explored a couple ways of fitting a power-law, why not try out the recommended way. For the continuous case, the MLE for the scaling parameter is:
$$\hat \alpha = 1 + n \left[ \sum_{ |
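A minimal R implementation of that estimator, assuming $x_{min}$ is known (the simulated data are only there to check it):
alpha_hat <- function(x, xmin) 1 + sum(x >= xmin) / sum(log(x[x >= xmin] / xmin))
set.seed(1)
xmin <- 1; alpha <- 2.5
x <- xmin * runif(5000)^(-1 / (alpha - 1))   # inverse-CDF draw from a continuous power law
alpha_hat(x, xmin)                           # should come out close to 2.5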
52,676 | Strange outcome when performing nonlinear least squares fit to a power law | The problem with your nonlinear regression is your initial estimate of A. You say you set the initial value to 1.0 "because it doesn't seem to matter". Try plotting a curve using your initial values, and you'll see that it is very far away from your data points, so far that the nonlinear regression algorithm is stuck and can't figure out whether to increase or decrease those initial values. Try 1e-10 as the initial value for A and 5 for the initial value of B. Now nonlinear regression works fine. Here are the results from GraphPad Prism: | Strange outcome when performing nonlinear least squares fit to a power law | The problem with your nonlinear regression is your initial estimate of A. You say you set the initial value to 1.0 "because it doesn't seem to matter". Try plotting a curve using your initial values, | Strange outcome when performing nonlinear least squares fit to a power law
The problem with your nonlinear regression is your initial estimate of A. You say you set the initial value to 1.0 "because it doesn't seem to matter". Try plotting a curve using your initial values, and you'll see that it is very far away from your data points, so far that the nonlinear regression algorithm is stuck and can't figure out whether to increase or decrease those initial values. Try 1e-10 as the initial value for A and 5 for the initial value of B. Now nonlinear regression works fine. Here are the results from GraphPad Prism: | Strange outcome when performing nonlinear least squares fit to a power law
The problem with your nonlinear regression is your initial estimate of A. You say you set the initial value to 1.0 "because it doesn't seem to matter". Try plotting a curve using your initial values, |
52,677 | Why is the k-means algorithm minimizing the within cluster variance? | Within-cluster-variance is a simple to understand measure of compactness (there are others, too).
So basically, the objective is to find the most compact partitioning of the data set into $k$ partitions.
K-Means, in the Lloyd version, actually originated from 1d PCM data as far as I know. So assuming you have a really bad telephone line, and someone is bleeping a number of tones on you, how do you assign frequencies to a scale of say 10 tones? Well, you can tell the other to just send a bulk of data, store the frequencies in a list, and then try to split it into 10 bins, such that these bins are somewhat compact and separated. Even when the frequencies are distorted by the transmission, there is a good chance they will still be separable with this approach.
This is also why k-means usually comes out best when you evaluate clusterings with any other measure of compactness. Because it's just two different measures for a similar concept. | Why is the k-means algorithm minimizing the within cluster variance? | Within-cluster-variance is a simple to understand measure of compactness (there are others, too).
So basically, the objective is to find the most compact partitioning of the data set into $k$ partitio | Why is the k-means algorithm minimizing the within cluster variance?
Within-cluster-variance is a simple to understand measure of compactness (there are others, too).
So basically, the objective is to find the most compact partitioning of the data set into $k$ partitions.
K-Means, in the Lloyd version, actually originated from 1d PCM data as far as I know. So assuming you have a really bad telephone line, and someone is bleeping a number of tones on you, how do you assign frequencies to a scale of say 10 tones? Well, you can tell the other to just send a bulk of data, store the frequencies in a list, and then try to split it into 10 bins, such that these bins are somewhat compact and separated. Even when the frequencies are distorted by the transmission, there is a good chance they will still be separable with this approach.
This is also why k-means usually comes out best when you evaluate clusterings with any other measure of compactness. Because it's just two different measures for a similar concept. | Why is the k-means algorithm minimizing the within cluster variance?
Within-cluster-variance is a simple to understand measure of compactness (there are others, too).
So basically, the objective is to find the most compact partitioning of the data set into $k$ partitio |
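A small R sketch of the tones picture: k-means on 1-d data around three frequencies, reporting the within-cluster sum of squares it minimizes.
set.seed(42)
x  <- c(rnorm(50, 0, 0.5), rnorm(50, 5, 0.5), rnorm(50, 10, 0.5))
km <- kmeans(x, centers = 3, nstart = 20)
km$tot.withinss             # pooled within-cluster sum of squares (the k-means objective)
sort(km$centers)            # centres recovered near 0, 5 and 10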
52,678 | Why is the k-means algorithm minimizing the within cluster variance? | There are several questions here at very different levels. In essence every text on cluster analysis is an answer. You have to keep reading!
Variance is at one level just one statistical standard which statistical people find convenient to think about. Roughly, minimising variance encourages -- nay, enforces -- clusters as relatively tight balls. What can be a limitation in much of statistics, the sensitivity of means and variances to squared deviations, can be a virtue in cluster analysis in so far as clusters are tight and compact.
But yes, there are many, many other ways of finding clusters, some but not all of which can be posed as minimising or maximising an objective function.
For every enthusiastic account of cluster analysis you read, you should try to impute an opposite statistical view, namely that cluster analysis is a vaguely posed problem to which there can be no well-defined answer, and that it is oversold to the desperate and the clueless as a way of finding structure which may or may not exist. (The meta-assumption that data must clump, somehow, cannot be well tested by any method whose very purpose is to identify clusters.)
There have been attempts to show that one method of cluster analysis is "the right way", but it's my impression that people never get past arguing about the first principles. If you can give a precise definition of what you desire about a clustering, someone expert can guide you on methods to achieve that. | Why is the k-means algorithm minimizing the within cluster variance? | There are several questions here at very different levels. In essence every text on cluster analysis is an answer. You have to keep reading!
Variance is at one level just one statistical standard whi | Why is the k-means algorithm minimizing the within cluster variance?
There are several questions here at very different levels. In essence every text on cluster analysis is an answer. You have to keep reading!
Variance is at one level just one statistical standard which statistical people find convenient to think about. Roughly, minimising variance encourages -- nay, enforces -- clusters as relatively tight balls. What can be a limitation in much of statistics, the sensitivity of means and variances to squared deviations, can be a virtue in cluster analysis in so far as clusters are tight and compact.
But yes, there are many, many other ways of finding clusters, some but not all of which can be posed as minimising or maximising an objective function.
For every enthusiastic account of cluster analysis you read, you should try to impute an opposite statistical view, namely that cluster analysis is a vaguely posed problem to which there can be no well-defined answer, and that it is oversold to the desperate and the clueless as a way of finding structure which may or may not exist. (The meta-assumption that data must clump, somehow, cannot be well tested by any method whose very purpose is to identify clusters.)
There have been attempts to show that one method of cluster analysis is "the right way", but it's my impression that people never get past arguing about the first principles. If you can give a precise definition of what you desire about a clustering, someone expert can guide you on methods to achieve that. | Why is the k-means algorithm minimizing the within cluster variance?
There are several questions here at very different levels. In essence every text on cluster analysis is an answer. You have to keep reading!
Variance is at one level just one statistical standard whi |
52,679 | Which glm algorithm to use when predictors are numerical as well as categorical? | When your dependent variable is binary ($1$ vs. $0$, "dead" vs. "alive"), then you might use logistic regression, which is a glm with a binomial error distribution and a logit link function. When your dependent variable is ordinal (e.g. "bad" > "good" > "best"), you can use ordinal logistic regression. For a nominal (e.g. transportation: "walk", "car", "bicycle") dependent variable, you can use multinomial logistic regression.
EDIT
Your approach to convert the disease status into a 0,1-variable seems correct. If your outcome is continuous, you could use a GLM with a gaussian error distribution and an identity link function which is equivalent to a simple multiple regression model (OLS). | Which glm algorithm to use when predictors are numerical as well as categorical? | When your dependent variable is binary ($1$ vs. $0$, "dead" vs. "alive"), the you might use logistic regression which is a glm with a binomial error distribution and a logit link function. When your d | Which glm algorithm to use when predictors are numerical as well as categorical?
When your dependent variable is binary ($1$ vs. $0$, "dead" vs. "alive"), then you might use logistic regression, which is a glm with a binomial error distribution and a logit link function. When your dependent variable is ordinal (e.g. "bad" > "good" > "best"), you can use ordinal logistic regression. For a nominal (e.g. transportation: "walk", "car", "bicycle") dependent variable, you can use multinomial logistic regression.
EDIT
Your approach to convert the disease status into a 0,1-variable seems correct. If your outcome is continuous, you could use a GLM with a gaussian error distribution and an identity link function which is equivalent to a simple multiple regression model (OLS). | Which glm algorithm to use when predictors are numerical as well as categorical?
When your dependent variable is binary ($1$ vs. $0$, "dead" vs. "alive"), the you might use logistic regression which is a glm with a binomial error distribution and a logit link function. When your d |
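A runnable R sketch with simulated data (the ages, sexes and disease indicator are made up) showing a logistic-regression GLM with one numeric and one categorical predictor:
set.seed(1)
d <- data.frame(age = rnorm(200, 50, 10),
                sex = factor(sample(c("f", "m"), 200, replace = TRUE)))
d$disease <- rbinom(200, 1, plogis(-6 + 0.1 * d$age + 0.5 * (d$sex == "m")))
fit <- glm(disease ~ age + sex, family = binomial(link = "logit"), data = d)
summary(fit)$coefficients   # the factor sex enters automatically as a dummy variable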
52,680 | Which glm algorithm to use when predictors are numerical as well as categorical? | The algorithm used by any implementation of generalised linear models is immaterial here -- or at least if there are grounds to choose one algorithm rather than another they don't hinge on any detail you mention.
Typically you present categorical predictors to GLM commands or functions as indicator variables (dummy variables, in a common vulgar terminology).
Quite how you do this depends a little on the software, but at the most complicated you just need to calculate your indicators in advance.
If you have a more specific question, please ask it. | Which glm algorithm to use when predictors are numerical as well as categorical? | The algorithm used by any implementation of generalised linear models is immaterial here -- or at least if there are grounds to choose one algorithm rather than another they don't hinge on any detail | Which glm algorithm to use when predictors are numerical as well as categorical?
The algorithm used by any implementation of generalised linear models is immaterial here -- or at least if there are grounds to choose one algorithm rather than another they don't hinge on any detail you mention.
Typically you present categorical predictors to GLM commands or functions as indicator variables (dummy variables, in a common vulgar terminology).
Quite how you do this depends a little on the software, but at the most complicated you just need to calculate your indicators in advance.
If you have a more specific question, please ask it. | Which glm algorithm to use when predictors are numerical as well as categorical?
The algorithm used by any implementation of generalised linear models is immaterial here -- or at least if there are grounds to choose one algorithm rather than another they don't hinge on any detail |
52,681 | Which glm algorithm to use when predictors are numerical as well as categorical? | glm requires that all variables be numeric. Typically, you convert categorical variables to numeric variables using dummy variables. | Which glm algorithm to use when predictors are numerical as well as categorical? | glm requires that all variables be numeric. Typically, you convert categorical variables to numeric variables using dummy variables. | Which glm algorithm to use when predictors are numerical as well as categorical?
glm requires that all variables be numeric. Typically, you convert categorical variables to numeric variables using dummy variables. | Which glm algorithm to use when predictors are numerical as well as categorical?
glm requires that all variables be numeric. Typically, you convert categorical variables to numeric variables using dummy variables. |
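In R the dummy coding can be inspected directly with model.matrix (a toy factor for illustration):
d <- data.frame(transport = factor(c("walk", "car", "bicycle", "car")))
model.matrix(~ transport, data = d)   # the factor expands into indicator (dummy) columns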
52,682 | Homogeneity testing of baseline characteristics in medical trials | It is generally thought nowadays that testing for baseline differences in randomized experiments is misleading. Stephen Senn's book Statistical Issues in Drug Development discusses this. One of the many issues involved is that you never know when to stop. How many uncollected variables do you go back and collect in order to test for balance? Couple that with low power and no real interpretation and it's a waste of time unless cheating is suspected. | Homogeneity testing of baseline characteristics in medical trials | It is generally thought nowadays that testing for baseline differences in randomized experiments is misleading. Stephen Senn's book Statistical Issues in Drug Development discusses this. One of the | Homogeneity testing of baseline characteristics in medical trials
It is generally thought nowadays that testing for baseline differences in randomized experiments is misleading. Stephen Senn's book Statistical Issues in Drug Development discusses this. One of the many issues involved is that you never know when to stop. How many uncollected variables do you go back and collect in order to test for balance? Couple that with low power and no real interpretation and it's a waste of time unless cheating is suspected. | Homogeneity testing of baseline characteristics in medical trials
It is generally thought nowadays that testing for baseline differences in randomized experiments is misleading. Stephen Senn's book Statistical Issues in Drug Development discusses this. One of the |
52,683 | Homogeneity testing of baseline characteristics in medical trials | This is not actually an answer to your question but the logic behind such tests seems fundamentally misguided, no matter what the specifics are.
If treatment assignment is not or cannot be properly randomized, showing that both groups have approximately the same characteristics on some arbitrary set of variables is not going to replace randomization. If treatment assignment is in fact properly randomized, tests on demographic characteristics provide absolutely no information. At the conventional level, one in twenty tests should be significant because the null hypothesis is true by construction.
Furthermore, why should you care about the groups' composition? If variables like age or gender do not interact with the treatment, it does not matter in the least. If, on the other hand, you have reasons to believe your treatment does not produce the same effect in different subgroups, you will lose power but randomization ensures that it does not threaten inferences. At the same time, even groups with exactly the same composition will not help you improve power or understand the effect in each subgroup. For that, you need to include the relevant variable in the model or estimate the effect separately on each subgroup.
In any case, interpreting a large p-value as evidence that there is no difference is mistaken, especially for such small sample sizes. If you consider a situation in which treatment cannot be randomized, the power of a test for differences in age or gender will of course depend a lot on sample size. With a small sample, you have basically no power to detect anything but obvious differences, even if smaller differences do matter. With a large sample, you will find small differences (e.g. a few months differences in age) to be “significant” even if they are so small so as to have absolutely no effect on your outcome. | Homogeneity testing of baseline characteristics in medical trials | This is not actually an answer to your question but the logic behind such tests seems fundamentally misguided, no matter what the specifics are.
If treatment assignment is not or cannot be properly ra | Homogeneity testing of baseline characteristics in medical trials
This is not actually an answer to your question but the logic behind such tests seems fundamentally misguided, no matter what the specifics are.
If treatment assignment is not or cannot be properly randomized, showing that both groups have approximately the same characteristics on some arbitrary set of variables is not going to replace randomization. If treatment assignment is in fact properly randomized, tests on demographic characteristics provide absolutely no information. At the conventional level, one in twenty tests should be significant because the null hypothesis is true by construction.
Furthermore, why should you care about the groups' composition? If variables like age or gender do not interact with the treatment, it does not matter in the least. If, on the other hand, you have reasons to believe your treatment does not produce the same effect in different subgroups, you will lose power but randomization ensures that it does not threaten inferences. At the same time, even groups with exactly the same composition will not help you improve power or understand the effect in each subgroup. For that, you need to include the relevant variable in the model or estimate the effect separately on each subgroup.
In any case, interpreting a large p-value as evidence that there is no difference is mistaken, especially for such small sample sizes. If you consider a situation in which treatment cannot be randomized, the power of a test for differences in age or gender will of course depend a lot on sample size. With a small sample, you have basically no power to detect anything but obvious differences, even if smaller differences do matter. With a large sample, you will find small differences (e.g. a few months differences in age) to be “significant” even if they are so small so as to have absolutely no effect on your outcome. | Homogeneity testing of baseline characteristics in medical trials
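To make the "true by construction" point concrete, here is a small simulation sketch of my own (not part of the original answer): under purely random assignment a baseline test rejects at roughly the nominal rate, no matter which covariate you pick.
set.seed(42)
p.vals <- replicate(2000, {
  age   <- rnorm(60, mean = 50, sd = 10)                    # a baseline covariate
  group <- factor(sample(rep(c("treat", "control"), 30)))   # purely random assignment, 30 per arm
  t.test(age ~ group)$p.value
})
mean(p.vals < 0.05)   # close to 0.05, i.e. about one test in twenty, by construction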
This is not actually an answer to your question but the logic behind such tests seems fundamentally misguided, no matter what the specifics are.
If treatment assignment is not or cannot be properly ra |
52,684 | Is there a simple rule for interpretation of Interactions (and their directions) in binary logistic regression? [duplicate] | A positive interaction effect between A and B means that when A increases, the effect (in this case log odds ratio) of B increases. A negative interaction effect means that when A increases, the effect of B decreases.
When interpreting the results, I often find it easiest to work in the odds metric rather than the log(odds) metric. I tend to start with the baseline odds just to refresh my (and my audience's) memory on what odds are, then continue to interpret the odds ratios of the main effects (odds ratios are literally that: ratios of odds), and then go on to the interaction effects, which in logistic regression are ratios of odds ratios. A complete example is given in my answer to Interpreting interaction terms in logit regression with categorical variables | Is there a simple rule for interpretation of Interactions (and their directions) in binary logistic | A positive interaction effect between A and B means that when A increases, the effect (in this case log odds ratio) of B increases. A negative interaction effect means that when A increases, the effec | Is there a simple rule for interpretation of Interactions (and their directions) in binary logistic regression? [duplicate]
A positive interaction effect between A and B means that when A increases, the effect (in this case log odds ratio) of B increases. A negative interaction effect means that when A increases, the effect of B decreases.
When interpreting the results, I often find it easiest to work in the odds metric rather than the log(odds) metric. I tend to start with the baseline odds just to refresh my (and my audience's) memory on what odds are, then continue to interpret the odds ratios of the main effects (odds ratios are literally that: ratios of odds), and then go on to the interaction effects, which in logistic regression are ratios of odds ratios. A complete example is given in my answer to Interpreting interaction terms in logit regression with categorical variables | Is there a simple rule for interpretation of Interactions (and their directions) in binary logistic
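As an illustrative sketch (simulated data and made-up effect sizes, not taken from the linked answer), the exponentiated interaction coefficient is exactly that ratio of odds ratios:
set.seed(7)
d <- data.frame(a = rbinom(500, 1, 0.5), b = rbinom(500, 1, 0.5))
d$y <- rbinom(500, 1, plogis(-1 + 0.5 * d$a + 0.8 * d$b + 0.7 * d$a * d$b))
fit <- glm(y ~ a * b, data = d, family = binomial)
exp(coef(fit))   # main effects are odds ratios; the a:b term is a ratio of odds ratios
# exp(coef(fit))["a:b"] equals (odds ratio of b when a = 1) / (odds ratio of b when a = 0)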
A positive interaction effect between A and B means that when A increases, the effect (in this case log odds ratio) of B increases. A negative interaction effect means that when A increases, the effec |
52,685 | Is there a simple rule for interpretation of Interactions (and their directions) in binary logistic regression? [duplicate] | I'm not familiar with binary logistic regression, but in interaction effects in general, the way you understand them is by plotting (usually means, perhaps different in this case?). That will allow you to see the relationship between different levels and state the interaction effect specifically. | Is there a simple rule for interpretation of Interactions (and their directions) in binary logistic | I'm not familiar with binary logistic regression, but in interaction effects in general, the way you understand them is by plotting (usually means, perhaps different in this case?). That will allow y | Is there a simple rule for interpretation of Interactions (and their directions) in binary logistic regression? [duplicate]
I'm not familiar with binary logistic regression, but in interaction effects in general, the way you understand them is by plotting (usually means, perhaps different in this case?). That will allow you to see the relationship between different levels and state the interaction effect specifically. | Is there a simple rule for interpretation of Interactions (and their directions) in binary logistic
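In R, for instance, interaction.plot() draws exactly this kind of cell-means plot (shown here on the built-in warpbreaks data purely as an illustration, not the asker's data):
# non-parallel lines suggest an interaction between the two factors
with(warpbreaks,
     interaction.plot(x.factor = tension, trace.factor = wool, response = breaks))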
I'm not familiar with binary logistic regression, but in interaction effects in general, the way you understand them is by plotting (usually means, perhaps different in this case?). That will allow y |
52,686 | Paired t-test and two-sample t-test | It would be bad -- very bad. If you ignore the pairing and use a two-sample t-test where you should have used the paired, chances are that you will not be able to detect the effect of interest. In this case, between subject variation is included in the estimate of variance used to measure the effect of interest. The variance is inflated, and only a substantial effect size will be deemed significant. Example: students are given a math test, followed by an instruction module, and then another test. If you ignore the pairing (before and after for each student), difference in ability is added to the effect of the module, thus making the effect harder to detect.
On the other hand, if you foist pairing onto an unpaired sample you reduce the degrees of freedom of the test. You are adding variables to the model (the pairing effect) that are basically random. In a sense, for each pair, you are adding a parameter for that pair -- but these parameters mean nothing. You would be adding noise, in a sense. | Paired t-test and two-sample t-test | It would be bad -- very bad. If you ignore the pairing and use a two-sample t-test where you should have used the paired, chances are that you will not be able to detect the effect of interest. In thi | Paired t-test and two-sample t-test
It would be bad -- very bad. If you ignore the pairing and use a two-sample t-test where you should have used the paired, chances are that you will not be able to detect the effect of interest. In this case, between subject variation is included in the estimate of variance used to measure the effect of interest. The variance is inflated, and only a substantial effect size will be deemed significant. Example: students are given a math test, followed by an instruction module, and then another test. If you ignore the pairing (before and after for each student), difference in ability is added to the effect of the module, thus making the effect harder to detect.
On the other hand, if you foist pairing onto an unpaired sample you reduce the degrees of freedom of the test. You are adding variables to the model (the pairing effect) that are basically random. In a sense, for each pair, you are adding a parameter for that pair -- but these parameters mean nothing. You would be adding noise, in a sense. | Paired t-test and two-sample t-test
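A quick simulation sketch of the test/retest example (my own illustration, with invented numbers) shows the power loss from ignoring genuine pairing:
set.seed(11)
ability <- rnorm(25, mean = 70, sd = 10)        # between-student variation
before  <- ability + rnorm(25, sd = 3)
after   <- ability + 2 + rnorm(25, sd = 3)      # true improvement of 2 points
t.test(after, before, paired = TRUE)$p.value    # pairing removes the between-student variance
t.test(after, before, paired = FALSE)$p.value   # typically much larger; the effect is masked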
It would be bad -- very bad. If you ignore the pairing and use a two-sample t-test where you should have used the paired, chances are that you will not be able to detect the effect of interest. In thi |
52,687 | Paired t-test and two-sample t-test | I thought I'd post R code here to illustrate the point that I made in the comments to Placidia's answer above (which I like, but I wanted to go into more detail on this particular point.) This is the working that went into that comment.
The following code imposes random pairings of observations from two independent samples. If we repeat this imposition of random pairs 1000 times, running a paired t-test each time, you'll see that there is considerable fluctuation in the observed p-value. This is arising from fluctuation of the estimated std error and then t-statistic, of course, but I'm just illustrating the p-values here. But note that this problem has implications for the construction of confidence intervals too, rather than just for hypothesis tests per se.
I selected the random seed below to illustrate what happens when the unpaired t-test is "significant" (i.e. the "correct" analysis for these data, arising as they do from independent samples) but there is nothing canonical about that choice.
## Code written in R 2.15.2
set.seed(2464858)
## set a and b to be random selections from normal populations with different means
a <- rnorm(20, m = .5, s = 1)
b <- rnorm(20, m = 1, s = 1)
# "Appropriate" unpaired t-test
t.test(a, b, paired=F)
# should return: t = -2.2397, df = 37.458, p-value = 0.03113
## Now 1000 iterations for imposing "fake" pairs at random:
## with apologies for inefficient looping of code.
paired.ps <- array(NA, 1000)
for (i in 1:1000){
b2 <- sample(b, length(b))
t.result <- t.test(a, b2, paired=T)
paired.ps[i] <- t.result$p.value
}
## Summary and histogram of distribution of p-values returned.
summary(paired.ps)
## returns Min. 1st Qu. Median Mean 3rd Qu. Max.
## 0.00106 0.02533 0.03876 0.03936 0.05106 0.10000
hist(paired.ps)
Note that you can tweak mean/standard deviation in the 2 populations below to see the impact of these parameters on the fluctuation of p-values from an (inappropriately applied) paired t-test. | Paired t-test and two-sample t-test | I thought I'd post R code here to illustrate the point that I made in the comments to Placidia's answer above (which I like, but I wanted to go into more detail on this particular point.) This is the | Paired t-test and two-sample t-test
I thought I'd post R code here to illustrate the point that I made in the comments to Placidia's answer above (which I like, but I wanted to go into more detail on this particular point.) This is the working that went into that comment.
The following code imposes random pairings of observations from two independent samples. If we repeat this imposition of random pairs 1000 times, running a paired t-test each time, you'll see that there is considerable fluctuation in the observed p-value. This is arising from fluctuation of the estimated std error and then t-statistic, of course, but I'm just illustrating the p-values here. But note that this problem has implications for the construction of confidence intervals too, rather than just for hypothesis tests per se.
I selected the random seed below to illustrate what happens when the unpaired t-test is "significant" (i.e. the "correct" analysis for these data, arising as they do from independent samples) but there is nothing canonical about that choice.
## Code written in R 2.15.2
set.seed(2464858)
## set a and b to be random selections from normal populations with different means
a <- rnorm(20, m = .5, s = 1)
b <- rnorm(20, m = 1, s = 1)
# "Appropriate" unpaired t-test
t.test(a, b, paired=F)
# should return: t = -2.2397, df = 37.458, p-value = 0.03113
## Now 1000 iterations for imposing "fake" pairs at random:
## with apologies for inefficient looping of code.
paired.ps <- array(NA, 1000)
for (i in 1:1000){
b2 <- sample(b, length(b))
t.result <- t.test(a, b2, paired=T)
paired.ps[i] <- t.result$p.value
}
## Summary and histogram of distribution of p-values returned.
summary(paired.ps)
## returns Min. 1st Qu. Median Mean 3rd Qu. Max.
## 0.00106 0.02533 0.03876 0.03936 0.05106 0.10000
hist(paired.ps)
Note that you can tweak mean/standard deviation in the 2 populations below to see the impact of these parameters on the fluctuation of p-values from an (inappropriately applied) paired t-test. | Paired t-test and two-sample t-test
I thought I'd post R code here to illustrate the point that I made in the comments to Placidia's answer above (which I like, but I wanted to go into more detail on this particular point.) This is the |
52,688 | Who are some famous USSR/Russian Statistics/probability academic researchers? [closed] | It is hard to overstate the contribution of Andrey Kolmogorov, for putting probability theory on a rigorous mathematical footing during the Soviet era.
Also, Andrey Markov, for his contribution to stochastic processes and Markov chains in particular, though most of his work was in the pre-Soviet era.
Possibly worth mentioning, though he was from pre-Soviet Russia, Pafnuty Chebyshev for the Chebyshev inequality. | Who are some famous USSR/Russian Statistics/probability academic researchers? [closed] | It is hard to overstate the contribution of Andrey Kolmogorov, for putting probability theory on a rigorous mathematical footing during the Soviet era.
Also, Andrey Markov, for his contribution to | Who are some famous USSR/Russian Statistics/probability academic researchers? [closed]
It is hard to overstate the contribution of Andrey Kolmogorov, for putting probability theory on a rigorous mathematical footing during the Soviet era.
Also, Andrey Markov, for his contribution to stochastic processes and Markov chains in particular, though most of his work was in the pre-Soviet era.
Possibly worth mentioning, though he was from pre-Soviet Russia, Pafnuty Chebyshev for the Chebyshev inequality.
It is hard to overstate the contribution of Andrey Kolmogorov, for putting probability theory on a rigorous mathematical footing during the Soviet era.
Also, Andrey Markov, for his contribution to |
52,689 | Who are some famous USSR/Russian Statistics/probability academic researchers? [closed] | Vladimir Vapnik - important contributions to computational learning theory (e.g. VC theory) and an inventor of the support vector machine. | Who are some famous USSR/Russian Statistics/probability academic researchers? [closed] | Vladimir Vapnik - important contributions to computational learning theory (e.g. VC theory) and an inventor of the support vector machine. | Who are some famous USSR/Russian Statistics/probability academic researchers? [closed]
Vladimir Vapnik - important contributions to computational learning theory (e.g. VC theory) and an inventor of the support vector machine. | Who are some famous USSR/Russian Statistics/probability academic researchers? [closed]
Vladimir Vapnik - important contributions to computational learning theory (e.g. VC theory) and an inventor of the support vector machine. |
52,690 | Who are some famous USSR/Russian Statistics/probability academic researchers? [closed] | As @PSellaz mentions, Kolmogorov's contributions are very fundamental and include the standard axiomatization of the notion of probability (there are other ways to define a probability, but they aren't as well known as Kolmogorov's definition.) You might have also heard of Kolmogorov-Smirnov test and Kolmogorov's 0-1 Theorem. Due to his influence, there was a Soviet school of probability during the Soviet years. Some of Kolmogorov's students became well-known probability theorists. Yakov Sinai (currently in Princeton) is one of them. Yuri Prokhorov and Eugene Dynkin are two others. The American Mathematical Society has for many years translated the journal Theory of Probability and Mathematical Statistics from Russian into English. The list of editors and contributors should give you a good idea of who the current players are. To get an idea of how things were during Kolmogorov's time, look at Paul Halmos's book I Want to Be a Mathematician. | Who are some famous USSR/Russian Statistics/probability academic researchers? [closed] | As @PSellaz mentions, Kolmogorov's contributions are very fundamental and include the standard axiomatization of the notion of probability (there are other ways to define a probability, but they aren' | Who are some famous USSR/Russian Statistics/probability academic researchers? [closed]
As @PSellaz mentions, Kolmogorov's contributions are very fundamental and include the standard axiomatization of the notion of probability (there are other ways to define a probability, but they aren't as well known as Kolmogorov's definition.) You might have also heard of Kolmogorov-Smirnov test and Kolmogorov's 0-1 Theorem. Due to his influence, there was a Soviet school of probability during the Soviet years. Some of Kolmogorov's students became well-known probability theorists. Yakov Sinai (currently in Princeton) is one of them. Yuri Prokhorov and Eugene Dynkin are two others. The American Mathematical Society has for many years translated the journal Theory of Probability and Mathematical Statistics from Russian into English. The list of editors and contributors should give you a good idea of who the current players are. To get an idea of how things were during Kolmogorov's time, look at Paul Halmos's book I Want to Be a Mathematician. | Who are some famous USSR/Russian Statistics/probability academic researchers? [closed]
As @PSellaz mentions, Kolmogorov's contributions are very fundamental and include the standard axiomatization of the notion of probability (there are other ways to define a probability, but they aren' |
52,691 | Who are some famous USSR/Russian Statistics/probability academic researchers? [closed] | The USSR was strong in both probability and statistics, but after the breakdown of the USSR many of the most prominent scientists left Russia and worked for long periods in Western universities. However, some of them now work partially in Russia.
I do not claim to mention all of the prominent statisticians with deep connections to Russia; I just provide a list of people I have encountered. The order is alphabetical.
Golubev - now works on oracle inequalities for linear regression.
Shiryaev - one of the most widely known Russian scientists, a disciple of the Kolmogorov probability school. He has prominent results in almost all probability-related fields.
Spokoiny - he is now on the cutting edge of modern statistics and works mostly in Germany.
Tsybakov - a well-known statistician with an h-index as large as 28 according to Microsoft Academic Search. I attended his lectures on sparsity in statistics. Works in France. | Who are some famous USSR/Russian Statistics/probability academic researchers? [closed] | The USSR was strong in both probability and statistics, but after the breakdown of the USSR many of the most prominent scientists left Russia and worked for long periods in Western universities. However, | Who are some famous USSR/Russian Statistics/probability academic researchers? [closed]
The USSR was strong in both probability and statistics, but after the breakdown of the USSR many of the most prominent scientists left Russia and worked for long periods in Western universities. However, some of them now work partially in Russia.
I do not claim to mention all of the prominent statisticians with deep connections to Russia; I just provide a list of people I have encountered. The order is alphabetical.
Golubev - now works on oracle inequalities for linear regression.
Shiryaev - one of the most widely known Russian scientists, a disciple of the Kolmogorov probability school. He has prominent results in almost all probability-related fields.
Spokoiny - he is now on the cutting edge of modern statistics and works mostly in Germany.
Tsybakov - a well-known statistician with an h-index as large as 28 according to Microsoft Academic Search. I attended his lectures on sparsity in statistics. Works in France.
The USSR was strong in both probability and statistics, but after the breakdown of the USSR many of the most prominent scientists left Russia and worked for long periods in Western universities. However,
52,692 | Partitioning data for k-fold cross validation that will not have equal partitions | Usually the $k$-fold cross validation subsets have approximately equal size. It is just crucial that they don't overlap.
For example I just had a look at what WEKA does. Say that you have $N$ instances and $k$ folds, then
$$ r = N \mod k $$
(the remainder of $N$ divided by $k$) is the number of surplus records. The first $r$ partitions will have $\lfloor N/k \rfloor + 1$ records, the other ones just $\lfloor N/k \rfloor$ instead
Regarding your example:
$$N = 76 $$
$$k = 10 $$
$$ r = N \mod k = 6 $$
First $6$ partitions will have $ \lfloor N/k \rfloor + 1 = 7 + 1 = 8$ records, the other ones $ 7 $ instead. | Partitioning data for k-fold cross validation that will not have equal partitions | Usually the $k$-fold cross validation subsets have approximately equal size. It is just crucial that they don't overlap.
For example I just had a look at what WEKA does. Say that you have $N$ instanc | Partitioning data for k-fold cross validation that will not have equal partitions
Usually the $k$-fold cross validation subsets have approximately equal size. It is just crucial that they don't overlap.
For example I just had a look at what WEKA does. Say that you have $N$ instances and $k$ folds, then
$$ r = N \mod k $$
(the remainder of $N$ divided by $k$) is the number of surplus records. The first $r$ partitions will have $\lfloor N/k \rfloor + 1$ records, the other ones just $\lfloor N/k \rfloor$ instead
Regarding your example:
$$N = 76 $$
$$k = 10 $$
$$ r = N \mod k = 6 $$
First $6$ partitions will have $ \lfloor N/k \rfloor + 1 = 7 + 1 = 8$ records, the other ones $ 7 $ instead. | Partitioning data for k-fold cross validation that will not have equal partitions
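The same rule in a couple of lines of R (a sketch for the N = 76, k = 10 case from the question):
N <- 76; k <- 10
r <- N %% k                                        # surplus records
sizes <- rep(N %/% k, k) + c(rep(1, r), rep(0, k - r))
sizes        # 8 8 8 8 8 8 7 7 7 7
sum(sizes)   # 76, so nothing is left over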
Usually the $k$-fold cross validation subsets have approximately equal size. It is just crucial that they don't overlap.
For example I just had a look at what WEKA does. Say that you have $N$ instanc |
52,693 | Partitioning data for k-fold cross validation that will not have equal partitions | As Simone said, it's usually not essential for each fold to be exactly the same size. It'd be perfectly reasonable to have six folds containing eight records and four containing seven records each. That's probably a better solution than having nine folds of size seven and shoving the excess into the last one.
10-fold cross validation is usually a pretty reasonable choice, but you should be aware that there are a passel of related approaches (see this thread), and some of those might be better choices, depending on the particulars of your data set. For example, if your classes are very unbalanced, you may want to consider stratified cross-validation, which tries to distribute the classes evenly across the folds (e.g., if you have 16 examples of class A, they're ideally spread across all 10 folds, not lumped together into fold #1 and #2).
Some other schemes, like 5x2 CV, also have relatively nice properties if you're doing inference on the cross-validation results. | Partitioning data for k-fold cross validation that will not have equal partitions | As Simone said, it's usually not essential for each fold to be exactly the same size. It'd be perfectly reasonable to have six folds containing eight records and four containing seven records each. Th | Partitioning data for k-fold cross validation that will not have equal partitions
As Simone said, it's usually not essential for each fold to be exactly the same size. It'd be perfectly reasonable to have six folds containing eight records and four containing seven records each. That's probably a better solution than having nine folds of size seven and shoving the excess into the last one.
10-fold cross validation is usually a pretty reasonable choice, but you should be aware that there are a passel of related approaches (see this thread), and some of those might be better choices, depending on the particulars of your data set. For example, if your classes are very unbalanced, you may want to consider stratified cross-validation, which tries to distribute the classes evenly across the folds (e.g., if you have 16 examples of class A, they're ideally spread across all 10 folds, not lumped together into fold #1 and #2).
Some other schemes, like 5x2 CV, also have relatively nice properties if you're doing inference on the cross-validation results. | Partitioning data for k-fold cross validation that will not have equal partitions
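If you happen to work in R, one way to get stratified folds of near-equal size is caret's createFolds() (a sketch; the class vector y below is simulated as a stand-in for your own labels):
library(caret)
set.seed(3)
y <- factor(sample(c("A", "B"), 76, replace = TRUE, prob = c(0.2, 0.8)))
folds <- createFolds(y, k = 10)     # stratifies on the factor levels by default
sapply(folds, length)               # fold sizes of 7 or 8
table(y[folds[[1]]])                # class balance roughly preserved within a fold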
As Simone said, it's usually not essential for each fold to be exactly the same size. It'd be perfectly reasonable to have six folds containing eight records and four containing seven records each. Th |
52,694 | Partitioning data for k-fold cross validation that will not have equal partitions | You may want to use kfoldcv function to calculate sample sizes for the k groups.
kfoldcv(k, N, nlevel=NULL)
Arguments:
k number of groups.
N total sample size.
nlevel a vector of sample sizes for stratified sampling.
You will have to install the ipred package. | Partitioning data for k-fold cross validation that will not have equal partitions | You may want to use kfoldcv function to calculate sample sizes for the k groups.
kfoldcv(k, N, nlevel=NULL)
Arguments:
k number of groups.
N total sample size.
nlevel a vector of sample sizes | Partitioning data for k-fold cross validation that will not have equal partitions
You may want to use kfoldcv function to calculate sample sizes for the k groups.
kfoldcv(k, N, nlevel=NULL)
Arguments:
k number of groups.
N total sample size.
nlevel a vector of sample sizes for stratified sampling.
You will have to install the ipred package. | Partitioning data for k-fold cross validation that will not have equal partitions
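For example, with the sample size of 76 from the question (a quick sketch; the output is described from memory, so check it on your machine):
# install.packages("ipred")   # if not already installed
library(ipred)
kfoldcv(k = 10, N = 76)
# should return ten group sizes of 8 and 7 that sum to 76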
You may want to use kfoldcv function to calculate sample sizes for the k groups.
kfoldcv(k, N, nlevel=NULL)
Arguments:
k number of groups.
N total sample size.
nlevel a vector of sample sizes |
52,695 | Robust standard errors in econometrics | If the assumption of homoskedasticity is truly valid, the simple estimator of the VCE is more efficient than the robust sandwich version. That means it has smaller variance, so your estimates are less uncertain.
Of course, you can always do a heteroskedasticity test first and estimate accordingly. | Robust standard errors in econometrics | If the assumption of homoskedasticity is truly valid, the simple estimator of the VCE is more efficient than the robust sandwich version. That means it has smaller variance, so your estimates are less | Robust standard errors in econometrics
If the assumption of homoskedasticity is truly valid, the simple estimator of the VCE is more efficient than the robust sandwich version. That means it has smaller variance, so your estimates are less uncertain.
Of course, you can always do a heteroskedasticity test first and estimate accordingly. | Robust standard errors in econometrics
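In R that test-then-decide approach might look like this (a sketch using the lmtest and sandwich packages, with the built-in mtcars data standing in for your own):
library(lmtest)
library(sandwich)
fit <- lm(mpg ~ wt + hp, data = mtcars)            # stand-in regression
bptest(fit)                                        # Breusch-Pagan test for heteroskedasticity
coeftest(fit, vcov = vcovHC(fit, type = "HC1"))    # sandwich (robust) SEs, if the test suggests you need them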
If the assumption of homoskedasticity is truly valid, the simple estimator of the VCE is more efficient than the robust sandwich version. That means it has smaller variance, so your estimates are less |
52,696 | Robust standard errors in econometrics | There's also an interesting point raised by King & Roberts (2014): if your classical and robust standard errors diverge, your model suffers from misspecification that needs to be fixed. "Settling" for the misspecified model and just correcting the standard errors will lead to "biased estimators of all but a few quantities of interest." | Robust standard errors in econometrics | There's also an interesting point raised by King & Roberts (2014): if your classical and robust standard errors diverge, your model suffers from misspecification that needs to be fixed. "Settling" for | Robust standard errors in econometrics
There's also an interesting point raised by King & Roberts (2014): if your classical and robust standard errors diverge, your model suffers from misspecification that needs to be fixed. "Settling" for the misspecified model and just correcting the standard errors will lead to "biased estimators of all but a few quantities of interest."
There's also an interesting point raised by King & Roberts (2014): if your classical and robust standard errors diverge, your model suffers from misspecification that needs to be fixed. "Settling" for
52,697 | How do I compute sales forecasts? | I would have a look at Holt-Winters exponential smoothing. You can implement this fairly easily in R.
It picks up quite easily on trend and seasonality, and it will often (according to Chris Chatfield) give as good results as more complicated techniques such as ARIMA modelling.
Are you familiar with R?
You can download it here
There is a tutorial on Holt-Winters here
But I wouldn't get too bogged down in the equations (unless you want to! :) ).
Let me know if that's not enough detail.
EDIT: This tutorial is better I think http://a-little-book-of-r-for-time-series.readthedocs.org/en/latest/src/timeseries.html#forecasts-using-exponential-smoothing | How do I compute sales forecasts? | I would have a look at Holt-Winters exponential smoothing. You can implement this fairly easily in R.
It picks up quite easily on trend and seasonality, and it will often (according to Colin Chatfield | How do I compute sales forecasts?
I would have a look at Holt-Winters exponential smoothing. You can implement this fairly easily in R.
It picks up quite easily on trend and seasonality, and it will often (according to Chris Chatfield) give as good results as more complicated techniques such as ARIMA modelling.
Are you familiar with R?
You can download it here
There is a tutorial on Holt-Winters here
But I wouldn't get too bogged down in the equations (unless you want to! :) ).
Let me know if that's not enough detail.
EDIT: This tutorial is better I think http://a-little-book-of-r-for-time-series.readthedocs.org/en/latest/src/timeseries.html#forecasts-using-exponential-smoothing | How do I compute sales forecasts?
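A minimal sketch in base R, using the built-in AirPassengers series as a stand-in for your own monthly sales data:
sales <- AirPassengers                     # replace with ts(your_sales, frequency = 12)
fit <- HoltWinters(sales)                  # estimates level, trend and seasonal components
predict(fit, n.ahead = 12, prediction.interval = TRUE)   # forecasts for the next 12 months
plot(fit)                                  # fitted values laid over the observed series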
I would have a look at Holt-Winters exponential smoothing. You can implement this fairly easily in R.
It picks up quite easily on trend and seasonality, and it will often (according to Colin Chatfield |
52,698 | How do I compute sales forecasts? | Chris Chatfield's comments were made about 10-15 years ago and were based on his experiences with Box-Jenkins methods that did not include features commonly available today, as his software was inadequate compared to today's software.
The Holt-Winters model may be adequate if and only if:
1) you assume that the parameters of the model are invariant over time;
2) you assume that non-stationarity in the mean should be dealt with by a single trend, rather than by adjusting for multiple mean shifts or multiple trends in the data;
3) you assume that the seasonal structure is autoregressive rather than deterministic (seasonal pulses), instead of considering mixing and matching both deterministic and autoregressive structure;
4) you assume that any power transformation can be identified from the y series rather than from the error series, which is where the assumptions are placed;
5) you assume that the error variance of the model is homogeneous and GLM is unneeded;
6) you assume that there are no known support/explanatory variables like price, promotion, holidays/events etc.
In summary, the introductory texts teach what is easy to understand but is simply inadequate for dealing with real-world data that is plagued by outliers, seasonal pulses, multiple level shifts and/or local time trends.
What I am saying here is meant to uplift the group and highlight that academic references made many decades ago sometimes have a date stamp on them! | How do I compute sales forecasts? | Chris Chatfield's comments came about 10-15 years and were based on his experiences with Box-Jenkins methods that did not include features commonly available today as his software was inadequate as co | How do I compute sales forecasts?
Chris Chatfield's comments were made about 10-15 years ago and were based on his experiences with Box-Jenkins methods that did not include features commonly available today, as his software was inadequate compared to today's software.
The Holt-Winters model may be adequate if and only if:
1) you assume that the parameters of the model are invariant over time;
2) you assume that non-stationarity in the mean should be dealt with by a single trend, rather than by adjusting for multiple mean shifts or multiple trends in the data;
3) you assume that the seasonal structure is autoregressive rather than deterministic (seasonal pulses), instead of considering mixing and matching both deterministic and autoregressive structure;
4) you assume that any power transformation can be identified from the y series rather than from the error series, which is where the assumptions are placed;
5) you assume that the error variance of the model is homogeneous and GLM is unneeded;
6) you assume that there are no known support/explanatory variables like price, promotion, holidays/events etc.
In summary, the introductory texts teach what is easy to understand but is simply inadequate for dealing with real-world data that is plagued by outliers, seasonal pulses, multiple level shifts and/or local time trends.
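As one hedged sketch of what acting on those points can look like in R (my own illustration, not the poster's method), here is an ARIMA fit with a causal regressor on simulated sales that contain trend, seasonality, a promotion effect and a level shift; every number below is invented:
library(forecast)
set.seed(5)
tt <- 1:72
promo <- rbinom(72, 1, 0.15)                                   # known promotion indicator
sales <- ts(100 + 0.5 * tt + 10 * sin(2 * pi * tt / 12) +      # trend plus seasonality
            8 * promo + c(rep(0, 48), rep(15, 24)) +           # promotion effect plus level shift
            rnorm(72, sd = 3),
            frequency = 12)
fit <- auto.arima(sales, xreg = cbind(promo = promo))          # ARIMA with a causal regressor
forecast(fit, xreg = cbind(promo = rep(0, 12)))                # next year, assuming no promotions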
What I am saying here is meant to uplift the group and highlight that academic references made many decades ago sometimes have a date stamp on them!
Chris Chatfield's comments came about 10-15 years and were based on his experiences with Box-Jenkins methods that did not include features commonly available today as his software was inadequate as co |
52,699 | How do I compute sales forecasts? | Since this question cannot really be settled by a single best answer, because it deals with simulations and is largely speculation, I thought I'd also include what another friend told me to consider as well. He suggested looking at Monte Carlo simulations.
http://en.wikipedia.org/wiki/Monte_Carlo_method | How do I compute sales forecasts? | Since this question cannot possibly be answered with the best answer because it deals with simulations and it's only speculation, I thought I'd also include what another friend told me to consider as | How do I compute sales forecasts?
Since this question cannot really be settled by a single best answer, because it deals with simulations and is largely speculation, I thought I'd also include what another friend told me to consider as well. He suggested looking at Monte Carlo simulations.
http://en.wikipedia.org/wiki/Monte_Carlo_method | How do I compute sales forecasts?
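A toy Monte Carlo sketch of the idea (all distributions and numbers below are invented assumptions, not estimates):
set.seed(9)
n.sim   <- 10000
units   <- rpois(n.sim, lambda = 120)              # assumed distribution of units sold
price   <- rnorm(n.sim, mean = 10, sd = 0.5)       # assumed uncertainty in realised price
revenue <- units * price
quantile(revenue, c(0.05, 0.5, 0.95))              # central forecast plus an uncertainty band
hist(revenue, breaks = 50)                         # the whole simulated distribution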
Since this question cannot possibly be answered with the best answer because it deals with simulations and it's only speculation, I thought I'd also include what another friend told me to consider as |
52,700 | Comparing coefficients in logistic regression | What you need is a postestimation test, which tests for the significance of the difference between two regression models, one of which is nested, i.e., it results from the first regression model plus some restrictions. In your case, the restricted model imposes $\beta_1 = \beta_2$; the null hypothesis is $\beta_1 - \beta_2 = 0$, tested against the alternative $\beta_1 > \beta_2$.
Examples for such postestimation tests are the Wald test or the Likelihood ratio test. I do not know which statistical package you use, but here is a link that describes those tests in STATA.
Interaction terms can be used to test for difference in coefficients for specific groups in your sample. The interaction is then constructed with a dummy variable that identifies the group which might feature a different coefficient in the regression model. I do not believe that an interaction term between price and attitude could help to answer your question. | Comparing coefficients in logistic regression | What you need is a postestimation test, which tests for significance of difference between two regression models, one of which is nested, i.e., it results from the first regression model plus some res | Comparing coefficients in logistic regression
What you need is a postestimation test, which tests for the significance of the difference between two regression models, one of which is nested, i.e., it results from the first regression model plus some restrictions. In your case, the restricted model imposes $\beta_1 = \beta_2$; the null hypothesis is $\beta_1 - \beta_2 = 0$, tested against the alternative $\beta_1 > \beta_2$.
Examples for such postestimation tests are the Wald test or the Likelihood ratio test. I do not know which statistical package you use, but here is a link that describes those tests in STATA.
Interaction terms can be used to test for difference in coefficients for specific groups in your sample. The interaction is then constructed with a dummy variable that identifies the group which might feature a different coefficient in the regression model. I do not believe that an interaction term between price and attitude could help to answer your question. | Comparing coefficients in logistic regression
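In R, for instance, the Wald version of such a test can be run with car::linearHypothesis (a sketch on simulated data; 'price' and 'attitude' echo the predictors mentioned above, everything else is invented):
library(car)
set.seed(4)
d <- data.frame(price = rnorm(300), attitude = rnorm(300))
d$buy <- rbinom(300, 1, plogis(0.8 * d$price + 0.3 * d$attitude))
fit <- glm(buy ~ price + attitude, data = d, family = binomial)
linearHypothesis(fit, "price = attitude")   # Wald test of H0: beta_price - beta_attitude = 0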
What you need is a postestimation test, which tests for significance of difference between two regression models, one of which is nested, i.e., it results from the first regression model plus some res |