55,101 | Can we correctly identify all the non-zero coefficients in the linear regression model? | I do not have a good answer, but let me rephrase some of your thoughts and questions and give comments.
<...> what is the point of those $p$-values of each individual coefficient?
A $p$-value is a valid tool when assessing the significance of a single regressor individually. If you care about whether $X_i$ has a non-zero coefficient in population, you look at the $p$-value associated with the coefficient $\beta_i$ at $X_i$. If this is the only question you have, this is a satisfactory answer. That is the point of the individual $p$-values.
That's why we need to look at the $F$-test of the overall significance of the model, i.e., whether there is at least one coefficient that is significantly different from 0.
Yes, the $F$-statistic informs you whether all the regressors taken together have zero coefficients in population. But this is only a special case of what you are interested in, if I understand you correctly. So the $F$-statistic is not useful here -- unless it is low enough to conclude that at the given significance level there is not enough evidence to reject the null.
<...> if we find there are 12 coefficients whose $p$-values are below $\alpha$, can we conclude that these 12 coefficients are non-zero? At what confidence level? $1-\alpha$? or something else?
You can take any single coefficient individually and conclude at $1-\alpha$ confidence level that it is non-zero, but you cannot do that jointly for all 12 coefficients at $1-\alpha$ confidence level. If the significance tests for the twelve coefficients were independent, you could say that the confidence level is $(1-\alpha)^{12}$ which is (considerably) lower than $1-\alpha$.
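A quick numeric illustration of this multiplicity point, with hypothetical numbers (12 independent tests at $\alpha = 0.05$):

```python
alpha, k = 0.05, 12
print((1 - alpha) ** k)        # ~0.54: joint confidence if the 12 tests were independent
# a common remedy is a Bonferroni-style adjustment, testing each coefficient at alpha/k
print((1 - alpha / k) ** k)    # ~0.95: roughly restores the nominal joint level
```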
If we perform forward selection/ backward elimination/ step-wise selection, in the final set of predictors produced by these methods, at what confidence level can we conclude that those corresponding coefficients are non-zero?
This is a tough question. The $p$-values and the $F$-statistic in the final model are conditional on how that model was built, i.e. on the forward selection / backward elimination / step-wise selection mechanism. Hence, they cannot be used as is for making inference about whether the coefficients are zero in population; these values have to be adjusted. There may exist a procedure for that (because the issue has been known for a long time), but I cannot remember any relevant reference.
If we run $2^m$ regression models (where $m$ is the total number of predictors), will this help us to identify the set of non-zero coefficients at least theoretically? At what confidence level?
Recall that all the models with omitted variables (variables that have non-zero coefficients in population) will suffer from omitted-variable bias and will generally have "wrong" $p$-values etc., so the approach appears problematic.
Finally, note that the "nice assumptions"
Assume all the nice assumptions hold, e.g., $\epsilon$ is normal, we have a set of i.i.d. observations.
require $\epsilon$ -- rather than the observations $Y$ and $X$ -- to be i.i.d.
55,102 | Can we correctly identify all the non-zero coefficients in the linear regression model? | Running statistical algorithms to find the "correct" variables is almost futile. A relevant simulation is in Section 4.3 of my Regression Modeling Strategies course notes at http://biostat.mc.vanderbilt.edu/RmS#Materials
The simplest way to look at the difficulty of the task is to use the bootstrap to get confidence intervals for the importance rankings of variables competing in a multivariable regression. A demonstration of this is in Section 5.4 of the same course notes.
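The bootstrap-ranking idea can be sketched in a few lines; this is a rough illustration with simulated data, not the procedure from the course notes themselves: resample the data, refit the regression, rank the predictors by $|t|$, and look at how wide the rank intervals are.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, B = 200, 10, 500
X = rng.normal(size=(n, p))
beta = np.array([1.0, 0.5] + [0.0] * (p - 2))   # only the first two are truly non-zero
y = X @ beta + rng.normal(size=n)

def importance_ranks(X, y):
    # rank predictors by |t|-statistic from OLS; rank 1 = apparently most important
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    sigma2 = resid @ resid / (len(y) - X.shape[1])
    se = np.sqrt(sigma2 * np.diag(np.linalg.inv(X.T @ X)))
    return np.argsort(np.argsort(-np.abs(coef / se))) + 1

boot_ranks = np.empty((B, p), dtype=int)
for b in range(B):
    idx = rng.integers(0, n, n)                 # resample rows with replacement
    boot_ranks[b] = importance_ranks(X[idx], y[idx])

lo, hi = np.percentile(boot_ranks, [2.5, 97.5], axis=0)
for j in range(p):
    print(f"x{j}: 95% bootstrap interval for its importance rank = [{lo[j]:.0f}, {hi[j]:.0f}]")
```

The typically wide intervals, even in this easy simulated setting, are the point: the apparent ordering of "important" variables is highly unstable.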
55,103 | What optimization (maximization/minimization) methods exist for contours with lots of kinks? | The state-of-the-art for non-convex optimization in a complex, ill-conditioned and multi-modal landscape is Covariance Matrix Adaptation - Evolution Strategies, aka CMA-ES, which in various versions (such as BIPOP-CMA-ES) has scored first in several global optimization contests (see e.g. the Black-Box Optimization Benchmarking, BBOB 2009).
You can find a lot of information, code for several languages (C, C++, Java, Matlab, Octave, Python, Scilab), research papers and tutorials about CMA-ES on Nikolaus Hansen's website.
The relevant paper is:
Hansen, Nikolaus, Sibylle D. Müller, and Petros Koumoutsakos. "Reducing the time complexity of the derandomized evolution strategy with covariance matrix adaptation (CMA-ES)." Evolutionary computation 11.1 (2003): 1-18;
which you can find on the first author's website linked above. I also recommend his tutorial paper, which you can also find on his webpage.
The disadvantage is that CMA-ES requires a lot of function evaluations to be effective (e.g., usually not less than $10^3$-$10^4$, and even more); however, it is able to solve problems that are just impossible for other solvers.
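As a minimal sketch of how one might call CMA-ES from Python, assuming Hansen's `cma` package (`pip install cma`); the objective and its dimension are made up for illustration:

```python
import cma
import numpy as np

def rugged(x):
    # a kinked, multi-modal toy objective (not from the original question)
    return np.sum(np.abs(x)) + 5 * np.sum(np.abs(np.sin(3 * np.asarray(x))))

es = cma.CMAEvolutionStrategy(8 * [0.5], 0.3)   # 8-dimensional start, initial step size 0.3
while not es.stop():
    solutions = es.ask()                        # sample a population of candidate points
    es.tell(solutions, [rugged(s) for s in solutions])   # report their fitness values
print(es.result.xbest, es.result.fbest)         # best point and best value found
```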
A possible alternative, if your target function is costly to evaluate, is to use Bayesian Optimization (BO) assuming noise in the target function; this would smooth away kinks in the target distribution. See also this question and my reply there to get some references about BO or other optimizers.
55,104 | Interpreting accuracy results for an ARIMA model fit | Here are a few points.
Your ARIMA model is not "estimating a second time series", it is filtering it.
The function accuracy gives you multiple measures of accuracy of the model fit: mean error (ME), root mean squared error (RMSE), mean absolute error (MAE), mean percentage error (MPE), mean absolute percentage error (MAPE), mean absolute scaled error (MASE) and the first-order autocorrelation coefficient (ACF1). It is up to you to decide, based on the accuracy measures, whether you consider this a good fit or not. For example, mean percentage error of nearly -70% does not look good to me in general, but that may depend on what your series are and how much predictability you may realistically expect.
It is often a good idea to plot the original series and the fitted values, and also model residuals. You may occasionally learn more from the plot than from the few summarizing measures such as the ones given by the accuracy function.
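For illustration, the last point might look like this in Python with statsmodels and a synthetic series (the model and data from the question are not shown here):

```python
import numpy as np
import matplotlib.pyplot as plt
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(size=200))          # synthetic random-walk-like series
res = ARIMA(y, order=(1, 1, 1)).fit()

fig, (ax1, ax2) = plt.subplots(2, 1, figsize=(8, 6))
ax1.plot(y, label="observed")
ax1.plot(res.fittedvalues, label="fitted")   # one-step-ahead in-sample fit
ax1.legend()
ax2.plot(res.resid)                          # residuals should resemble white noise
ax2.set_title("residuals")
plt.tight_layout()
plt.show()
```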
55,105 | Interpreting accuracy results for an ARIMA model fit | Of all of these, the simplest to understand is MAPE (mean absolute percentage error). It takes the actual values fed into the model and the fitted values from the model, calculates the absolute difference between the two as a percentage of the actual value, and finally takes the mean of those percentages.
For example, if below are your actual data and the fitted values from the ARIMA model:
ActualData  FittedValue  AbsolutePercentageError
120         119.5        (abs(120-119.5)/120)*100 = 0.42%
128         126          (abs(128-126)/128)*100   = 1.56%

MAPE = (0.42% + 1.56%)/2 = 0.99%
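The same computation in a couple of lines of numpy, using the two hypothetical data points above:

```python
import numpy as np

actual = np.array([120.0, 128.0])
fitted = np.array([119.5, 126.0])
ape = np.abs(actual - fitted) / actual * 100   # per-observation absolute percentage error
print(ape, ape.mean())                         # [0.42 1.56]  ->  MAPE ~ 0.99%
```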
Similarly, you can do a quick Google search to find out the meaning of the other criteria. In my experience MAPE is the easiest one to explain to a layman, in case you want to explain model accuracy to a business user who is not statistically literate. Also, you should forecast for a holdout sample and do a similar exercise to see how well the model does for future values vs. the actuals.
Edit: Here is an interesting discussion of which accuracy metric to use in which scenario: Why use a certain measure of forecast error (e.g. MAD) as opposed to another (e.g. MSE)?
55,106 | Interpreting accuracy results for an ARIMA model fit | I was myself searching for how to interpret these indices.
Note that the best method remains to plot the predictions over the real data of the same period.
I found this information on various websites, including Wikipedia, stackexchange/stackoverflow, statisticshowto and other places; you may search some of these phrases (e.g. on Ecosia) to find their source.
ME: Mean Error -- an informal term that usually refers to the average of all the errors in a set. An "error" in this context is an uncertainty in a measurement, or the difference between the measured value and the true/correct value.

RMSE: Root Mean Squared Error.

MAE: Mean Absolute Error -- the MAE measures the average magnitude of the errors in a set of forecasts, without considering their direction. It measures accuracy for continuous variables. The RMSE will always be larger than or equal to the MAE; the greater the difference between them, the greater the variance in the individual errors in the sample. If RMSE = MAE, then all the errors are of the same magnitude. Both the MAE and RMSE can range from 0 to ∞. They are negatively-oriented scores: lower values are better.

MPE: Mean Percentage Error -- the computed average of the percentage errors by which forecasts of a model differ from actual values of the quantity being forecast.

MAPE: Mean Absolute Percentage Error -- the MAPE, as a percentage, only makes sense for values where divisions and ratios make sense (it does not make sense to calculate percentages of temperatures). MAPEs greater than 100% can occur, which may lead to negative "accuracy" that people may have a hard time understanding. An error close to 0% means increasing forecast accuracy; for example, a MAPE around 2.2% implies the model is about 97.8% accurate in predicting the next 15 observations.

MASE: Mean Absolute Scaled Error -- scale-invariant: the mean absolute scaled error is independent of the scale of the data, so it can be used to compare forecasts across data sets with different scales, and it is fine for scales that do not have a meaningful 0. It penalizes positive and negative forecast errors equally. Values greater than one indicate that in-sample one-step forecasts from the naïve method perform better than the forecast values under consideration. When comparing forecasting methods, the method with the lowest MASE is preferred.

ACF1: Autocorrelation of errors at lag 1 -- a measure of how much the current value is influenced by the previous values in a time series. Specifically, the autocorrelation function tells you the correlation between points separated by various time lags: how correlated points are with each other, based on how many time steps they are separated by. That is the gist of autocorrelation -- how correlated past data points are to future data points, for different values of the time separation. Typically, you'd expect the autocorrelation function to fall towards 0 as points become more separated (i.e. as the lag becomes large), because it is generally harder to forecast further into the future from a given set of data. This is not a rule, but it is typical. ACF(0) = 1 (all data are perfectly correlated with themselves); ACF(1) = 0.9 means the correlation between a point and the next point is 0.9; ACF(2) = 0.4 means the correlation between a point and a point two time steps ahead is 0.4; and so on.

Note that the RMSE is scale-dependent: an RMSE of 100 for a series whose mean is in the 1000s is better than an RMSE of 5 for a series in the 10s, so you cannot really use it to compare the forecasts of two differently scaled time series; scale-free measures such as MAPE, correlation, or a min-max error can be used instead.
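For reference, here is how most of these measures can be computed from the actual and fitted values (a plain numpy sketch; the MASE line uses a rough in-sample naïve scaling and is only indicative):

```python
import numpy as np

def accuracy_measures(actual, fitted):
    e = actual - fitted
    return {
        "ME":   e.mean(),
        "RMSE": np.sqrt((e ** 2).mean()),
        "MAE":  np.abs(e).mean(),
        "MPE":  100 * (e / actual).mean(),
        "MAPE": 100 * (np.abs(e) / np.abs(actual)).mean(),
        # scale MAE by the in-sample MAE of the naive (previous-value) forecast
        "MASE": np.abs(e).mean() / np.abs(np.diff(actual)).mean(),
        # lag-1 autocorrelation of the residuals
        "ACF1": np.corrcoef(e[:-1], e[1:])[0, 1],
    }
```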
..........
All your indicators (ME, RMSE, MAE, MPE, MAPE, MASE, ACF1, ...) are aggregations of two types of errors: a bias (you have the wrong model but an accurate fit) plus a variance (you have the right model but an inaccurate fit). And there is no statistical method to know whether you have a high bias and low variance or a high variance and low bias. So I suggest you make a plot and make an eyeball estimate to select the "best" one, "best" meaning the one with the least business consequences if you are wrong.
Generally, you want all of these values to be as small as possible.
55,107 | What is the definition of "death rate" in survival analysis? | A rate has a specific definition of $\frac{\# \mbox{events}}{\# \mbox{person-years}}$. A risk, on the other hand, refers to a particular individual's risk of experiencing an outcome of interest, and it is risk which is intrinsically related to the hazard (instantaneous risk). The language the question uses is consistent with this understanding. If I had to change it, I would say, "The death rate for smokers is twice that of non-smokers". They also failed to mention whether these were age-adjusted rates or not.
To understand this a little more deeply, relative rates and relative risks are estimated with fundamentally different models.
If you wanted to formalize a rate, you can think of this as estimating:
$$E \left( \frac{\# \mbox{events}}{\# \mbox{person-years}} \right) =\frac{\sum_i Pr(Y_i < t_i)} {\sum_i t_i} $$
($Y_i$ is the death time and $t_i$ is the observation time for the $i$-th individual, note the times are considered fixed and not random!)
You'll recognize the numerator is a bunch of CDFs, or 1-survival functions, and the relationship with survival functions and hazards is well known.
So if you took a ratio of rates:
$$ 2 = E \left( \frac{ \# \mbox{smoker deaths} \times \# \mbox{non-smoker person-years}}{\# \mbox{non-smoker deaths} \times \# \mbox{smoker person-years}} \right) = \frac{\sum_i t_i}{\sum_j t_j} \frac{\sum_j Pr(Y_j < t_j)}{\sum_i Pr(Y_i < t_i)}$$
$$ = \frac{\sum_i t_i}{\sum_j t_j} \frac{ n_j-\sum_jS(t_j)}{n_i-\sum_iS(t_i)}$$
Since it's self-study, you should probably do the algebra and solve the remainder of the equation!
55,108 | What is the definition of "death rate" in survival analysis? | By "death rate", they essentially mean hazard rate. Death rates are often reported as deaths per 100,000 subjects per year, making the death rate proportional to the hazard rate, so they are not necessarily exactly alike.
In regards to the book, part "i" of the problem is quite easy as you said. Using what they ask you to prove in part "ii" of the problem, we can actually show that proportional hazards imply proportional death rates in case you don't want to just blindly take my word. Which you shouldn't.
Part "ii" says "using the relation between the hazard rates, show that the survival probability of smokers is the square of the survival probability for non-smokers" (paraphrasing).
This tells us that
$S_s(t) = S_{ns}(t)^2$
Since hazard rates are related to survival curves by
$S(t) = \exp\left(-\int_0^t h(u)\,\mathrm du\right)$
This implies that
$\exp\left(-\int_0^t h_s(u)\,\mathrm du\right) = \left(\exp\left(-\int_0^t h_{ns}(u)\,\mathrm du\right)\right)^2$
Further implying that
$\exp\left(-\int_0^t h_s(u)\,\mathrm du\right) = \exp\left(-2\int_0^t h_{ns}(u)\,\mathrm du\right)$
Which finally leaves us with $h_s(t) = 2 h_{ns}(t)$ (provided some assumptions about the smoothness of the hazard function).
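For what it's worth, the same implication can be checked symbolically in a couple of lines of sympy (here $S$ stands for the non-smokers' survival function, and the hazard is recovered as $h(t) = -\frac{d}{dt}\log S(t)$):

```python
import sympy as sp

t = sp.symbols("t", positive=True)
S = sp.Function("S")(t)                 # non-smoker survival function S_ns(t)

def hazard(surv):
    return -sp.diff(sp.log(surv), t)    # h(t) = -d/dt log S(t)

print(sp.simplify(hazard(S ** 2) / hazard(S)))   # -> 2, i.e. h_s(t) = 2 h_ns(t)
```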
Since $h_s(t) = 2 h_{ns}(t)$, note that this has the exact same relation as described with the "death rate". Technically, this doesn't prove that "death rate" is the same as hazard rate but it implies that if death rates are proportional (whatever they are), then hazard rates are proportional.
By "death rate", they essentially mean hazard rate. Death rates are often reported as deaths per 100,000 subjects per year, making the death rate proportional to the hazard rate, so they are not necessarily exactly alike.
In regards to the book, part "i" of the problem is quite easy as you said. Using what they ask you to prove in part "ii" of the problem, we can actually show that proportional hazards imply proportional death rates in case you don't want to just blindly take my word. Which you shouldn't.
Part "ii" says "using the relation between the hazard rates, show that the survival probability of smokers is the square of the survival probability for non-smokers" (paraphrasing).
This tells us that
$S_s(t) = S_{ns}(t)^2$
Since hazard rates are related to survival curves by
$S(t) = e^{-\int h(t)}$
This implies that
$e^{-\int h_s(t)}$ = $(e^{-\int h_{ns}(t)})^2$
Further implying that
$e^{-\int h_s(t)}$ = $(e^{-2\int h_{ns}(t)})$
Which finally leaves us with $h_s(t) = 2 h_{ns}(t)$ (provided some assumptions about the smoothness of the hazard function).
Since $h_s(t) = 2 h_{ns}(t)$, note that this has the exact same relation as described with the "death rate". Technically, this doesn't prove that "death rate" is the same as hazard rate but it implies that if death rates are proportional (whatever they are), then hazard rates are proportional. | What is the definition of "death rate" in survival analysis?
By "death rate", they essentially mean hazard rate. Death rates are often reported as deaths per 100,000 subjects per year, making the death rate proportional to the hazard rate, so they are not neces |
55,109 | Definition of residuals versus prediction errors? | I find your post quite confusing, especially the part about the statistic and the example; how are they relevant here? Instead, let me provide my own understanding of [model] residuals and prediction errors.
A stochastic model includes an error term to allow the relationship between the variables to be stochastic (have some randomness to it) rather than deterministic (fixed, perfect). For example,
$$ y = \beta_0 + \beta_1 x + \varepsilon $$
implies a linear relationship between $y$ and $x$, up to some error $\varepsilon$. When the model is estimated, one gets the realized values of the model errors which are called [model] residuals (denoted $\hat\varepsilon$ or $e$):
$$ y = \hat\beta_0 + \hat\beta_1 x + \hat\varepsilon. $$
Now consider another expression which defines fitted values,
$$ \hat y := \hat\beta_0 + \hat\beta_1 x. $$
Together the above two expressions yield another expression for the [model] residuals; they are the difference between the actual and the fitted values of the dependent variable:
$$ \hat\varepsilon = y - \hat y. $$
Meanwhile, prediction errors arise in the context of forecasting. A prediction error is the difference between the realized value and the predicted value:
$$ e^{fcst} := y - y^{fcst}. $$
(Since the prediction $y^{fcst}$ is produced without (or before) having observed the realized value $y$, the prediction errors are generally not zero.)
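A small numerical illustration of the distinction (simulated data, with the model estimated on the first 100 observations and used to predict the remaining 20):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=120)
y = 1.0 + 2.0 * x + rng.normal(size=120)

b1, b0 = np.polyfit(x[:100], y[:100], 1)       # OLS fit on the estimation sample

residuals   = y[:100] - (b0 + b1 * x[:100])    # actual minus fitted (in-sample)
pred_errors = y[100:] - (b0 + b1 * x[100:])    # actual minus predicted (out-of-sample)

print(residuals.mean())    # 0 up to rounding, by construction of OLS
print(pred_errors.mean())  # generally not 0
```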
Now to respond to your Wikipedia quote, let us look at it more closely:
<...> the capital letter Y is used in specifying the model, while lower-case y in the definition of the residuals; that is because the former are hypothesized random variables and the latter are actual data.
It only says that Y are hypothesized random variables and y are actual data. (If I had used this notation, I should have had capital Y in my first equation but lower-case y in my second equation and elsewhere.) The cited definition of residuals is five lines above the quoted text; there indeed is a formula including lower-case y and defining [model] residuals. If I interpret you correctly, you seem to have understood that y is called the residuals -- which it is not, if you read the Wikipedia quote carefully.
55,110 | increase in number of filters in convolutional neural nets | Here is an example of what a convolutional network might be looking for, layer by layer.
Layer 1: simple edges at various orientations
Layer 2: combinations of simple edges that form more complex edges and textures such as rounded edges or multiple edges touching
Layer 3: combinations of complex edges that form parts of objects, such as circles or grid like patterns
Layer 4: combinations of object parts that form whole objects, such as faces, cars, or trees
Each successive layer has a much wider scope of different things it can look for.
The first layer doesn't have a wide variety of things to look for because it is so general. There are vertical edges, horizontal edges, diagonal edges, and some angles in between. But the final layer has a huge variety of things it could be looking for, so a larger number of filters is beneficial.
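A hypothetical sketch of this pattern in PyTorch (the filter counts are made up, not taken from any particular architecture): spatial resolution shrinks while the number of filters grows, so later layers can represent a larger variety of higher-level patterns.

```python
import torch.nn as nn

net = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1),   nn.ReLU(), nn.MaxPool2d(2),   # simple edges
    nn.Conv2d(16, 32, 3, padding=1),  nn.ReLU(), nn.MaxPool2d(2),   # textures / complex edges
    nn.Conv2d(32, 64, 3, padding=1),  nn.ReLU(), nn.MaxPool2d(2),   # object parts
    nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),                    # whole-object features
)
```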
This image may also make it clearer. A technique is used to visualize the learned filters of the first, 2nd, and 3rd layers.
Layer 1: simple edges at various orientations
Layer 2: combinations of simple edges that form more complex edge | increase in number of filters in convolutional neural nets
Here is an example of what a convolutional network might be looking for, layer by layer.
Layer 1: simple edges at various orientations
Layer 2: combinations of simple edges that form more complex edges and textures such as rounded edges or multiple edges touching
Layer 3: combinations of complex edges that form parts of objects, such as circles or grid like patterns
Layer 4: combinations of object parts that form whole objects, such as faces, cars, or trees
Each layer has a much wider scope of different things it can look for.
The first layer doesn't have a wide variety of things to look for because it is so general. There are vertical edges, horizontal edges, diagonal edges, and some angles in between. But the final layer has a huge variety of things it could be looking for, so a larger number of filters is beneficial.
This image may also make it clearer. A technique is used to visualize the learned filters of the first, 2nd, and 3rd layers. | increase in number of filters in convolutional neural nets
Here is an example of what a convolutional network might be looking for, layer by layer.
Layer 1: simple edges at various orientations
Layer 2: combinations of simple edges that form more complex edge |
55,111 | Understanding probabilistic neural networks | PNNs are easy to understand when taking an example. So let's say I want to classify points in 2D with a PNN, and my training points are the blue and red dots in the figure:
I can take as base function a Gaussian of variance, say, 0.1. There's no training in a PNN once the variance $\sigma$ of the Gaussian is fixed, so we'll now get to the core of your questions with this fixed $\sigma$ (you could of course try to find an optimal $\sigma$...).
So I want to classify this green cross ($x=1.2$, $y=0.8$). What the PNN does is the following:
the input layer is the feature vector ($x=1.2$, $y=0.8$);
the hidden layer is composed of six nodes (corresponding to the six training dots): each node evaluates the Gaussian centered at its training point, e.g. if the first node is the blue one at ($x_0=-1.2$, $y_0=-1.1$), then this node evaluates $G_0(x,y)\approx \exp(-((x-x_0)^2 + (y-y_0)^2)/2\sigma^2)$. In the end, each node outputs the value of its Gaussian for the green cross (often you threshold when the value is too low).
the summation layer is composed of $\#(\text{labels})$ nodes. Each real-valued output of step 2 is sent to the corresponding node (the three red nodes send their values to the red summation node and the three blue nodes send their values to the blue summation node). Each label node sums the Gaussian values it received.
the last node is just a max node that takes all the outputs of the summation nodes and outputs the max, i.e. the label whose node had the highest score.
Here you can see that every blue point will have a Gaussian value (with $\sigma=0.1$) essentially equal to 0 at the green cross, whereas the red ones will have much higher values. Then the sum of all the blue Gaussians will be 0 (or almost) and the red one high, so the max is the red label: the green cross is categorized as red.
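For concreteness, here is a small numpy sketch of exactly these steps; the training coordinates are hypothetical, since the figure is not reproduced here:

```python
import numpy as np

X_train = np.array([[-1.2, -1.1], [-1.0, -0.6], [-0.7, -1.3],   # blue
                    [ 1.0,  0.9], [ 1.3,  0.5], [ 0.8,  1.2]])  # red
y_train = np.array([0, 0, 0, 1, 1, 1])                          # 0 = blue, 1 = red
sigma = 0.1

def pnn_classify(x):
    g = np.exp(-((X_train - x) ** 2).sum(axis=1) / (2 * sigma ** 2))  # hidden layer
    scores = np.array([g[y_train == c].sum() for c in (0, 1)])        # summation layer
    return int(np.argmax(scores))                                     # output (max) node

print(pnn_classify(np.array([1.2, 0.8])))   # -> 1: the green cross is classified as red
```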
As you pointed out, the main task is to find this $\sigma$. There are a lot of techniques and you can find a lot of training strategies on the internet. You have to take a $\sigma$ small enough to capture the locality, but not too small, otherwise you overfit. You can imagine cross-validating to take the optimal one inside a grid! (Note also that you could assign a different $\sigma_i$ to each label, e.g.)
Here's a video which is well done!
I can take as base function | Understanding probabilistic neural networks
PNN are easy to understand when taking an example. So let's say I want to classify with a PNN points in 2D and my training points are the blue and red dots in the figure:
I can take as base function a gaussian of variance, say 0.1. There's no training in a PNN as soon as the variance $\sigma$ of the Gaussian is fixed, so we'll now get to the core of your questions with this fixed $\sigma$ (you could of course try to find an optimal $\sigma$...).
So I want to classify this green cross ($x=1.2$, y=$0.8$). What PNN does is the following:
the input layer is the feature vector ($x=1.2$, y=$0.8$);
the hidden layer is composed of six nodes (corresponding to the six training dots) : each node evaluates the gaussian centered at its plot, e.g. if the first node is the blue one at ($x_0=-1.2$, $y_0=-1.1$), then this node evaluates $G_0(x,y)\approx \exp(-((x-x_0)^2 + (y-y_0)^2)/2\sigma^2)$. At the end each of the nodes output the value of his gaussian for the green cross (often you threshold when the value is too low).
the summation layer is composed of $\#(labels)$ nodes. Each real value output of step 2 is sent to the correponding node (the three red node send their values to the summation red node and the three blue nodes send their values to the summation blue node). Each of the label node sums the guassian values they received.
the last node is just a max node that takes all the outputs of the summation nodes and outputs the max, e.g. the label node that had the highest score.
Here you can see that every blue point will have a gaussian (with variance $\sigma=0.1$) equal to 0 whereas the red ones will have quite high values. Then the summation of all the blue gaussians will be 0 (or almost) and the red one high, so the max is red label : the green cross is categorized as red.
As you pointed out, the main task is to find this $\sigma$. There are a lot of techniques and you can find a lot of training strategies on the internet. You have to take a $\sigma$ small enough to capture the locality and not to small otherwise you overfit. You can imagine cross-validating to take the optimal one inside a grid! (Note also that you could assign a different $\sigma_i$ to each label e.g.).
Here's a video which is well done! | Understanding probabilistic neural networks
PNN are easy to understand when taking an example. So let's say I want to classify with a PNN points in 2D and my training points are the blue and red dots in the figure:
I can take as base function |
55,112 | How to interpret Mann-Whitney's statistical significance if median is equal? | The Mann-Whitney is not a test of medians. At best, the Mann-Whitney test can only be claimed to be a test of differences in mean rank between two populations' pooled ranking.
You can easily calculate medians empirically and perform a basic Wald test if you need a test of medians.
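As a hedged illustration of that suggestion, one could use a Wald-type statistic with a bootstrap standard error for the median difference (note this can degenerate when the data are heavily tied, as they appear to be here):

```python
import numpy as np
from scipy.stats import norm

def median_diff_wald(x, y, n_boot=2000, seed=0):
    rng = np.random.default_rng(seed)
    diff = np.median(x) - np.median(y)
    boot = np.array([np.median(rng.choice(x, len(x))) - np.median(rng.choice(y, len(y)))
                     for _ in range(n_boot)])    # bootstrap the median difference
    z = diff / boot.std(ddof=1)
    return diff, 2 * norm.sf(abs(z))             # Wald statistic's two-sided p-value
```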
The Mann-Whitney test happens to be a reasonably powerful test of medians only when the underlying distributions are symmetric, an assumption that is clearly violated in these data. However, if a distribution is symmetric, the median also happens to be the mean (when variance is finite). This means the Mann-Whitney and the t-test are testing the same hypothesis in symmetric distributions.
55,113 | How to interpret Mann-Whitney's statistical significance if median is equal? | The Mann-Whitney U test is a rank-sum test, hence it doesn't really care about distribution properties such as the mean, median, etc.; it only cares that one of your variables tends to have higher values than the other, hence the former has a higher sum of ranks. Nevertheless, if you look closely at this table:
Variable  | Mean  | StDev  | Minimum | Q1   | Median | Q3   | Maximum
positives | 4.13  | 13.17  | 1.00    | 1.00 | 1.00   | 1.00 | 116.00
negatives | 6.851 | 20.503 | 0.000   | 1.00 | 1.00   | 5.00 | 434.000
you might notice that both variables have equal 25th percentiles (Q1) and medians, while the second one has a higher 75th percentile (Q3). That supports the observation that the second distribution is likely to have a higher rank-sum.
Edit
Inspired by AdamO's comment, I did a little research on the U-test. According to this published response in the Arthritis and Rheumatism journal (impact factor > 7), the test can only compare two distributions of similar shape. This assumption is clearly violated here.
The Mann-Whitney U test (2) and the Kruskal-Wallis test (3) are nonparametric methods designed to detect whether 2 or more samples come from the same distribution or to test whether medians between comparison groups are different, under the assumption that the shapes of the underlying distributions are the same. Thus, these nonparametric tests are commonly used to determine whether medians, not means, are different between comparison groups. Although these tests are often used to compare means when the normality assumption is violated, strictly speaking, interpreting the results of nonparametric tests for mean comparison is inaccurate. When the distribution of a variable is skewed (for example, as in the values for C-reactive protein that van der Helm-van Mil et al present in Table 2 of their article [1]), only assertions on whether medians, and not means, were different between groups should be made using nonparametric methods.
Given that your sample is not small, I would recommend trying a permutation test to answer your question. Here is a good discussion of their limitations.
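A minimal sketch of such a permutation test, here for a difference in means (any other statistic can be plugged in the same way):

```python
import numpy as np

def permutation_test(x, y, n_perm=10000, seed=0):
    rng = np.random.default_rng(seed)
    observed = x.mean() - y.mean()
    pooled = np.concatenate([x, y])
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)                     # exchange group labels under H0
        hits += abs(pooled[:len(x)].mean() - pooled[len(x):].mean()) >= abs(observed)
    return observed, (hits + 1) / (n_perm + 1)  # two-sided p-value
```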
55,114 | Expected value of a function including the cumulative normal distribution | As $C$ varies from $-1$ to $+1$, the function
$\Phi\left(\frac{C-\mu}{\sigma}\right)$ is a slowly increasing
function whose value increases from varies
$\Phi\left(\frac{-1-\mu}{\sigma}\right)$ to
$\Phi\left(\frac{1-\mu}{\sigma}\right)$.
A more generic question is:
What is $E[\Phi(X)]$ when $X$ is uniformly distributed on (a,b)?
The answer can be obtained via integration by parts and use of
the result $\frac{\mathrm d}{\mathrm dx}\phi(x) = -x\cdot \phi(x)$
where $\phi(x)$ is the standard normal density function.
We have that
\begin{align}
E[\Phi(X)] &= \frac{1}{b-a}\int_a^b \Phi(x)\,\mathrm dx\\
&= \frac{1}{b-a}\left[\Phi(x)\cdot x\,\Big\vert_a^b
- \int_a^b x\cdot\phi(x) \,\mathrm dx\right]\\
&= \frac{b\Phi(b) - a\Phi(a)}{b-a}
+ \left.\frac{\phi(x)}{b-a}\right\vert_a^b\\
&= \frac{b\Phi(b) - a\Phi(a) + \phi(b)-\phi(a)}{b-a}.
\end{align}
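The closed form is easy to check numerically, for instance with scipy and arbitrary endpoints $a=-1$, $b=1$:

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

a, b = -1.0, 1.0   # X ~ Uniform(a, b)
closed_form = (b * norm.cdf(b) - a * norm.cdf(a) + norm.pdf(b) - norm.pdf(a)) / (b - a)
numeric, _ = quad(lambda x: norm.cdf(x) / (b - a), a, b)
print(closed_form, numeric)   # the two values agree
```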
Similarly,
\begin{align}
E\left[\Phi^2(X)\right] &= \frac{1}{b-a}\int_a^b \Phi^2(x)\,\mathrm dx\\
&= \frac{1}{b-a}\left[\Phi^2(x)\cdot x\,\Big\vert_a^b
- \int_a^b 2\Phi(x)\phi(x)\cdot x \,\mathrm dx\right]\\
&= \frac{b\Phi^2(b) - a\Phi^2(a)}{b-a}
- \frac{1}{b-a}\int_a^b 2\Phi(x)\phi(x)\cdot x \,\mathrm dx\\
&= \frac{b\Phi^2(b) - a\Phi^2(a)}{b-a}
- \frac{2}{b-a}\left[-\Phi(x)\phi(x)\,\Big\vert_a^b
+ \int_a^b \phi^2(x)\,\mathrm dx\right]\\
&= \frac{\left(b\Phi^2(b) - a\Phi^2(a)\right)+2\left(\Phi(b)\phi(b)\right)-2\left(\Phi(a)\phi(a)\right)}{b-a}\\
&\qquad\qquad- \frac{1}{(b-a)\sqrt{\pi}}\int_a^b \frac{e^{-x^2}}{\sqrt{\pi}}\, \mathrm dx
\end{align}
Now, $\displaystyle \frac{e^{-x^2}}{\sqrt{\pi}}$
is the density of a normal random variable $Z$ with mean $0$ and
variance $\frac 12$, and so that last integral
is just
$P\{a < Z < b\} = \Phi\left(\sqrt{2}b\right)-\Phi\left(\sqrt{2}a\right)$.
I will leave to you the task of working out the details, then
plugging in $\frac{\pm 1 - \mu}{\sigma}$ for $b$ and $a$ in the above formulas and finally figuring out
$E[Y]$.
$\Phi\left(\frac{C-\mu}{\sigma}\right)$ is a slowly increasing
function whose value increases from varies
$\Phi\left(\frac{-1-\mu}{\sigma}\right)$ to
$ | Expected value of a function including the cumulative normal distribution
As $C$ varies from $-1$ to $+1$, the function
$\Phi\left(\frac{C-\mu}{\sigma}\right)$ is a slowly increasing
function whose value increases from varies
$\Phi\left(\frac{-1-\mu}{\sigma}\right)$ to
$\Phi\left(\frac{1-\mu}{\sigma}\right)$.
A more generic question is:
What is $E[\Phi(X)]$ when $X$ is uniformly distributed on (a,b)?
The answer can be obtained via integration by parts and use of
the result $\frac{\mathrm d}{\mathrm dx}\phi(x) = -x\cdot \phi(x)$
where $\phi(x)$ is the standard normal density function.
We have that
\begin{align}
E[\Phi(X)] &= \frac{1}{b-a}\int_a^b \Phi(x)\,\mathrm dx\\
&= \left.\left.\left.\frac{1}{b-a}\right[\Phi(x)\cdot x\right\vert_a^b
- \int_a^b \phi(x)\cdot x \,\mathrm dx\right]\\
&= \left. \frac{b\Phi(b) - a\Phi(a)}{b-a}
+ \frac{\phi(x)}{b-a}\right\vert_a^b\\
&= \frac{b\Phi(b) - a\Phi(a) + \phi(b)-\phi(a)}{b-a}.
\end{align}
Similarly,
\begin{align}
E\left[\Phi^2(X)\right] &= \frac{1}{b-a}\int_a^b \Phi^2(x)\,\mathrm dx\\
&= \left.\left.\left.\frac{1}{b-a}\right[\Phi^2(x)\cdot x\right\vert_a^b
- \int_a^b 2\Phi(x)\phi(x)\cdot x \,\mathrm dx\right]\\
&= \frac{b\Phi^2(b) - a\Phi^2(a)}{b-a}
- \frac{1}{b-a}\int_a^b 2\Phi(x)\phi(x)\cdot x \,\mathrm dx\\
&= \frac{b\Phi^2(b) - a\Phi^2(a)}{b-a}
- \left.\left.\left.\frac{2}{b-a}\right[-\Phi(x)\phi(x)\right\vert_a^b
+ \int_a^b \phi^2(x)\,\mathrm dx\right]\\
&= \frac{\left(b\Phi^2(b) - a\Phi^2(a)\right)+2\left(\Phi(b)\phi(b)
\right)-2\left(\Phi(a)\phi(a)\right)}{b-a}\\
&\qquad\qquad- \frac{1}{(b-a)\sqrt{\pi}}\int_a^b \frac{e^{-x^2}}{\sqrt{\pi}}\, \mathrm dx
\end{align}
Now, $\displaystyle \frac{e^{-x^2}}{\sqrt{\pi}}$
is the density of a normal random variable $Z$ with mean $0$ and
variance $\frac 12$, and so that last integral
is just
$P\{a < Z < b\} = \Phi\left(\sqrt{2}b\right)-\Phi\left(\sqrt{2}a\right)$.
I will leave to you the task of working out the details, then
plugging in $\frac{\pm 1 - \mu}{\sigma}$ for $b$ and $a$ in the above formulas and finally figuring out
$E[Y]$. | Expected value of a function including the cumulative normal distribution
As $C$ varies from $-1$ to $+1$, the function
$\Phi\left(\frac{C-\mu}{\sigma}\right)$ is a slowly increasing
function whose value increases from
$\Phi\left(\frac{-1-\mu}{\sigma}\right)$ to
$ |
55,115 | Expected value of a function including the cumulative normal distribution | Random variable $Y$ can be expressed as:
where Erf[z] denotes the error function $\frac{2}{\sqrt{\pi }}\int _0^z e^{-t^2}d t$, and where $X \sim \text{Uniform}(-1,1)$ with pdf $f(x)$:
Then, $E[Y]$ can be solved analytically as:
where I am using the Expect function from the mathStatica add-on to Mathematica to do the nitty-gritties.
While the result is not necessarily pretty, it is exact and symbolic (which is what the OP was seeking), and one can differentiate it, or plot it etc.
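Because the symbolic expression and the plot referred to in this answer appear to have been images in the original post and are not reproduced here, the following R sketch (my own addition, assuming as in the previous answer that $Y = \Phi\left(\frac{X-\mu}{\sigma}\right)$ with $X \sim \text{Uniform}(-1,1)$) gives a rough numerical stand-in for the plot described next:
# Monte Carlo approximation of E[Y] as sigma increases, for mu = 0, 1 and 2.
EY <- function(s, mu) mean(pnorm((runif(1e5, -1, 1) - mu) / s))
sigma <- seq(0.1, 5, by = 0.1)
plot(sigma, sapply(sigma, EY, mu = 0), type = "l", ylim = c(0, 1), xlab = "sigma", ylab = "E[Y]")
lines(sigma, sapply(sigma, EY, mu = 1), col = "orange")
lines(sigma, sapply(sigma, EY, mu = 2), col = "green")
For $\mu = 0$ the curve stays at $1/2$ by symmetry, while for $\mu = 1, 2$ it starts lower and rises towards $1/2$ as $\sigma$ grows.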
Here is a plot of the solution $E[Y]$, as $\sigma$ increases, when $\mu = 0$ (blue), $\mu = 1$ (orange), and $\mu = 2$ (green) | Expected value of a function including the cumulative normal distribution | Random variable $Y$ can be expressed as:
where Erf[z] denotes the error function $\frac{2}{\sqrt{\pi }}\int _0^z e^{-t^2}d t$, and where $X \sim \text{Uniform}(-1,1)$ with pdf $f(x)$:
Then, $E[Y]$ c | Expected value of a function including the cumulative normal distribution
Random variable $Y$ can be expressed as:
where Erf[z] denotes the error function $\frac{2}{\sqrt{\pi }}\int _0^z e^{-t^2}d t$, and where $X \sim \text{Uniform}(-1,1)$ with pdf $f(x)$:
Then, $E[Y]$ can be solved analytically as:
where I am using the Expect function from the mathStatica add-on to Mathematica to do the nitty-gritties.
While the result is not necessarily pretty, it is exact and symbolic (which is what the OP was seeking), and one can differentiate it, or plot it etc.
Here is a plot of the solution $E[Y]$, as $\sigma$ increases, when $\mu = 0$ (blue), $\mu = 1$ (orange), and $\mu = 2$ (green) | Expected value of a function including the cumulative normal distribution
Random variable $Y$ can be expressed as:
where Erf[z] denotes the error function $\frac{2}{\sqrt{\pi }}\int _0^z e^{-t^2}d t$, and where $X \sim \text{Uniform}(-1,1)$ with pdf $f(x)$:
Then, $E[Y]$ c |
55,116 | What can we conclude from a Bayesian credible interval? | Confidence intervals can be used equivalently to hypothesis tests, but highest density intervals are not the same as confidence intervals. Let's start with what $p$-value is by quoting Cohen (1994)
What we want to know is "Given this data what is the probability that
$H_0$ is true?" But as most of us know, what it $p$-value tells us
is "Given that $H_0$ is true, what is the probability of this (or more
extreme) data?" These are not the same (...)
So $p$-value tells us what is the $P(D|H_0)$. In Bayesian approach we want to learn directly (rather than indirectly) about probability of some parameter given the data that we have $P(\theta|D)$ by employing the Bayes theorem and using priors for $\theta$
$$ \underbrace{P(\theta|D)}_\text{posterior} \propto \underbrace{P(D|\theta)}_\text{likelihood} \times \underbrace{P(\theta)}_\text{prior} $$
So if the 95% confidence interval does not include the null value(s), then you can reject your null hypothesis: your data are more extreme than you would expect given that hypothesis. On the other hand, if in the Bayesian setting your 95% highest density interval does not include the null value(s), then you can conclude that such value(s) lie outside the interval that contains 95% of the posterior probability.
Kruschke (2010) can be quoted for comparison of both approaches
The primary goal of NHST [Null Hypothesis Significance Testing] is determining whether a particular "null"
value of a parameter can be rejected. One can also ask what range of
parameter values would not be rejected. This range of non-rejectable
parameter values is called the confidence interval.
(...) The
confidence interval tells us something about the probability of
extreme unobserved data values that we might have gotten if we
repeated the experiment (...)
A concept in Bayesian inference, that is somewhat analogous to the
NHST confidence interval, is the highest density interval (HDI), (...) The 95% HDI consists of those
values of $\theta$ that have at least some minimal level of posterior
believability, such that the total probability of all such $\theta$ values is 95%.
(...) The NHST
confidence interval, on the other hand, has no direct relationship
with what we want to know; there's no clear relationship between the
probability of rejecting the value $\theta$ and the believability of
$\theta$.
Posterior probability can be, and is, used for testing hypotheses, but you have to remember that it provides an answer to a different question than $p$-values do.
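As a tiny concrete illustration (my own sketch, not part of the original answer, and using an equal-tailed credible interval rather than a true HDI for simplicity): with 7 successes in 20 trials and a flat Beta(1, 1) prior, the two kinds of 95% interval can be computed in R as
x <- 7; n <- 20
binom.test(x, n)$conf.int                  # exact (Clopper-Pearson) 95% confidence interval
qbeta(c(0.025, 0.975), x + 1, n - x + 1)   # 95% equal-tailed posterior credible interval
The two intervals happen to be numerically similar here, but only the second one may be read as "the parameter lies in this range with 95% posterior probability".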
See also: What is the connection between credible regions and Bayesian hypothesis tests? and Why does a 95% Confidence Interval (CI) not imply a 95% chance of containing the mean?
Cohen, J. (1994). The earth is round (p<.05). American Psychologist, 49, 997-1003.
Kruschke, J.K. (2010). Doing Bayesian Data Analysis: A Tutorial with R and BUGS. Academic Press / Elsevier. | What can we conclude from a Bayesian credible interval? | Confidence intervals can be used equivalently to hypothesis tests, but highest density intervals are not the same as confidence intervals. Let's start with what $p$-value is by quoting Cohen (1994)
W | What can we conclude from a Bayesian credible interval?
Confidence intervals can be used equivalently to hypothesis tests, but highest density intervals are not the same as confidence intervals. Let's start with what $p$-value is by quoting Cohen (1994)
What we want to know is "Given this data what is the probability that
$H_0$ is true?" But as most of us know, what it $p$-value tells us
is "Given that $H_0$ is true, what is the probability of this (or more
extreme) data?" These are not the same (...)
So $p$-value tells us what is the $P(D|H_0)$. In Bayesian approach we want to learn directly (rather than indirectly) about probability of some parameter given the data that we have $P(\theta|D)$ by employing the Bayes theorem and using priors for $\theta$
$$ \underbrace{P(\theta|D)}_\text{posterior} \propto \underbrace{P(D|\theta)}_\text{likelihood} \times \underbrace{P(\theta)}_\text{prior} $$
So if the 95% confidence interval does not include the null value(s), then you can reject your null hypothesis: your data are more extreme than you would expect given that hypothesis. On the other hand, if in the Bayesian setting your 95% highest density interval does not include the null value(s), then you can conclude that such value(s) lie outside the interval that contains 95% of the posterior probability.
Kruschke (2010) can be quoted for comparison of both approaches
The primary goal of NHST [Null Hypothesis Significance Testing] is determining whether a particular "null"
value of a parameter can be rejected. One can also ask what range of
parameter values would not be rejected. This range of non-rejectable
parameter values is called the confidence interval.
(...) The
confidence interval tells us something about the probability of
extreme unobserved data values that we might have gotten if we
repeated the experiment (...)
A concept in Bayesian inference, that is somewhat analogous to the
NHST confidence interval, is the highest density interval (HDI), (...) The 95% HDI consists of those
values of $\theta$ that have at least some minimal level of posterior
believability, such that the total probability of all such $\theta$ values is 95%.
(...) The NHST
confidence interval, on the other hand, has no direct relationship
with what we want to know; there's no clear relationship between the
probability of rejecting the value $\theta$ and the believability of
$\theta$.
Posterior probability can be, and is, used for testing hypotheses, but you have to remember that it provides an answer to a different question than $p$-values do.
See also: What is the connection between credible regions and Bayesian hypothesis tests? and Why does a 95% Confidence Interval (CI) not imply a 95% chance of containing the mean?
Cohen, J. (1994). The earth is round (p<.05). American Psychologist, 49, 997-1003.
Kruschke, J.K. (2010). Doing Bayesian Data Analysis: A Tutorial with R and BUGS. Academic Press / Elsevier. | What can we conclude from a Bayesian credible interval?
Confidence intervals can be used equivalently to hypothesis tests, but highest density intervals are not the same as confidence intervals. Let's start with what $p$-value is by quoting Cohen (1994)
W |
55,117 | Maximizing likelihood vs. minimizing cost [duplicate] | You already know a lot. Two observations.
Take linear regression. Minimizing the squared error turns out to be equivalent to maximizing the likelihood when the errors are assumed to be independent and normally distributed. Loosely, one could say that minimizing the squared error is an intuitive method, and maximizing the likelihood a more formal approach that allows for proofs using properties of, for example, the normal distribution. The two can lead to the same estimates.
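As a small R illustration of that equivalence (my own sketch, added here): the OLS coefficients coincide with those that minimize the negative Gaussian log-likelihood.
set.seed(1)
x <- rnorm(100); y <- 2 + 3 * x + rnorm(100)
negloglik <- function(b) -sum(dnorm(y, mean = b[1] + b[2] * x, sd = exp(b[3]), log = TRUE))
mle <- optim(c(0, 0, 0), negloglik)$par    # numerically maximize the Gaussian likelihood
rbind(ols = coef(lm(y ~ x)), mle = mle[1:2])
The two rows agree up to the optimizer's tolerance.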
Second, whether one minimizes or maximizes is, as far as I know, largely arbitrary. Minimizing the negative of a function is the same as maximizing the function itself. Many optimization routines happen to be written in minimization mode; this is more historical convention than anything fundamental, and for reasons of parsimony/readability it has become standard. | Maximizing likelihood vs. minimizing cost [duplicate] | You already know a lot. Two observations.
Take linear regression. Minimizing the squared error turns out to be equivalent to maximizing the likelihood. Loosely one could say that minimizing the squar | Maximizing likelihood vs. minimizing cost [duplicate]
You already know a lot. Two observations.
Take linear regression. Minimizing the squared error turns out to be equivalent to maximizing the likelihood when the errors are assumed to be independent and normally distributed. Loosely, one could say that minimizing the squared error is an intuitive method, and maximizing the likelihood a more formal approach that allows for proofs using properties of, for example, the normal distribution. The two can lead to the same estimates.
Second, whether one minimizes or maximizes is, as far as I know, largely arbitrary. Minimizing the negative of a function is the same as maximizing the function itself. Many optimization routines happen to be written in minimization mode; this is more historical convention than anything fundamental, and for reasons of parsimony/readability it has become standard. | Maximizing likelihood vs. minimizing cost [duplicate]
You already know a lot. Two observations.
Take linear regression. Minimizing the squared error turns out to be equivalent to maximizing the likelihood. Loosely one could say that minimizing the squar |
55,118 | Bivariate normal distribution with $|\rho|=1$ | A simple-minded (that is, non-measure-theoretic) version of the answer is as follows.
If random variables $X$ and $Y$ are such
that
every point $(x,y)$ in a region $\mathcal A$
of the plane is a possible realization of $(X,Y)$
The area of $\mathcal A$ is greater than $0$
and
$P\{(X,Y) \in \mathcal A\} = 1$
then $X$ and $Y$ are said to be jointly continuous random
variables, and their probabilistic behavior can be determined
from their joint density function $f_{X,Y}(x,y)$ whose support
is $\mathcal A$. Note that $X$ and $Y$ are also (marginally) continuous
random variables.
If $(X,Y)$ have a bivariate normal distribution, then they
are marginally normal random variables too. In particular,
$X$ and $Y$ are continuous random variables.
But $X$
and $Y$ are jointly continuous (and thus enjoy the
bivariate normal joint
density function that you have found or been told about)
only if their (Pearson)
correlation coefficient $\rho \in (-1,1)$. When $\rho = \pm 1$,
$X$ and $Y$ are not jointly continuous and they don't have
a joint density function. They do, however, continue to
enjoy the properties stated in the highlighted paragraph above.
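For concreteness, here is a small R illustration (an addition, not part of the original answer) of the degenerate case $\rho = 1$ being described here: every simulated point falls exactly on the straight line written out further below, so there is no joint density to speak of.
set.seed(17)
mu.x <- 1; mu.y <- -2; s.x <- 2; s.y <- 0.5
x <- rnorm(1e4, mu.x, s.x)
y <- mu.y + (s.y / s.x) * (x - mu.x)     # Y is an exact linear function of X
c(correlation = cor(x, y), sd.y = sd(y)) # correlation is exactly 1; sd(y) is close to s.y
plot(x, y)                               # a perfect line, not an elliptical cloud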
That is, they are still said to have a bivariate normal
distribution (even though they don't have the bivariate
normal density), and they are individually normal random
variables (and hence continuous). Note that in this case,
all realizations of $(X,Y)$ lie on the straight line
$$y = \mu_Y + \frac{\sigma_Y}{\sigma_X}(x-\mu_X)$$
passing through $(\mu_X,\mu_Y)$. Note that the straight line has
zero area. Since $Y = \mu_Y + \frac{\sigma_Y}{\sigma_X}(X-\mu_X)$,
any questions about the probabilistic behavior of $(X,Y)$
can be translated into a question about the probabilistic
behavior of $X$ alone and answered based on the knowledge
that $X \sim N(\mu_X,\sigma_X^2)$. Since it is also
true that $X = \mu_X + \frac{\sigma_X}{\sigma_Y}(Y-\mu_Y)$,
contrary-minded folks might prefer to translate the question
about $(X,Y)$ that has been asked into a question
about the probabilistic behavior of $Y$ alone and answer it
based on the knowledge that $Y \sim N(\mu_Y,\sigma_Y^2)$ | Bivariate normal distribution with $|\rho|=1$ | A simple-minded (that is, non-measure-theoretic) version of the answer is as follows.
If random variables $X$ and $Y$ are such
that
every point $(x,y)$ in a region $\mathcal A$
of the plane is a pos | Bivariate normal distribution with $|\rho|=1$
A simple-minded (that is, non-measure-theoretic) version of the answer is as follows.
If random variables $X$ and $Y$ are such
that
every point $(x,y)$ in a region $\mathcal A$
of the plane is a possible realization of $(X,Y)$
The area of $\mathcal A$ is greater than $0$
and
$P\{(X,Y) \in \mathcal A\} = 1$
then $X$ and $Y$ are said to be jointly continuous random
variables, and their probabilistic behavior can be determined
from their joint density function $f_{X,Y}(x,y)$ whose support
is $\mathcal A$. Note that $X$ and $Y$ are also (marginally) continuous
random variables.
If $(X,Y)$ have a bivariate normal distribution, then they
are marginally normal random variables too. In particular,
$X$ and $Y$ are continuous random variables.
But $X$
and $Y$ are jointly continuous (and thus enjoy the
bivariate normal joint
density function that you have found or been told about)
only if their (Pearson)
correlation coefficient $\rho \in (-1,1)$. When $\rho = \pm 1$,
$X$ and $Y$ are not jointly continuous and they don't have
a joint density function. They do, however, continue to
enjoy the properties stated in the highlighted paragraph above.
That is, they are still said to have a bivariate normal
distribution (even though they don't have the bivariate
normal density), and they are individually normal random
variables (and hence continuous). Note that in this case,
all realizations of $(X,Y)$ lie on the straight line
$$y = \mu_Y + \frac{\sigma_Y}{\sigma_X}(x-\mu_X)$$
passing through $(\mu_X,\mu_Y)$. Note that the straight line has
zero area. Since $Y = \mu_Y + \frac{\sigma_Y}{\sigma_X}(X-\mu_X)$,
any questions about the probabilistic behavior of $(X,Y)$
can be translated into a question about the probabilistic
behavior of $X$ alone and answered based on the knowledge
that $X \sim N(\mu_X,\sigma_X^2)$. Since it is also
true that $X = \mu_X + \frac{\sigma_X}{\sigma_Y}(Y-\mu_Y)$,
contrary-minded folks might prefer to translate the question
about $(X,Y)$ that has been asked into a question
about the probabilistic behavior of $Y$ alone and answer it
based on the knowledge that $Y \sim N(\mu_Y,\sigma_Y^2)$ | Bivariate normal distribution with $|\rho|=1$
A simple-minded (that is, non-measure-theoretic) version of the answer is as follows.
If random variables $X$ and $Y$ are such
that
every point $(x,y)$ in a region $\mathcal A$
of the plane is a pos |
55,119 | Bivariate normal distribution with $|\rho|=1$ | The Pearson product moment correlation coefficient is a measure of linear dependence. At its extremes, one of the random variables is a linear function of the other with probability one and so there is no randomness. | Bivariate normal distribution with $|\rho|=1$ | The Pearson product moment correlation coefficient is a measure of linear dependence. At its extremes, one of the random variables is a linear function of the other with probability one and so there i | Bivariate normal distribution with $|\rho|=1$
The Pearson product moment correlation coefficient is a measure of linear dependence. At its extremes, one of the random variables is a linear function of the other with probability one and so there is no randomness. | Bivariate normal distribution with $|\rho|=1$
The Pearson product moment correlation coefficient is a measure of linear dependence. At its extremes, one of the random variables is a linear function of the other with probability one and so there i |
55,120 | Estimating a cumulative distribution function from a mixture model | Such mixture models are prominently featured in the theory of multiple testing. Here again you have such a mixture; the so-called "two-groups" model. The one group corresponds to hypotheses drawn from the null distribution and the other to hypotheses drawn from the alternative distribution. Indeed, there are people crazy enough to attempt to estimate both $G$ and $H$ and $\alpha$ from the same sample! This is often called empirical null modelling and it was essentially pioneered by Prof. Bradley Efron. One of the assumptions to do that kind of modelling is that $\alpha > 0.9$, but I digress. My main point from this paragraph is that you can find a lot of inspiration to answer your question in the multiple testing literature; and I will attempt to do so below.
What people actually usually end up assuming in the multiple testing literature is that the distribution under the null hypothesis (say it is $G$) is known. Then, using your notation (i.e. $\hat{F}(x)$ is the ECDF), one can estimate $H$ as follows:
$$\hat{H}(x)=\frac{\hat{F}(x)-\alpha G(x)}{1-\alpha}$$
This as you say is unbiased and consistent, but not a distribution function! In one of my favorite preprints, Bodhisattva Sen and Rohit Kumar Patra try to improve the estimation, by imposing exactly the same condition you talked about!
In particular, let $X_1, \dotsc, X_n \sim F$ be the observations from $F$ and assume for now that $\alpha$ is known. Then they solve the following optimization problem:
$$ \displaystyle\min_{W \text{ CDF}} \sum_{i=1}^n (W(X_i)-\hat{H}(X_i))^2$$
The argmin of the above is our new estimator $\tilde{H}$ which actually is a distribution function! In other words they project the naive estimator $\hat{H}$ onto the space of distribution functions and thus improve estimation. They show that this is a convex problem which can be quickly solved with PAVA (pool-adjacent-violator algorithm) and then derive lots of nice asymptotic properties of this estimator.
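A rough R sketch of that projection step (my own illustration, not the authors' code) uses isoreg, R's implementation of PAVA, and then clips the fit to $[0,1]$:
set.seed(1)
alpha <- 0.3; n <- 1000
x <- sort(ifelse(runif(n) < alpha, rnorm(n), rnorm(n, mean = 2)))  # draws from F = alpha*G + (1-alpha)*H
H.naive <- (ecdf(x)(x) - alpha * pnorm(x)) / (1 - alpha)           # unbiased, but possibly not a CDF
H.proj  <- pmin(1, pmax(0, isoreg(x, H.naive)$yf))                 # nondecreasing and within [0, 1]
plot(x, H.naive, type = "l", col = "grey"); lines(x, H.proj, col = "red")
lines(x, pnorm(x, mean = 2), lty = 2)                              # true H, for comparison
With $\alpha$ unknown one would estimate it first, as discussed next.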
They also go a bit further and show when and how one can also estimate $\alpha$ when it is unknown (it is not always identifiable) and prove results for that as well, though in your case you already know it.
So basically I think you can just apply this method with $G(x)$ replaced by $\hat{G}(x)$. Indeed, because you estimate $G$ based on an independent data-set, I am quite confident that you could also adapt all their asymptotic consistency results to the case where you are also estimating $G$ by its ECDF. | Estimating a cumulative distribution function from a mixture model | Such mixture models are prominently featured in the theory of multiple testing. Here again you have such a mixture; the so-called "two-groups" model. The one group corresponds to hypotheses drawn fro | Estimating a cumulative distribution function from a mixture model
Such mixture models are prominently featured in the theory of multiple testing. Here again you have such a mixture; the so-called "two-groups" model. The one group corresponds to hypotheses drawn from the null distribution and the other to hypotheses drawn from the alternative distribution. Indeed, there are people crazy enough to attempt to estimate both $G$ and $H$ and $\alpha$ from the same sample! This is often called empirical null modelling and it was essentially pioneered by Prof. Bradley Efron. One of the assumptions to do that kind of modelling is that $\alpha > 0.9$, but I digress. My main point from this paragraph is that you can find a lot of inspiration to answer your question in the multiple testing literature; and I will attempt to do so below.
What people actually usually end up assuming in the multiple testing literature is that the distribution under the null hypothesis (say it is $G$) is known. Then, using your notation (i.e. $\hat{F}(x)$ is the ECDF), one can estimate $H$ as follows:
$$\hat{H}(x)=\frac{\hat{F}(x)-\alpha G(x)}{1-\alpha}$$
This as you say is unbiased and consistent, but not a distribution function! In one of my favorite preprints, Bodhisattva Sen and Rohit Kumar Patra try to improve the estimation, by imposing exactly the same condition you talked about!
In particular, let $X_1, \dotsc, X_n \sim F$ be the observations from $F$ and assume for now that $\alpha$ is known. Then they solve the following optimization problem:
$$ \displaystyle\min_{W \text{ CDF}} \sum_{i=1}^n (W(X_i)-\hat{H}(X_i))^2$$
The argmin of the above is our new estimator $\tilde{H}$ which actually is a distribution function! In other words they project the naive estimator $\hat{H}$ onto the space of distribution functions and thus improve estimation. They show that this is a convex problem which can be quickly solved with PAVA (pool-adjacent-violator algorithm) and then derive lots of nice asymptotic properties of this estimator.
They also go a bit further and show when and how one can also estimate $\alpha$ when it is unknown (it is not always identifiable) and prove results for that as well, though in your case you already know it.
So basically I think you can just apply this method with $G(x)$ replaced by $\hat{G}(x)$. Indeed, because you estimate $G$ based on an independent data-set, I am quite confident that you could also adapt all their asymptotic consistency results to the case where you are also estimating $G$ by its ECDF. | Estimating a cumulative distribution function from a mixture model
Such mixture models are prominently featured in the theory of multiple testing. Here again you have such a mixture; the so-called "two-groups" model. The one group corresponds to hypotheses drawn fro |
55,121 | Programming probability distribution in R | This answer walks you through steps that are common to most stochastic simulations, showing how to create the code in small, simple, easily-tested chunks on any software platform. The process is illustrated with R code. You can run the code snippets as you go along to see what they produce.
FWIW, here's a compact, quick-and-dirty solution that compresses the calculations into one line. It prints out an estimate based on a simulation with n iterations. (Figure around four million iterations per second of CPU time.)
n <- 1e6; mean(rowSums(matrix(runif(4*n) <= 0.20, nrow=n)) >= 2)
Step 1: Simulate one farmer.
Create a box with tickets, one per farmer. On those tickets are written "insured" and "not insured." The proportion of "insured" in the box must be 20%. (For a detailed account of the tickets-in-a-box model of random variables, see What is meant by a random variable?)
In almost every software system that supports random number generation, there is a function to draw a ticket out of a very large box (and replace it after the ticket is observed). These tickets have floating point values from $0$ to just a tiny bit less than $1$ written on them. For any interval $0 \le a \le b \lt 1$, the proportion of tickets with values between $a$ and $b$ is $b-a$. You can exploit this function by writing "insured" on 20% of all the numbers between $0$ and $1$. For instance, you could write "insured" on all numbers between $0$ and $20/100$.
This is the Uniform distribution (between $0$ and $1$). In R this function is called runif. A single draw of a ticket from this box could be programmed as
label <- ifelse(runif(1) <= 20/100, "insured", "not insured")
It is faster and more convenient to dispense with the label, though, and replace it with the number $1$ (reserving $0$ for all other tickets). This gives the simpler R code
label.indicator <- runif(1) <= 20/100
because in R (as in many systems) a true value is equated with $1$ and a false value with $0$.
Step 2: Simulate four farmers.
Just draw tickets from the box independently. That's done with a loop. In R the loop is performed for you (very efficiently, behind the scenes) when you request multiple values from runif. What is important to know is that these values are (pseudo) independent: they do not appear to depend on one another. Thus,
label.indicators <- runif(4) <= 20/100
will simulate drawing the tickets of four farmers from this box. It produces an array of four numbers from the set {FALSE, TRUE} or equivalently $\{0,1\}$.
Step 3: Compute the statistic (the farmer count).
The number of farmers in any group is found by summing their indicators.
farmer.count <- sum(label.indicators)
This is because each insured farmer contributes a $1$ to the sum and each uninsured farmer contributes a $0$. The sum merely counts the insured farmers. The efficiency of using indicators, instead of labels, is apparent here.
Step 4: Repeat many times.
This is a loop in most platforms. In some, including R, it's faster to draw lots of tickets and group them in sets of four. (This is because often it's almost as quick to generate many random values as it is to generate just one: there's less overhead involved.) Each group represents one iteration of the simulation. The following code puts the four tickets for each iteration in rows of an array and then sums each row (as in Step 3).
n.sim <- 1e6 # Number of iterations
n.farmers <- 4 # Number of farmers per iteration
simulation <- rowSums(matrix(runif(n.sim * n.farmers) <= 20/100, nrow=n.sim))
Step 5: Post-process.
The chance of observing two or more insured farmers can be estimated as the proportion of iterations in which two or more insured farmers were found in the sample. As before, this is efficiently found by testing and summing (or testing and averaging):
estimate <- mean(simulation >= 2)
Step 6: Evaluate.
You have just performed a computer experiment. Like any other experiment, the results are variable, so you ought to provide a standard error for the result. In this Binomial experiment, the standard error of the estimate $\hat p$ (with $n$ iterations) is
$$\text{se}(\hat p) = \sqrt{\hat p\left(1 - \hat p\right) / n}.$$
Compute this and print out the results:
se.estimate <- sqrt(estimate * (1-estimate) / n.sim)
print(c(Estimate=estimate, SE=se.estimate), digits=2)
When the random seed is set to $17$ (see the full code below) and a million iterations are run, the output is
Estimate SE
0.18127 0.00039
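As an added cross-check (not part of the original walkthrough), the exact probability from the Binomial$(4, 0.2)$ distribution can be computed directly and agrees with the simulated estimate to within about one standard error:
1 - pbinom(1, 4, 0.2)   # exact chance of at least two insured farmers: 0.1808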
Here's the full solution. It is structured to permit easy variation of the parameters so, by repeating it (the calculation takes less than a second), you can study how the results depend on the parameters to get a more intuitive feel for what is happening.
#
# Specify the problem.
#
p <- 20/100 # Chance of insurance
k <- 2 # Minimum number of insured farmers
n.farmers <- 4 # Number of farmers per iteration
n.sim <- 1e6 # Number of iterations
#
# Simulate.
#
set.seed(17) # Optional: provides a reproducible result
simulation <- rowSums(matrix(runif(n.sim * n.farmers) <= p, nrow=n.sim))
#
# Post-process and report.
#
estimate <- mean(simulation >= k)
se.estimate <- sqrt(estimate * (1-estimate) / n.sim)
print(c(Estimate=estimate, SE=se.estimate), digits=2) | Programming probability distribution in R | This answer walks you through steps that are common to most stochastic simulations, showing how to create the code in small, simple, easily-tested chunks on any software platform. The process is illu | Programming probability distribution in R
This answer walks you through steps that are common to most stochastic simulations, showing how to create the code in small, simple, easily-tested chunks on any software platform. The process is illustrated with R code. You can run the code snippets as you go along to see what they produce.
FWIW, here's a compact, quick-and-dirty solution that compresses the calculations into one line. It prints out an estimate based on a simulation with n iterations. (Figure around four million iterations per second of CPU time.)
n <- 1e6; mean(rowSums(matrix(runif(4*n) <= 0.20, nrow=n)) >= 2)
Step 1: Simulate one farmer.
Create a box with tickets, one per farmer. On those tickets are written "insured" and "not insured." The proportion of "insured" in the box must be 20%. (For a detailed account of the tickets-in-a-box model of random variables, see What is meant by a random variable?)
In almost every software system that supports random number generation, there is a function to draw a ticket out of a very large box (and replace it after the ticket is observed). These tickets have floating point values from $0$ to just a tiny bit less than $1$ written on them. For any interval $0 \le a \le b \lt 1$, the proportion of tickets with values between $a$ and $b$ is $b-a$. You can exploit this function by writing "insured" on 20% of all the numbers between $0$ and $1$. For instance, you could write "insured" on all numbers between $0$ and $20/100$.
This is the Uniform distribution (between $0$ and $1$). In R this function is called runif. A single draw of a ticket from this box could be programmed as
label <- ifelse(runif(1) <= 20/100, "insured", "not insured")
It is faster and more convenient to dispense with the label, though, and replace it with the number $1$ (reserving $0$ for all other tickets). This gives the simpler R code
label.indicator <- runif(1) <= 20/100
because in R (as in many systems) a true value is equated with $1$ and a false value with $0$.
Step 2: Simulate four farmers.
Just draw tickets from the box independently. That's done with a loop. In R the loop is performed for you (very efficiently, behind the scenes) when you request multiple values from runif. What is important to know is that these values are (pseudo) independent: they do not appear to depend on one another. Thus,
label.indicators <- runif(4) <= 20/100
will simulate drawing the tickets of four farmers from this box. It produces an array of four numbers from the set {FALSE, TRUE} or equivalently $\{0,1\}$.
Step 3: Compute the statistic (the farmer count).
The number of farmers in any group is found by summing their indicators.
farmer.count <- sum(label.indicators)
This is because each insured farmer contributes a $1$ to the sum and each uninsured farmer contributes a $0$. The sum merely counts the insured farmers. The efficiency of using indicators, instead of labels, is apparent here.
Step 4: Repeat many times.
This is a loop in most platforms. In some, including R, it's faster to draw lots of tickets and group them in sets of four. (This is because often it's almost as quick to generate many random values as it is to generate just one: there's less overhead involved.) Each group represents one iteration of the simulation. The following code puts the four tickets for each iteration in rows of an array and then sums each row (as in Step 3).
n.sim <- 1e6 # Number of iterations
n.farmers <- 4 # Number of farmers per iteration
simulation <- rowSums(matrix(runif(n.sim * n.farmers) <= 20/100, nrow=n.sim))
Step 5: Post-process.
The chance of observing two or more insured farmers can be estimated as the proportion of iterations in which two or more insured farmers were found in the sample. As before, this is efficiently found by testing and summing (or testing and averaging):
estimate <- mean(simulation >= 2)
Step 6: Evaluate.
You have just performed a computer experiment. Like any other experiment, the results are variable, so you ought to provide a standard error for the result. In this Binomial experiment, the standard error of the estimate $\hat p$ (with $n$ iterations) is
$$\text{se}(\hat p) = \sqrt{\hat p\left(1 - \hat p\right) / n}.$$
Compute this and print out the results:
se.estimate <- sqrt(estimate * (1-estimate) / n.sim)
print(c(Estimate=estimate, SE=se.estimate), digits=2)
When the random seed is set to $17$ (see the full code below) and a million iterations are run, the output is
Estimate SE
0.18127 0.00039
Here's the full solution. It is structured to permit easy variation of the parameters so, by repeating it (the calculation takes less than a second), you can study how the results depend on the parameters to get a more intuitive feel for what is happening.
#
# Specify the problem.
#
p <- 20/100 # Chance of insurance
k <- 2 # Minimum number of insured farmers
n.farmers <- 4 # Number of farmers per iteration
n.sim <- 1e6 # Number of iterations
#
# Simulate.
#
set.seed(17) # Optional: provides a reproducible result
simulation <- rowSums(matrix(runif(n.sim * n.farmers) <= p, nrow=n.sim))
#
# Post-process and report.
#
estimate <- mean(simulation >= k)
se.estimate <- sqrt(estimate * (1-estimate) / n.sim)
print(c(Estimate=estimate, SE=se.estimate), digits=2) | Programming probability distribution in R
This answer walks you through steps that are common to most stochastic simulations, showing how to create the code in small, simple, easily-tested chunks on any software platform. The process is illu |
55,122 | Programming probability distribution in R | Here is a possible solution to your question
prop=0
T=1e6
for (t in 1:T){
insrd=sample(0:1,4,rep=TRUE,prob=c(8,2))
prop=prop+(sum(insrd)>1)}
print(prop/T)
with answer 0.1809. But you can compute the exact value [0.1808] of this probability by realising that the draw of four farmers is a Binomial B(4,0.2) random variable when counting the number of insured farmers out of the four. | Programming probability distribution in R | Here is a possible solution to your question
prop=0
T=1e6
for (t in 1:T){
insrd=sample(0:1,4,rep=TRUE,prob=c(8,2))
prop=prop+(sum(insrd)>1)}
print(prop/T)
with answer 0.1809. But you can compute | Programming probability distribution in R
Here is a possible solution to your question
prop=0
T=1e6
for (t in 1:T){
insrd=sample(0:1,4,rep=TRUE,prob=c(8,2))
prop=prop+(sum(insrd)>1)}
print(prop/T)
with answer 0.1809. But you can compute the exact value [0.1808] of this probability by realising that the draw of four farmers is a Binomial B(4,0.2) random variable when counting the number of insured farmers out of the four. | Programming probability distribution in R
Here is a possible solution to your question
prop=0
T=1e6
for (t in 1:T){
insrd=sample(0:1,4,rep=TRUE,prob=c(8,2))
prop=prop+(sum(insrd)>1)}
print(prop/T)
with answer 0.1809. But you can compute |
55,123 | Variational Bayes: Understanding Mean field approximation | Taking the equation from Wikipedia
$D_{KL}(Q||P) = \sum\limits_{z}Q(Z)\log\frac{Q(Z)}{P(Z,X)} +\log P(X)$
What we want is to minimize KL distance wrt $Q$ distribution.
Since $P(X)$ is independent of $Q$ we need to care only about the first term.
Substituting the factored approximation, $Q=\prod\limits_{i=1}^M q(Z_i|X)$
$$\sum\limits_{z}Q(Z)\log\frac{Q(Z)}{P(Z,X)} = \sum\limits_{z}Q(Z)\log\{ \prod\limits_{i=1}^M q(Z_i|X)\}-\sum\limits_{z}Q(Z)\log P(Z,X)\\$$
Considering only the terms that depend on a particular factor $q(z_j)$,
$$= \sum\limits_{z_j}q(z_j|X)\log q(z_j|X)-\sum\limits_{z_j}q(z_j|X) \left\lbrace \sum\limits_{z_{i\neq j}}\left(\prod\limits_{i\neq j}q(z_i|X)\right)\log P(Z,X)\right\rbrace+ \text{const}$$
In the above equation, the term inside the curly braces is an expectation of the function $\log P(Z,X)$, where $\mathbb{E}_{i\neq j}$ denotes expectation under the $Q(Z)$ distribution with respect to all variables except $z_j$. This is where the expectation of the log joint model with respect to the other variables enters the equation. Defining $\log\tilde{P}(Z_j,X) \triangleq \mathbb{E}_{i\neq j}\left[\log P(Z,X)\right]$ (note that once you take the expectation, the result is a function of $z_j$ only), this becomes
$$\begin{equation}= \sum\limits_{z_j}q(z_j|X)\log q(z_j|X)-\sum\limits_{z_j}q(z_j|X)\mathbb{E}_{i\neq j}\left[\log P(Z,X)\right]+ \text{const}
\end{equation}$$
$$\begin{equation}
=\sum\limits_{z_j}q(z_j|X)\left\lbrace \log q(z_j|X)-\log\tilde{P}(Z_j,X) \right\rbrace+\text{const}\\
=\sum\limits_{z_j}q(z_j|X)\log \left(\frac{q(z_j|X)}{\tilde P(Z_j,X)}\right)+\text{const}
\end{equation}$$
Now if we make $\tilde{P}(Z_j,X)$ a probability mass function by including a normalization constant (such that $\tilde{P}(Z_j,X)$ sums to 1 over all possible $Z_j$), this normalizing constant can be absorbed into the constant term.
That makes this again a KL distance, between $q(Z_j)$ and the new distribution which is proportional to $\tilde{P}(Z_j,X)$. If the two distributions are equal, the KL distance becomes zero, which gives the minimum of our original objective.
Therefore, in order to minimize $D_{KL}(Q||P)$ wrt each $q(z_j)$ we want
$$
q(z_j|X) \propto \tilde{P}(Z_j,X)
$$
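As a concrete worked instance of this proportionality (the standard bivariate-Gaussian textbook example, added here for intuition and not part of the original derivation): if the model is $P(Z) = \mathcal{N}(Z \mid \mu, \Lambda^{-1})$ with precision matrix $\Lambda$ and we factorize $Q(Z) = q(z_1)q(z_2)$, then collecting the terms of $\mathbb{E}_{i\neq 1}\left[\log P(Z)\right]$ that involve $z_1$ gives
$$q^*(z_1) = \mathcal{N}\left(z_1 \,\middle|\, \mu_1 - \Lambda_{11}^{-1}\Lambda_{12}\left(\mathbb{E}[z_2]-\mu_2\right),\ \Lambda_{11}^{-1}\right),$$
and symmetrically for $z_2$; cycling through these coupled updates is the usual coordinate-ascent scheme.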
Since $\tilde{P}(Z_j,X) = \exp\{\mathbb{E}_{i\neq j}\left[\log P(Z,X)\right]\}$
$q(Z_j|X) = \dfrac{\exp\{\mathbb{E}_{i\neq j}\left[\log P(Z,X)\right]\}}{\sum\limits_{Z_j}\exp\{\mathbb{E}_{i\neq j}\left[\log P(Z,X)\right]\}}$ | Variational Bayes: Understanding Mean field approximation | Taking the equation from Wikipedia
$D_{KL}(Q||P) = \sum_\limits{z}Q(Z)\log\frac{Q(Z)}{P(Z,X)} +\log P(X)$
What we want is to minimize KL distance wrt $Q$ distribution.
Since $P(X)$ is independent of | Variational Bayes: Understanding Mean field approximation
Taking the equation from Wikipedia
$D_{KL}(Q||P) = \sum\limits_{z}Q(Z)\log\frac{Q(Z)}{P(Z,X)} +\log P(X)$
What we want is to minimize KL distance wrt $Q$ distribution.
Since $P(X)$ is independent of $Q$ we need to care only about the first term.
Substituting the factored approximation, $Q=\prod\limits_{i=1}^M q(Z_i|X)$
$$\sum\limits_{z}Q(Z)\log\frac{Q(Z)}{P(Z,X)} = \sum\limits_{z}Q(Z)\log\{ \prod\limits_{i=1}^M q(Z_i|X)\}-\sum\limits_{z}Q(Z)\log P(Z,X)\\$$
Considering only the terms that depend on a particular factor $q(z_j)$,
$$= \sum\limits_{z_j}q(z_j|X)\log q(z_j|X)-\sum\limits_{z_j}q(z_j|X) \left\lbrace \sum\limits_{z_{i\neq j}}\left(\prod\limits_{i\neq j}q(z_i|X)\right)\log P(Z,X)\right\rbrace+ \text{const}$$
In the above equation, the term inside the curly braces is an expectation of the function $\log P(Z,X)$, where $\mathbb{E}_{i\neq j}$ denotes expectation under the $Q(Z)$ distribution with respect to all variables except $z_j$. This is where the expectation of the log joint model with respect to the other variables enters the equation. Defining $\log\tilde{P}(Z_j,X) \triangleq \mathbb{E}_{i\neq j}\left[\log P(Z,X)\right]$ (note that once you take the expectation, the result is a function of $z_j$ only), this becomes
$$\begin{equation}= \sum\limits_{z_j}q(z_j|X)\log q(z_j|X)-\sum\limits_{z_j}q(z_j|X)\mathbb{E}_{i\neq j}\left[\log P(Z,X)\right]+ \text{const}
\end{equation}$$
$$\begin{equation}
=\sum\limits_{z_j}q(z_j|X)\left\lbrace \log q(z_j|X)-\log\tilde{P}(Z_j,X) \right\rbrace+\text{const}\\
=\sum\limits_{z_j}q(z_j|X)\log \left(\frac{q(z_j|X)}{\tilde P(Z_j,X)}\right)+\text{const}
\end{equation}$$
Now if we make $\tilde{P}(Z_j,X)$ a probability mass function by including a normalization constant (such that $\tilde{P}(Z_j,X)$ sums to 1 over all possible $Z_j$), this normalizing constant can be absorbed into the constant term.
That makes this again a KL distance, between $q(Z_j)$ and the new distribution which is proportional to $\tilde{P}(Z_j,X)$. If the two distributions are equal, the KL distance becomes zero, which gives the minimum of our original objective.
Therefore, in order to minimize $D_{KL}(Q||P)$ wrt each $q(z_j)$ we want
$$
q(z_j|X) \propto \tilde{P}(Z_j,X)
$$
Since $\tilde{P}(Z_j,X) = \exp\{\mathbb{E}_{i\neq j}\left[\log P(Z,X)\right]\}$
$q(Z_j|X) = \dfrac{\exp\{\mathbb{E}_{i\neq j}\left[\log P(Z,X)\right]\}}{\sum\limits_{Z_j}\exp\{\mathbb{E}_{i\neq j}\left[\log P(Z,X)\right]\}}$ | Variational Bayes: Understanding Mean field approximation
Taking the equation from Wikipedia
$D_{KL}(Q||P) = \sum_\limits{z}Q(Z)\log\frac{Q(Z)}{P(Z,X)} +\log P(X)$
What we want is to minimize KL distance wrt $Q$ distribution.
Since $P(X)$ is independent of |
55,124 | Creating interaction terms for regression | In theory there is no problem creating an interaction by multiplying values of two variables (componentwise), but in practice there can be, depending on your software. Since this question concerns questionable software--a private package (coded by who knows who) or (perhaps worse) Excel, as suggested in another answer, this is a legitimate concern.
The problem is that unless the variables have a small range of absolute values near $1$, the sizes of the interaction can be so incommensurate with the sizes of the original variables that numerical problems arise in least squares procedures.
Numerical problems can be measured and detected through the sensitivity of a least squares solution to small perturbations in the explanatory variables. A standard measure is the condition number of the model matrix. It equals the ratio of its largest singular value to its smallest. The best possible value is $1$. It occurs when all explanatory variables are uncorrelated.
Because the square of the model matrix is involved in least squares solutions, its condition number is the square of that of the model matrix. For each power of ten in the condition number of the model matrix, you can therefore expect to lose two decimal digits of precision in the coefficient estimates. Since double-precision calculations carry a little less than 16 digits (and many computer least squares algorithms may only achieve 10 or 12 digits), expect to run into problems once the condition number exceeds $10^5$ or so. Even worse, such problems might not be obvious. (They become obvious once the condition number exceeds $10^8$, for then all precision is lost.)
Consider two variables with typical values around $a$ and $b$. Their product will have typical values near $ab$. When either $|a|$ or $|b|$ is large (compared to $1$), the product's values will be much larger than at least one of the variables. That can create large condition numbers.
Let me demonstrate with a simple example. As a control, consider three independent variables with values uniformly distributed between $10^9$ and $10^{10}$. They could be annual incomes of medium-sized corporations in Euros or dollars, for instance. In this simulated dataset of $200$ observations, their condition number comes out to $4.15$, which is fine.
When we replace the third variable with the product of the first two (their interaction), the condition number blows up to $1.56\times 10^{10}$. Because its square is greater than 20 powers of ten, that will wipe out all the information in double precision least squares calculations!
If instead we rescale the first two variables to small values before computing their interaction, this simple expedient causes the condition number to drop all the way to $1.11$: no problem at all. Rescaling is the same as a change of units: for instance, re-express corporate incomes in billions of dollars instead of dollars. (Those experienced in data analysis reflexively express their data in appropriately small, commensurable units and so rarely run into such numerical problems.)
We can also compute the least squares fits in three ways: using the built-in procedure (for reference), using rescaled variables, and using the original values. In this example the latter fails with the error message
system is computationally singular: reciprocal condition number = 4.088e-21
Indeed, this number is the reciprocal square of $1.56 \times 10^{10}$, just as claimed above.
The moral is to scale (or standardize) your variables before creating interactions. Then you'll (usually) be fine.
Those familiar with good statistical computing packages may object that those packages routinely standardize the variables for internal calculations. That is only partially true. For instance, this does not take place in some (many?) of the (otherwise) sophisticated add-in regression packages to R, even though its workhorse built-in function lm is numerically robust.
This R code reproduces the results reported here. It gives you something to experiment further with if you like. Try it with x.max set to a small number to verify that all the approaches work. Then try it with x.max set to any value greater than $5\times 10^7$ or so. Interestingly, the preliminary scaling automatically costs you up to eight significant figures no matter what value x.max might have, large or small--but it doesn't fail when x.max gets large.
#
# Compute and round a condition number.
#
cn <- function(x) signif((function(x) max(x)/min(x)) (svd(x)$d), 3)
#
# Create innocuous independent variables.
#
n <- 200 # Number of observations
x.max <- 1e10 # Maximum values
set.seed(17)
x <- matrix(runif(3*n, x.max/10, x.max), ncol=3)
#
# The control case uses three (statistically) independent variables.
#
cat("Condition number for three variables is", cn(x))
#
# Instead make the third variable the product of the first two.
#
interact <- function(x) cbind(x[, 1:2], x[,1]*x[,2])
cat("Condition number for two variables + interaction is", cn(interact(x)))
#
# Demonstrate that standardizing the original two variables cures the problem.
#
cat("Condition number for two rescaled variables + interaction is",
cn(interact(scale(x, center=FALSE))))
#------------------------------------------------------------------------------#
#
# Compute the OLS fit with the built-in `lm` function (via QR decomposition).
#
y <- x %*% c(1,2,1) # E[Y] = 1*x[,1] + 2*x[,2] + error
u <- interact(x)
y.lm <- predict(lm(y ~ u - 1))
#
# Compute the OLS fit directly using scaled variables.
#
u.scale <- interact(scale(x, center=FALSE))
y.hat <- u.scale %*% solve(crossprod(u.scale), crossprod(u.scale, y))
cat ("RMSE error with scaled variables is", signif(sqrt(mean((y.lm-y.hat)^2)), 3))
#
# Compute the OLS fit directly with the original variables.
# This emulates many procedures, including those in Excel and some add-ins
# in `R`.
#
y.hat.direct <- u %*% solve(crossprod(u), crossprod(u, y))
cat("RMSE error with unscaled variables is", signif(mean((y.lm-y.hat.direct)^2), 3)) | Creating interaction terms for regression | In theory there is no problem creating an interaction by multiplying values of two variables (componentwise), but in practice there can be, depending on your software. Since this question concerns qu | Creating interaction terms for regression
In theory there is no problem creating an interaction by multiplying values of two variables (componentwise), but in practice there can be, depending on your software. Since this question concerns questionable software--a private package (coded by who knows who) or (perhaps worse) Excel, as suggested in another answer, this is a legitimate concern.
The problem is that unless the variables have a small range of absolute values near $1$, the sizes of the interaction can be so incommensurate with the sizes of the original variables that numerical problems arise in least squares procedures.
Numerical problems can be measured and detected through the sensitivity of a least squares solution to small perturbations in the explanatory variables. A standard measure is the condition number of the model matrix. It equals the ratio of its largest singular value to its smallest. The best possible value is $1$. It occurs when all explanatory variables are uncorrelated.
Because the square of the model matrix is involved in least squares solutions, its condition number is the square of that of the model matrix. For each power of ten in the condition number of the model matrix, you can therefore expect to lose two decimal digits of precision in the coefficient estimates. Since double-precision calculations carry a little less than 16 digits (and many computer least squares algorithms may only achieve 10 or 12 digits), expect to run into problems once the condition number exceeds $10^5$ or so. Even worse, such problems might not be obvious. (They become obvious once the condition number exceeds $10^8$, for then all precision is lost.)
Consider two variables with typical values around $a$ and $b$. Their product will have typical values near $ab$. When either $|a|$ or $|b|$ is large (compared to $1$), the product's values will be much larger than at least one of the variables. That can create large condition numbers.
Let me demonstrate with a simple example. As a control, consider three independent variables with values uniformly distributed between $10^9$ and $10^{10}$. They could be annual incomes of medium-sized corporations in Euros or dollars, for instance. In this simulated dataset of $200$ observations, their condition number comes out to $4.15$, which is fine.
When we replace the third variable with the product of the first two (their interaction), the condition number blows up to $1.56\times 10^{10}$. Because its square is greater than 20 powers of ten, that will wipe out all the information in double precision least squares calculations!
If instead we rescale the first two variables to small values before computing their interaction, this simple expedient causes the condition number to drop all the way to $1.11$: no problem at all. Rescaling is the same as a change of units: for instance, re-express corporate incomes in billions of dollars instead of dollars. (Those experienced in data analysis reflexively express their data in appropriately small, commensurable units and so rarely run into such numerical problems.)
We can also compute the least squares fits in three ways: using the built-in procedure (for reference), using rescaled variables, and using the original values. In this example the latter fails with the error message
system is computationally singular: reciprocal condition number = 4.088e-21
Indeed, this number is the reciprocal square of $1.56 \times 10^{10}$, just as claimed above.
The moral is to scale (or standardize) your variables before creating interactions. Then you'll (usually) be fine.
Those familiar with good statistical computing packages may object that those packages routinely standardize the variables for internal calculations. That is only partially true. For instance, this does not take place in some (many?) of the (otherwise) sophisticated add-in regression packages to R, even though its workhorse built-in function lm is numerically robust.
This R code reproduces the results reported here. It gives you something to experiment further with if you like. Try it with x.max set to a small number to verify that all the approaches work. Then try it with x.max set to any value greater than $5\times 10^7$ or so. Interestingly, the preliminary scaling automatically costs you up to eight significant figures no matter what value x.max might have, large or small--but it doesn't fail when x.max gets large.
#
# Compute and round a condition number.
#
cn <- function(x) signif((function(x) max(x)/min(x)) (svd(x)$d), 3)
#
# Create innocuous independent variables.
#
n <- 200 # Number of observations
x.max <- 1e10 # Maximum values
set.seed(17)
x <- matrix(runif(3*n, x.max/10, x.max), ncol=3)
#
# The control case uses three (statistically) independent variables.
#
cat("Condition number for three variables is", cn(x))
#
# Instead make the third variable the product of the first two.
#
interact <- function(x) cbind(x[, 1:2], x[,1]*x[,2])
cat("Condition number for two variables + interaction is", cn(interact(x)))
#
# Demonstrate that standardizing the original two variables cures the problem.
#
cat("Condition number for two rescaled variables + interaction is",
cn(interact(scale(x, center=FALSE))))
#------------------------------------------------------------------------------#
#
# Compute the OLS fit with the built-in `lm` function (via QR decomposition).
#
y <- x %*% c(1,2,1) # E[Y] = 1*x[,1] + 2*x[,2] + error
u <- interact(x)
y.lm <- predict(lm(y ~ u - 1))
#
# Compute the OLS fit directly using scaled variables.
#
u.scale <- interact(scale(x, center=FALSE))
y.hat <- u.scale %*% solve(crossprod(u.scale), crossprod(u.scale, y))
cat ("RMSE error with scaled variables is", signif(sqrt(mean((y.lm-y.hat)^2)), 3))
#
# Compute the OLS fit directly with the original variables.
# This emulates many procedures, including those in Excel and some add-ins
# in `R`.
#
y.hat.direct <- u %*% solve(crossprod(u), crossprod(u, y))
cat("RMSE error with unscaled variables is", signif(mean((y.lm-y.hat.direct)^2), 3)) | Creating interaction terms for regression
In theory there is no problem creating an interaction by multiplying values of two variables (componentwise), but in practice there can be, depending on your software. Since this question concerns qu |
55,125 | Creating interaction terms for regression | Nothing wrong with using interaction term in the model, if theory suggests so. How to include interaction term in the model will depend on the software you are using.
In R, you can directly use interaction term using x1:x2 (only the product term) or x1*x2 (interaction as well as the main effects). In other softwares, like Excel, SPSS, STATA, EViews etc., you can always create a new variable for interaction term and use it in you model. | Creating interaction terms for regression | Nothing wrong with using interaction term in the model, if theory suggests so. How to include interaction term in the model will depend on the software you are using.
In R, you can directly use inter | Creating interaction terms for regression
Nothing wrong with using interaction term in the model, if theory suggests so. How to include interaction term in the model will depend on the software you are using.
In R, you can directly use interaction term using x1:x2 (only the product term) or x1*x2 (interaction as well as the main effects). In other softwares, like Excel, SPSS, STATA, EViews etc., you can always create a new variable for interaction term and use it in you model. | Creating interaction terms for regression
Nothing wrong with using interaction term in the model, if theory suggests so. How to include interaction term in the model will depend on the software you are using.
In R, you can directly use inter |
55,126 | Appropriate application of Poisson regression? | A couple of thoughts that may or may not help below. It's a bit hard for us to be helpful without seeing your actual data...
First of all, your data rather obviously does not follow a standard regression model form: observations are integer, residuals will certainly not be normally distributed, and so forth. This is a textbook case of where to use count data models.
So if you find that an OLS model performs better than a count data model, something seems to be badly broken.
I'd strongly suggest running lots of diagnostics for both models, which should show you ways in which you could improve either model. Plot your raw shots response against your predictors. Plot your predicted response against the predictors. Plot your residuals against the predictors, and plot actuals and residuals against predictions. Look at prediction distributions, even if you are most interested in point predictions - your bad point forecasts may be due to high and unmodeled variance. If so, you may want to think about this in the context of your specific application for predictions. Unfortunately, diagnostics for count data models are not as well researched as for OLS models.
The vignette "Regression models for count data in R" for the pscl package may be helpful here. Note that you can also fit OLS models with glm(), by specifying family = gaussian, which is actually the default, so you could directly use many of the summaries explained in the vignette.
You write in a comment that the fit is particularly bad for high or low av values. This to me suggests transforming av, for instance using splines. Look at Frank Harrell's Regression Modeling Strategies, which has just come out in a brand new second edition, with an accompanying R package called rms.
While we are discussing the model form, do you have information on the position your players play? A striker will have a much higher number of shots on goal (which I assume you are modeling, not simply shots and passes as such) than a defender.
I'm already happy you are using the RMSE and not the MAD, which would be minimized by the conditional median...
I am not overly surprised that the OLS model yields a better in-sample fit than a Poisson regression - after all, it has one more degree of freedom to play around with, namely the variance. To "even the playing field", as it were, you may want to offer your count data model another parameter, as well. For instance, try looking at a negative binomial regression. Or a hurdle or a zero-inflated model (see the vignette linked above).
In-sample fit is a notoriously misleading proxy for out-of-sample predictive accuracy, especially for models with varying numbers of predictors. Even with only 300 data points, I'd suggest you at least run cross-validation, say five-fold. This would also give you an idea of the variability of your prediction error. It's quite possible that your prediction error for both models will jump around erratically.
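As a rough sketch of that comparison (simulated stand-in data, since the real data set is not shown; the variable names shots and av are assumptions), five-fold cross-validated RMSE for an OLS, a Poisson and a negative binomial fit could be computed along these lines:
library(MASS)                                   # for glm.nb
set.seed(1)
df <- data.frame(av = runif(300))
df$shots <- rnbinom(300, mu = exp(1 + 2 * df$av), size = 2)  # overdispersed simulated counts

folds <- sample(rep(1:5, length.out = nrow(df)))
cv_rmse <- function(fit_fun) {
  mean(sapply(1:5, function(k) {
    fit  <- fit_fun(df[folds != k, ])
    test <- df[folds == k, ]
    sqrt(mean((test$shots - predict(fit, test, type = "response"))^2))
  }))
}
c(ols     = cv_rmse(function(d) glm(shots ~ av, data = d, family = gaussian)),
  poisson = cv_rmse(function(d) glm(shots ~ av, data = d, family = poisson)),
  negbin  = cv_rmse(function(d) glm.nb(shots ~ av, data = d)))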
55,127 | Why should the roots of an ARMA (p,q) process be different? | If the roots are the same, the two cancel out. Consider an ARMA model of the form
$y_t = \frac{\Theta(B)}{\Phi(B)}\varepsilon_t$
that is both stationary and invertible.
Write the numerator and denominator characteristic polynomials as products of factors (i.e. factorize the polynomials), and divide through by a constant so that the highest-order coefficient in each is 1. If the numerator has a factor such as $(B-a)$ and the denominator has the same factor (which it will if they both share a root), then those two factors cancel out; this means that we could not distinguish a common root at $a$ from any other common root, at say $a'$. That is, the root at $a$ is not identifiable. You get the same fit as taking the common root to be $0$ - which is equivalent to simply omitting that factor altogether.
Note that polynomials with real coefficients always have either linear factors or conjugate pairs of complex factors.
So imagine you have an ARMA(4,3) where the factored AR and MA polynomials are different, except that exactly one factor is the same in each. Then the model is identical to an ARMA(3,2) model without that factor in either polynomial.
Similarly we can take any ARMA$(p,q)$ and construct an infinite sequence of models of order $(p+k,q+k),\,k=1,2,3,...$ that are identical to the ARMA$(p,q)$ in terms of the process (i.e. have $k$ factors that would cancel). Since the factors that cancel don't contribute anything (they yield the same infinite-MA representation), we take the model in its simplest terms, which means we must have no roots in common.
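A quick numerical illustration of the cancellation (my own example, not from the original answer): multiply both sides of an AR(1) with $\phi=0.5$ by the common factor $(1-0.3B)$ to obtain an ARMA(2,1); the two models have identical infinite-MA weights.
# AR(1): (1 - 0.5B) y_t = e_t
psi_ar1  <- ARMAtoMA(ar = 0.5, lag.max = 8)
# ARMA(2,1) with the common factor (1 - 0.3B) on both sides:
# (1 - 0.8B + 0.15B^2) y_t = (1 - 0.3B) e_t
psi_arma <- ARMAtoMA(ar = c(0.8, -0.15), ma = -0.3, lag.max = 8)
rbind(psi_ar1, psi_arma)   # the two rows are identical: the common factor cancels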
55,128 | Logistic Regression in R and How to deal with 0 and 1 | That is very strange advice; I am forced to wonder who in the world advanced it.
The correct way to fit a logistic regression leaves the zeros and ones alone, and determines the parameters that maximize the log-likelihood function:
$$ f(\beta) = \sum_i y_i \log(p_i) + (1 - y_i) \log(1 - p_i) $$
Where $p_i$ is shorthand for
$$ p_i = \frac{e^{\beta \cdot x_i}}{1 + e^{\beta \cdot x_i} }$$
The exponents are vector dot products and $p_i$ is a function of the parameter vector $\beta$. The $y_i$s in this expression are either $0$ or $1$, and it's pleasant to notice that this causes each term to be equal to either
$$ \log(p_i) $$
or
$$ \log(1 - p_i) $$
Generally, yes, this expression is maximized using a method called iteratively re-weighted least squares, which is itself derived from Newton's classical method for optimizing non-linear functions.
R's glm function does exactly this. No response replacement in sight.
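As a small sketch (simulated data, hypothetical names), maximizing the log-likelihood above directly recovers essentially the same coefficients as glm(), with the 0/1 responses left untouched:
set.seed(1)
x <- rnorm(200)
y <- rbinom(200, 1, plogis(-0.5 + 1.2 * x))   # responses are plain 0s and 1s

negloglik <- function(beta) {                 # negative of the log-likelihood above
  eta <- beta[1] + beta[2] * x
  -sum(y * eta - log(1 + exp(eta)))
}
fit_direct <- optim(c(0, 0), negloglik)       # generic optimiser, for illustration only
fit_glm    <- glm(y ~ x, family = binomial)   # uses IRLS internally
cbind(direct = fit_direct$par, glm = coef(fit_glm))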
55,129 | Is there any modality summary statistic? | Simple summary statistic? Not really.
Any sort of summary statistic? Yes, but I'm guessing it's a harder problem than you may expect (and that the answers given by these methods are less reliable than you want).
To help motivate the difficulty, consider the following histogram:
I believe looking at this plot, you would want your function to return 2, since there appear to be two clear modes. Note that this is the histogram built by R's hist function with default settings.
But what if we decide to make the histogram a little finer? Now consider if we want 20 bins instead of 7 on the same dataset.
Now maybe you want your function to tell you there are 4 peaks! What about 100 bins?
So many peaks! You can see that the number of modes in our histogram varies wildly with our choice of number of bins (btw: this is simulated data from four very distinct normals with 100 observations from each distribution. So the "true" answer is 4 modes. This would also be considered an easy example as the distributions are so distinct).
While you may think this is just an obvious result of how histograms work, this is a very standard situation in these problems: you have some sort of smoothing parameter (in the histogram context, this would be the number of bins) and as you vary your smoothing parameter, you get wildly different numbers of reported modes. In general, deciding what value of the smoothing parameter to use is non-trivial.
If you're still very interested, two topics to look into could be kernel density estimation and Gaussian mixture modeling. But be warned that you should not expect to be able to reliably estimate the true number of modes in the population!
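A small sketch of the bin-count sensitivity described above (my own simulation; the mode-counting rule is a naive count of strict local maxima):
set.seed(42)
x <- c(rnorm(100, 0), rnorm(100, 5), rnorm(100, 10), rnorm(100, 15))  # 4 true modes

count_modes <- function(counts) sum(diff(sign(diff(counts))) == -2)   # strict local maxima only
for (k in c(7, 20, 100)) {
  h <- hist(x, breaks = k, plot = FALSE)   # note: breaks is only a suggestion to hist()
  cat("requested breaks:", k, " modes counted:", count_modes(h$counts), "\n")
}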
55,130 | Lagrangian multiplier: role of the constraint sign | There is no sign restriction for the Lagrange multiplier of an equality constraint. Lagrange multipliers of inequality constraints do have a sign restriction.
You should really look at the Karush-Kuhn-Tucker conditions if you want to understand Lagrange multipliers: https://en.wikipedia.org/wiki/Karush%E2%80%93Kuhn%E2%80%93Tucker_conditions . Realistically, you will need to study a book or course notes. Here is a high-quality book available as a free PDF from the author's website: http://stanford.edu/~boyd/cvxbook/ . If you can make it through chapter 5, you'll be in good shape.
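To sketch the sign convention being referred to (a standard statement, summarised here rather than quoted from those references): for the problem $\min_x f(x)$ subject to $g_i(x)\le 0$ and $h_j(x)=0$, the KKT stationarity and complementarity conditions are
$$ \nabla f(x^*) + \sum_i \mu_i \nabla g_i(x^*) + \sum_j \lambda_j \nabla h_j(x^*) = 0, \qquad \mu_i \ge 0, \quad \mu_i\, g_i(x^*) = 0, $$
so the multipliers $\mu_i$ on the inequality constraints must be non-negative, while the multipliers $\lambda_j$ on the equality constraints are unrestricted in sign.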
55,131 | Should we report R-squared or adjusted R-squared in non-linear regression? | You are fitting multiple parameters in your model. (Usually, you fit one parameter for every variable, but your model is non-linear so that isn't the case, even though you have only one $X$ variable.) With every additional parameter, your model has the opportunity to fit the data better, even if that parameter shouldn't be fitted (e.g., if $b$, or $g$ are actually $1$). The adjusted $R^2$ statistic attempts to correct for that added flexibility.
$R^2$ doesn't really mean too much on its own. A low value may be appropriate (that's the amount of information that can legitimately be explained) or it may indicate a problem with lack of fit. A high value may indicate a particularly informative model, or one that is badly overfit. What constitutes a "low" or "high" $R^2$ will vary by subject matter. Etc. Thus, they are most useful in comparison. I gather you will fit the same model to multiple $Y$ variables, but if the $X$ variable and the model's functional form are the same each time, it won't make any difference whether you used $R^2$ or $R^2_{\rm adj}$ as long as you used the same one each time. As far as which should be reported in a paper, $R^2_{\rm adj}$ is probably ideal, but due to its comparative nature, whichever is more common in your field would be appropriate.
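For concreteness, a minimal R sketch (simulated data and a hypothetical two-parameter power model, not the model from the question) of computing $R^2$ and $R^2_{\rm adj}$ from a non-linear least-squares fit:
set.seed(1)
x <- runif(60, 0, 10)
y <- 2 * x^0.7 + rnorm(60, sd = 0.5)
fit <- nls(y ~ a * x^b, start = list(a = 1, b = 1))

n <- length(y); p <- length(coef(fit))
r2     <- 1 - sum(residuals(fit)^2) / sum((y - mean(y))^2)
r2_adj <- 1 - (1 - r2) * (n - 1) / (n - p)   # penalises the additional fitted parameters
c(R2 = r2, adj_R2 = r2_adj)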
55,132 | Should we report R-squared or adjusted R-squared in non-linear regression? | You should use neither of those. This is because neither $R^2$ nor adjusted $R^2$ (for handling multiple explanatory variables) is well defined for non-linear regression. If the purpose is to report the accuracy of the models, I would suggest using cross-validation error (e.g., MSE) on a held-out test set instead.
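A minimal sketch of that suggestion (simulated data and a hypothetical non-linear model), reporting hold-out MSE rather than $R^2$:
set.seed(2)
d <- data.frame(x = runif(80, 0, 10))
d$y <- 2 * d$x^0.7 + rnorm(80, sd = 0.5)
test <- sample(nrow(d), 20)                     # hold out a quarter of the data
fit  <- nls(y ~ a * x^b, data = d[-test, ], start = list(a = 1, b = 1))
mean((d$y[test] - predict(fit, newdata = d[test, ]))^2)   # hold-out mean squared error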
55,133 | Frequentist statistics | It sounds like your objection is to the use of complete datasets, rather than an objection to the statistical methodology used on those datasets. While you are correct that datasets used in university courses are usually much cleaner than real-life datasets (in particular, they often do not have missing data), this is for pedagogical reasons --- the goal of using clean datasets is to focus attention on the statistical methods under consideration in the course, without adding additional complications.
Classical statistical methods (which you refer to as "frequentist" here) are capable of dealing with missing data using imputation methods. These methods are reasonably complicated and they would tend to side-track the analysis if used heavily in introductory statistical courses. Nevertheless, they can ---in principle--- be added on to any of the classical statistical methods. There is something of an "opening wedge" here to argue against classical methods and in favour of Bayesian methods. Multiple imputation can itself be regarded as a kind of numerical version of Bayesian analysis, and one could reasonably mount the argument that allowing multiple imputation is essentially admitting the reasoning behind Bayesian statistics. This is a complicated argument, but it is one you could investigate.
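For instance, here is a minimal sketch of classical multiple imputation in R using the mice package and its built-in nhanes example data (my own illustration, not part of the original answer):
library(mice)
imp  <- mice(nhanes, m = 5, printFlag = FALSE)   # create five imputed data sets
fits <- with(imp, lm(bmi ~ age))                 # fit the analysis model to each
summary(pool(fits))                              # pool the results (Rubin's rules)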
As to whether or not classical statistics makes strong assumptions, that depends on what models you are talking about and what you compare them to. At one extreme you can use non-parametric models, which have very few assumptions, and at the other extreme you can use highly specific parametric models, which may involve strong assumptions. Classical statistics is a broad field and it has models at varying levels of generality and with varying levels of detail in the assumptions.
There are a few tricky "paradoxes" in probability and statistics, and some of them are difficult to deal with without using Bayesian methods. The "shooting-room paradox" is particularly difficult to deal with without some kind of Bayesian prior specification over a set of difficult conditioning events (see e.g., Bartha and Hitchcock 1999). Challenges to the classical paradigm have often come from Bayesian statisticians who were unsatisfied with the ability of that paradigm to deal with tricky paradoxes, so this is also something you could investigate.
55,134 | Frequentist statistics | As Ben mentioned there are often a variety of methods available to approach a problem (including missing data), each with a different set of assumptions (frequentist or not). I realize your question is asking about where frequentist methods will fail, but I will provide my rationale for why many adhere to frequentist methods even when presented with the opportunity to "go Bayesian." I'm not writing this to provoke anyone, I'm simply showing the thought process that some in your department may have so that you are prepared for a scientific discussion.
To the frequentist, population-level quantities (typically denoted by greek characters) are fixed and unknown because we are unable to sample the entire population. If we could sample the entire population we would know the population-level quantity of interest. In practice we have a limited sample from the population, and the only thing one can objectively describe is the operating characteristics of an estimation and testing procedure.
Understanding the long-run performance of the estimation and testing procedure is what gives the frequentist confidence in the conclusions drawn from a single experimental result. What the experimenter or anyone else subjectively believes before or after the experiment is irrelevant since this belief is not evidence of anything. Beliefs and opinions are not facts. If the frequentist has historical data ("prior knowledge") this can be incorporated in a meta-analysis through the likelihood and does not require the use of belief probabilities regarding parameters. If fixed population quantities are treated as random variables this can introduce bias in estimation and inference.
To the Bayesian, probability is axiomatic and measures the experimenter. The Bayesian interpretation of probability as a measure of belief is unfalsifiable $-$ it is not a verifiable statement about the actual parameter, the hypothesis, nor the experiment. It is a statement about the experimenter. Who can claim to know the experimenter's beliefs better than the experimenter? If the prior distribution is chosen in such a way that the posterior is dominated by the likelihood or is proportional to the likelihood, Bayesian belief is more objectively viewed as confidence based on frequency probability of the experiment.
A common example used to promote Bayesian statistics and discourage the use of p-values involves a screening test for cancer or COVID and a disease prevalence. Here is a LinkedIn article on the topic showing the internal contradictions of such an approach. Another common example used to promote Bayesian statistics and discourage the use of p-values involves incorporating "prior knowledge." As mentioned earlier, if performed objectively this "prior knowledge" is simply the likelihood from a historical study which can easily be incorporated through a frequentist meta-analysis. Other examples used to promote the Bayesian paradigm involve predictive inference. Such predictive inference is possible under the frequentist paradigm using predictive p-values and prediction intervals.
I find confidence curves to be a particularly useful way to visualize frequentist inference, analogous to Bayesian posterior distributions. Here are some threads that demonstrate this [1] [2].
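As a minimal sketch (my own illustration) of what a confidence curve looks like for a simple mean, trace the t-interval endpoints over all confidence levels:
set.seed(1)
x <- rnorm(30, mean = 1)                          # hypothetical data
levels <- seq(0.01, 0.99, by = 0.01)
se   <- sd(x) / sqrt(length(x))
half <- qt(1 - (1 - levels) / 2, df = length(x) - 1) * se
plot(mean(x) - half, levels, type = "l", xlim = mean(x) + c(-1, 1) * max(half),
     xlab = "mean", ylab = "confidence level", main = "Confidence curve")
lines(mean(x) + half, levels)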
55,135 | Frequentist statistics | This post is about 2800 words long in order to handle the response to comments. It looks much larger due to the size of the graphics; about half of the post's length is graphics. Nonetheless, a comment mentions that, with my edit, the whole is difficult to consume. So what I am doing is providing an outline and a restructuring to make it easier to know what to expect.
The first section is a brief defense of the use of Frequentist methods. All too often in these discussions people bash one tool for another. The second is a description of a game where Bayesian methods guarantee that the user of Frequentist methods takes a loss. The third section explains why that happens.
A DEFENSE OF FREQUENTISM
The statistical methods originated by Pearson and Neyman are optimal methods. Fisher's method of maximum likelihood is an optimal method. Likewise, Bayesian methods are optimal methods. So, given that they are all optimal in at least some circumstances, why prefer non-Bayesian methods to Bayesian ones?
First, if the assumptions are met, the sampling distribution is a real thing. If the null is true, the assumptions hold, the model is the correct model and if you could do things such as infinite repetition, then the sampling distribution is exactly the real distribution that nature would create. It would be a direct one-to-one mapping of the model to nature. Of course, you may have to wait an infinite amount of time to see it.
Second, non-Bayesian methods are often required by statute or regulation. Some accounting standards only are sensible with a non-Bayesian method. Although there are workarounds in the Bayesian world for handling a sharp null hypothesis, the only type of inferential method that can properly handle a hypothesis such as $H_0:\theta=k$ with $H_A:\theta\ne{k}$ is a non-Bayesian method. Additionally, non-Bayesian methods can have highly desired properties that are unavailable to the Bayesian user.
Frequentist methods provide a guaranteed maximum level of false positives. Simplifying that statement, they give you a guarantee of how often you will look like a fool. They also permit you to control against false negatives.
Additionally, Frequentist methods, generally, minimize the maximum amount of risk that you are facing. If you lack a true prior distribution, that is a wonderful thing.
As well, if you need to do transformations of your data for some reason, the method of maximum likelihood has wonderful invariance properties that usually are absent from a Bayesian method.
Problematically, Bayesian calculations are always person-specific. If I have a highly bigoted prior distribution, it can be the case that the amount of data collected is too small to move it, regardless of the true value. Frequentist methods work equally well regardless of where the parameter sits. Bayesian calculations do not work equally well over the parameter space. They are best when the prior is a good prior and worst when the prior is far away.
Finally, Bayesian reasoning is always incomplete. It is inductive. For example, models built before relativity would always be wrong about things that relativity impacts. A Frequentist test of Newtonian models would have rejected the null in the edge cases such as the orbit of Mercury. That is complete reasoning. Newton is at least sometimes wrong. It is true you still lack a good model, but you know the old one is bad. Bayesian methods would rank models and the best model would be a bad model. Its reasoning is incomplete and one cannot know how it is wrong.
Now let us talk about when Bayesian methods are better than Frequentist methods. There are three places where that happens, except when it is required by some rule such as an accounting standard.
The first is when you are needing to update your beliefs or your organization's beliefs. Bayesian methods separate Bayesian inference from Bayesian actions. You can infer something and also do nothing about it. Sometimes we do not need to share an understanding of the world by agreeing on accepting a convention like a t-test. Sometimes I need to update what I think is happening.
The second is when real prior information exists but not in a form that would allow something like a meta-analysis. For example, people investing in riskier assets than bonds should anticipate receiving a higher rate of return than bonds. If you know the nominal interest rate on a bond of long enough duration, then you should anticipate that actors in the market are attempting to earn more. Your prior should reflect that it is improbable that the center of location for stocks is less than the return on bonds. Conversely, it is very probable that it is greater, but not monumentally greater either. It would be surprising for a firm to be discounted in a competitive market to a 200% per year return.
The third reason is gambling. That is sort of my area of expertise. My area can be thought of as being one of two things. The first is the study of the price people require to defer consumption. The second would be the return required to cover a risk.
In the first version, buying a two-year-old a birthday present in order to see them smile next week is an example of that. It is a gamble. They may fall in love with the box and ignore the toy breaking our hearts and making them happy. In the second, we consider not only the raw outcome but the price of risk. In a competition to own or rid oneself of risk, prices form.
In a competitive circumstance, the second case and not the first, only Bayesian methods will work because non-Bayesian methods and some Bayesian methods are incoherent. A set of probabilities is incoherent if I can force a middleman such as a market maker or bookie to take a loss.
All Frequentist methods, at least some of the time, when used with a gamble can cause a bookie or market maker to take a loss. In some cases, the loss is total. The bookie will lose at every point in the sample space.
I have a set of a half-dozen exercises that I do for this and I will use one below. Even though the field of applied finance is Frequentist, it should not be. See the third section for the reason.
THE EXAMPLE
As you are a graduate statistics student, I will drop the story I usually tell around the example so that you can just do the math. In fact, this one is very simple. You can readily do this yourself.
Choose a rectangle in the first quadrant of a Cartesian plane such that no part of the rectangle touches either axis. For the purposes of making the problem computationally tractable, give yourself at least some distance from both axes and do not make it insanely large. You can create significant digit issues for yourself.
I usually use a rectangle where neither $x$ nor $y$ is less than 10 and nothing is greater than 100, although that choice is arbitrary.
Uniformly draw a single coordinate pair from that plane. All the actors know where the rectangle is at so you have a proper prior distribution with no surprises. This condition exists partly to ground the prior, but also because there exist cases where improper priors give rise to incoherent prices. As the point is only to show differences exist and not to go extensively into prior distributions, a simple grounding is used.
The region doesn't have to be a rectangle. If you hate yourself, make a region shaped like an outline of the Statue of Liberty. Choose a bizarre distribution over it if you feel like it. It might be unfair to Frequentist methods to choose a shape that is relatively narrow, particularly one with something like a donut hole in it.
On that rectangle will be placed a unit circle. There is nothing special about a circle, but unless you hate yourself, make it a circle that is small relative to the rectangle. If you make it too small, again, you could end up with significant digit issues.
You will be the bookie and I will be the player. You will use Frequentist methods and I will use Bayesian methods. I will pay an upfront lump sum fee to you to play the game. The reason is that a lump sum is a constant and will fall out of any calculations about profit maximization. Again, if you hate yourself, do something else.
You agree to accept any finite bet that I make, either short or long, at your stated prices. You also agree to use the risk-neutral measure. In other words, you will state fair Frequentist odds. Your sole source of profit is your fee, in expectation. We can assume that you have nearly limitless pocket depth compared to my meager purse.
The purpose of this exercise is to give an example of how a violation of the converse of the Dutch Book Theorem, or of the Dutch Book Theorem itself, assures bad outcomes. You can arbitrage any Frequentist pricing game, though not necessarily in this manner.
The unit circle is in a position unknown to either of us. The unit circle will emit forty points drawn uniformly over the circle. You will draw a line from the origin at $(0,0)$ through the minimum variance unbiased estimator of the center of the circle. The line is infinitely long so you will cut the disk into two pieces.
We will gamble whether the left side or the right side is bigger. Because the MVUE is guaranteed to be perfectly accurate by force of math, you will offer one-to-one odds for either the left side or the right side. How will I win?
As an aside, it doesn't matter if you convert this to polar coordinates or run a regression forcing the intercept through the origin. The same outcome ends up happening.
So first understand what a good and bad cut would look like.
In this case, the good cut is perfect. Every other cut is in some sense bad.
Of course, neither you nor I get to see the outline. We only get to see the points.
The Frequentist line passes through the MVUE. Since the distribution of errors is symmetric over the sample space, one-to-one odds should not make you nervous.
It should make you nervous, though. All of the information about the location of the disk comes from the data alone with the Frequentist method. That implies that the Bayesian has access to at least a trivial amount of extra information. So I should win at least on those rare happenings where the circle is very near to the edge and most of the points are outside the rectangle. So I have at least a very small advantage, though you can mitigate it by making the rectangle comparatively large.
That isn't the big issue here. To understand the big issue, draw a unit circle around the MVUE.
You now know, for sure, that the upper-left side is smaller than the opposite side. You can know this because some points are outside the implied circle. If I, the Bayesian, can take advantage of that, then I can win any time the MVUE sits in an impossible place. Any Frequentist statistic can do that. Most commonly, it happens when the left side of a confidence interval sits in an impossible location, as may happen when it is negative for values that can only be positive.
The Bayesian posterior is always within one unit of every single point in the data set. It is the grand intersection of all the putative possible circles drawn around every point. The green line is the approximation of the posterior, though I think the width of the green line might be a bit distorting. The black dot is the posterior mean and the red dot the MVUE.
The black circle is the Bayesian circle and the red circle the Frequentist one. In the iteration of this game that was used to make this example, I was guaranteed a win 48% of the time and won roughly 75% of the remaining time from the improved precision. If you make Kelly Bets for thirty rounds, you make about 128,000 times your initial pot.
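A rough sketch of that claim (my own simulation, not the author's code; the sample mean is used as a stand-in for the unbiased estimator of the centre): the fraction of rounds in which the Frequentist point estimate lands in an "impossible" place, i.e. more than one unit away from at least one observed point, can be checked directly.
set.seed(7)
runif_disk <- function(n) {                       # n points uniform on the unit disk
  r <- sqrt(runif(n)); th <- runif(n, 0, 2 * pi)
  cbind(r * cos(th), r * sin(th))
}
impossible <- replicate(10000, {
  centre <- runif(2, 10, 100)                     # centre drawn from the prior rectangle
  pts <- sweep(runif_disk(40), 2, centre, "+")    # the forty emitted points
  est <- colMeans(pts)                            # point estimate of the centre
  any(sqrt(rowSums(sweep(pts, 2, est, "-")^2)) > 1)  # true centre must be within 1 of every point
})
mean(impossible)                                  # share of rounds with an impossible estimate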
Under Frequentist math, you expected to see this distribution of wins over thirty rounds.
The Bayesian player expects to win under this distribution.
Technical Aside
It is not sufficient for the MVUE to be outside the posterior. It must also be outside the marginal posterior distribution of the slope. There do exist circumstances where the Frequentist line is possible even though the Frequentist point is not. Imagine the Bayesian posterior as a cone from the origin. The MVUE can be outside the posterior but inside the cone. In that circumstance, you bet the Kelly Bet based on the better precision of the Bayesian method. Improper priors can also lead to incoherence.
Note On Images
The boxes in the images were to make the overall graphic look nice. It wasn't the boundary I actually used.
WHY THIS HAPPENS
I have a half-dozen of these examples related to market trading rules. Games like this are not that difficult to create once you notice that they exist. A cornucopia of real-world examples exists in finance. I would also like to thank you for asking the question because I have never been able to use the word cornucopia in a sentence before.
A commentator felt that the difference was due to allowing a higher level of information by removing the restriction on an estimator being unbiased. That is not the reason. I have a similar game that uses the maximum likelihood estimator and it generates the same type of result. I also have a game where the Bayesian estimator is a higher variance unbiased estimator and it also leads to guaranteed wins. The minimum variance unbiased estimator is precisely what it says it is. That does not also imply that it is coherent.
Non-Bayesian statistics, and some Bayesian statistics, are incoherent. If you place a gamble on them, then perfect arbitrage can be created at least some of the time. The reason is a bit obscure, unfortunately, and goes to foundations. The base issue has to do with the partitioning of sets in probability theory. Under Kolmogorov’s third axiom, where $E_i$ is an event in a countable sequence of disjoint sets $$\Pr\left(\cup_{i=1}^\infty{E_i}\right)=\sum_{i=1}^\infty\Pr(E_i),$$ we have at least a potential conflict with the Dutch Book Theorem. The third result of the Dutch Book Theorem is $$\Pr\left(\cup_{i=1}^N{E_i}\right)=\sum_{i=1}^N\Pr(E_i),N\in\mathbb{N}.$$ It turns out that there is a conflict.
If you need to gamble, then you need sets that are finitely but not countably additive. Furthermore, in most cases, you also need to use proper prior distributions.
Any Frequentist pricing where there is a knowledgeable competitor leads to arbitrage positions. It can take quite a while to figure out where it is, but with effort it can be found. That includes asset allocation models when $P=P(Q)$. Getting $Q$ wrong shifts the supply or demand curves and so gets $P$ wrong.
There is absolutely nothing wrong with the unbiased estimators in the example above. Unbiased estimators do throw away information, but they do so in a principled and intelligently designed manner. The fact that they produce impossible results in this example is a side-effect anyone using them should be indifferent to. They are designed so that all information comes from the data alone. That is the goal. It is unfair to compare them to a Bayesian estimator if your goal is to have all information come from the data. The goal here isn’t scientific; it is gambling. It is only about putting money at risk.
The estimator is only bad in the scientific sense because we have access to information from outside the data that the method cannot use. What if we were wrong and the Earth was round, angels do not push planets around and spirits do not come to us in our dreams? Sometimes not using outside knowledge protects science. In gambling, that is a bad idea. A horse that is a bad mudder is important information to include if it just rained, even if that is not in your data set.
This example is primarily to show that it can be done and produces uniquely differing results. Real prices often sit outside the dense region of the Bayesian estimation. Warren Buffet and Charlie Munger have been able to tell you that since before I was born, as was Graham and Dodd before them. They just were not approaching it in the framework of formal probability. I am.
It is the interpretation of probability that is the problem, not the bias or lack thereof. Always choose your method for fitness of purpose, not popularity. Our job is to do a good job, not be fashionable.
This post is about 2800 words long in order to handle the response to comments. It looks much larger due to the size of the graphics. About half the post in length is graphics. Nonetheless, a comment makes mention that with my edit, the whole is difficult to consume. So what I am doing is providing an outline and a restructuring to make it easier to know what to expect.
The first section is a brief defense of the use of Frequentist methods. All too often in these discussions people bash one tool for another. The second is a description of a game where Bayesian method guarantee the user of Frequentist methods takes a loss. The third section explains why that happens.
A DEFENSE OF FREQUENTISM
Pearson and Neyman originated statistics are optimal methods. Fisher's method of maximum likelihood is an optimal method. Likewise, Bayesian methods are optimal methods. So, given that they are all optimal in at least some circumstances, why prefer non-Bayesian methods to Bayesian ones?
First, if the assumptions are met, the sampling distribution is a real thing. If the null is true, the assumptions hold, the model is the correct model and if you could do things such as infinite repetition, then the sampling distribution is exactly the real distribution that nature would create. It would be a direct one-to-one mapping of the model to nature. Of course, you may have to wait an infinite amount of time to see it.
Second, non-Bayesian methods are often required by statute or regulation. Some accounting standards only are sensible with a non-Bayesian method. Although there are workarounds in the Bayesian world for handling a sharp null hypothesis, the only type of inferential method that can properly handle a hypothesis such as $H_0:\theta=k$ with $H_A:\theta\ne{k}$ is a non-Bayesian method. Additionally, non-Bayesian methods can have highly desired properties that are unavailable to the Bayesian user.
Frequentist methods provide a guaranteed maximum level of false positives. Simplifying that statement, they give you a guarantee of how often you will look like a fool. They also permit you to control against false negatives.
Additionally, Frequentist methods, generally, minimize the maximum amount of risk that you are facing. If you lack a true prior distribution, that is a wonderful thing.
As well, if you need to do transformations of your data for some reason, the method of maximum likelihood has wonderful invariance properties that usually are absent from a Bayesian method.
Problematically, Bayesian calculations are always person-specific. If I have a highly bigotted prior distribution, it can be the case that the data collected is too small to move it regardless of the true value. Frequentist methods work equally well, regardless of where the parameter sits. Bayesian calculations do not work equally well over the parameter space. They are best when the prior is a good prior and worst when the prior is far away.
Finally, Bayesian reasoning is always incomplete. It is inductive. For example, models built before relativity would always be wrong about things that relatively impacts. A Frequentist test of Newtonian models would have rejected the null in the edge cases such as the orbit of Mercury. That is complete reasoning. Newton is at least sometimes wrong. It is true you still lack a good model, but you know the old one is bad. Bayesian methods would rank models and the best model would be a bad model. Its reasoning is incomplete and one cannot know how it is wrong.
Now let us talk about when Bayesian methods are better than Frequentist methods. There are three places where that happens, except when it is required by some rule such as an accounting standard.
The first is when you are needing to update your beliefs or your organization's beliefs. Bayesian methods separate Bayesian inference from Bayesian actions. You can infer something and also do nothing about it. Sometimes we do not need to share an understanding of the world by agreeing on accepting a convention like a t-test. Sometimes I need to update what I think is happening.
The second is when real prior information exists but not in a form that would allow something like a meta-analysis. For example, people investing in riskier assets than bonds should anticipate receiving a higher rate of return than bonds. If you know the nominal interest rate on a bond of long enough duration, then you should anticipate that actors in the market are attempting to earn more. Your prior should reflect that is improbable that the center of location for stocks should be less than the return on bonds. Conversely, it is very probable that is greater, but not monumentally greater either. It would be surprising for a firm to be discounted in a competitive market to a 200% per year return.
The third reason is gambling. That is sort of my area of expertise. My area can be thought of as being one of two things. The first is the study of the price people require to defer consumption. The second would be the return required to cover a risk.
In the first version, buying a two-year-old a birthday present in order to see them smile next week is an example of that. It is a gamble. They may fall in love with the box and ignore the toy breaking our hearts and making them happy. In the second, we consider not only the raw outcome but the price of risk. In a competition to own or rid oneself of risk, prices form.
In a competitive circumstance, the second case and not the first, only Bayesian methods will work because non-Bayesian methods and some Bayesian methods are incoherent. A set of probabilities are incoherent if I can force a middleman such as a market maker or bookie to take a loss.
All Frequentist methods, at least some of the time, when used with a gamble can cause a bookie or market maker to take a loss. In some cases, the loss is total. The bookie will lose at every point in the sample space.
I have a set of a half-dozen exercises that I do for this and I will use one below. Even though the field of applied finance is Frequentist, it should not be. See the third section for the reason.
THE EXAMPLE
As you are a graduate statistics student, I will drop the story I usually tell around the example so that you can just do the math. In fact, this one is very simple. You can readily do this yourself.
Choose a rectangle in the first quadrant of a Cartesian plane such that no part of the rectangle touches either axis. For the purposes of making the problem computationally tractable, give yourself at least some distance from both axes and do not make it insanely large. You can create significant digit issues for yourself.
I usually use a rectangle where neither $x$ nor $y$ is less than 10 and nothing is greater than 100, although that choice is arbitrary.
Uniformly draw a single coordinate pair from that rectangle. All the actors know where the rectangle is, so you have a proper prior distribution with no surprises. This condition exists partly to ground the prior, but also because there exist cases where improper priors give rise to incoherent prices. As the point is only to show that differences exist, and not to go extensively into prior distributions, a simple grounding is used.
The region doesn't have to be a rectangle. If you hate yourself, make a region shaped like an outline of the Statue of Liberty. Choose a bizarre distribution over it if you feel like it. It might be unfair to Frequentist methods to choose a shape that is relatively narrow, particularly one with something like a donut hole in it.
On that rectangle will be placed a unit circle. There is nothing special about a circle, but unless you hate yourself, make it a circle that is small relative to the rectangle. If you make it too small, again, you could end up with significant digit issues.
You will be the bookie and I will be the player. You will use Frequentist methods and I will use Bayesian methods. I will pay an upfront lump sum fee to you to play the game. The reason is that a lump sum is a constant and will fall out of any calculations about profit maximization. Again, if you hate yourself, do something else.
You agree to accept any finite bet that I make, either short or long, at your stated prices. You also agree to use the risk-neutral measure. In other words, you will state fair Frequentist odds. Your sole source of profit is your fee, in expectation. We can assume that you have nearly limitless pocket depth compared to my meager purse.
The purpose of this illustration is to show how a violation of the Dutch Book Theorem, or of its converse, assures bad outcomes. You can arbitrage any Frequentist pricing game, though not necessarily in this manner.
The unit circle is in a position unknown to either of us. The unit circle will emit forty points drawn uniformly over the circle. You will draw a line from the origin at $(0,0)$ through the minimum variance unbiased estimator of the center of the circle. The line is infinitely long so you will cut the disk into two pieces.
We will gamble on whether the left piece or the right piece of the disk is bigger. Because the MVUE is unbiased, and so correct on average by force of math, you will offer one-to-one odds for either the left side or the right side. How will I win?
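As a rough sketch of one round in R (assumptions: the rectangle is [10, 100] x [10, 100], the forty points are drawn uniformly on the circumference of the unit circle, and the sample mean of the points stands in for the frequentist estimator of the centre):
set.seed(1)
centre <- runif(2, min = 10, max = 100)   # true centre, uniform over the rectangle
theta  <- runif(40, 0, 2 * pi)            # forty points on the unit circle around it
pts    <- cbind(x = centre[1] + cos(theta), y = centre[2] + sin(theta))
est    <- colMeans(pts)                   # frequentist point estimate of the centre
plot(pts, asp = 1, pch = 16, cex = 0.6)
abline(0, est["y"] / est["x"], col = "red")   # the cutting line from the origin through the estimate
points(centre[1], centre[2], pch = 3)         # true centre, unknown to both players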
As an aside, it doesn't matter if you convert this to polar coordinates or run a regression forcing the intercept through the origin. The same outcome ends up happening.
So first understand what a good and bad cut would look like.
In this case, the good cut is perfect. Every other cut is in some sense bad.
Of course, neither you nor I get to see the outline. We only get to see the points.
The Frequentist line passes through the MVUE. Since the distribution of errors is symmetric over the sample space, one-to-one odds should not make you nervous.
It should make you nervous, though. All of the information about the location of the disk comes from the data alone with the Frequentist method. That implies that the Bayesian has access to at least a trivial amount of extra information. So I should win at least on those rare happenings where the circle is very near to the edge and most of the points are outside the rectangle. So I have at least a very small advantage, though you can mitigate it by making the rectangle comparatively large.
That isn't the big issue here. To understand the big issue, draw a unit circle around the MVUE.
You now know, with perfect certainty, that the upper-left piece is smaller than the opposite one. You can know this because some observed points fall outside the circle implied by that estimate. If I, the Bayesian, can take advantage of that, then I can win any time the MVUE sits in an impossible place. Any Frequentist statistic can do that. Most commonly, it happens when the left end of a confidence interval sits in an impossible location, as may happen when it is negative for values that can only be positive.
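Continuing the sketch above, the impossibility check is one line: a true centre must lie within one unit of every observed point, so any point farther than one unit from the estimate rules that estimate out. A crude picture of the centres that remain possible (the intersection of the unit disks around the observations) can be added to the same plot.
dists <- sqrt((pts[, "x"] - est["x"])^2 + (pts[, "y"] - est["y"])^2)
any(dists > 1)                            # TRUE means the frequentist estimate is an impossible centre
grid <- expand.grid(x = seq(min(pts[, "x"]), max(pts[, "x"]), length.out = 200),
                    y = seq(min(pts[, "y"]), max(pts[, "y"]), length.out = 200))
ok <- apply(grid, 1, function(g) all((pts[, "x"] - g[1])^2 + (pts[, "y"] - g[2])^2 <= 1))
points(grid[ok, ], col = "green", pch = ".")   # centres consistent with every observation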
The Bayesian posterior is always within one unit of every single point in the data set. It is the grand intersection of all the putative possible circles drawn around every point. The green line is the approximation of the posterior, though I think the width of the green line might be a bit distorting. The black dot is the posterior mean and the red dot the MVUE.
The black circle is the Bayesian circle and the red circle the Frequentist one. In the iteration of this game that was used to make this example, I was guaranteed a win 48% of the time and won roughly 75% of the remaining time from the improved precision. If you make Kelly Bets for thirty rounds, you make about 128,000 times your initial pot.
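As a back-of-envelope check in R, taking the win rates quoted above as assumptions (48% sure wins with the stake doubled, and a 75% win rate on the remaining rounds bet at the Kelly fraction for even odds), the expected growth over thirty rounds lands in the same ballpark as the figure quoted; the exact number depends on how the sure-win rounds are staked.
p_sure <- 0.48
p_rest <- 0.75
f      <- 2 * p_rest - 1                                   # Kelly fraction at even odds
g      <- p_sure * log(2) + (1 - p_sure) * (p_rest * log(1 + f) + (1 - p_rest) * log(1 - f))
exp(30 * g)                                                # expected growth factor over thirty rounds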
Under Frequentist math, you expected to see this distribution of wins over thirty rounds.
The Bayesian player expects to win under this distribution.
Technical Aside
It is not sufficient for the MVUE to be outside the posterior. It must also be outside the marginal posterior distribution of the slope. There do exist circumstances where the Frequentist line is possible even though the Frequentist point is not. Imagine the Bayesian posterior as a cone from the origin. The MVUE can be outside the posterior but inside the cone. In that circumstance, you bet the Kelly Bet based on the better precision of the Bayesian method. Improper priors can also lead to incoherence.
Note On Images
The boxes in the images were there to make the overall graphic look nice. They were not the boundary I actually used.
WHY THIS HAPPENS
I have a half-dozen of these examples related to market trading rules. Games like this are not that difficult to create once you notice that they exist. A cornucopia of real-world examples exists in finance. I would also like to thank you for asking the question because I have never been able to use the word cornucopia in a sentence before.
A commentator felt that the difference was due to allowing a higher level of information by removing the restriction on an estimator being unbiased. That is not the reason. I have a similar game that uses the maximum likelihood estimator and it generates the same type of result. I also have a game where the Bayesian estimator is a higher variance unbiased estimator and it also leads to guaranteed wins. The minimum variance unbiased estimator is precisely what it says it is. That does not also imply that it is coherent.
Non-Bayesian statistics, and some Bayesian statistics, are incoherent. If you place a gamble on them, then perfect arbitrage can be created at least some of the time. The reason is a bit obscure, unfortunately, and goes to foundations. The base issue has to do with the partitioning of sets in probability theory. Under Kolmogorov’s third axiom, where $E_i$ is an event in a countable sequence of disjoint sets $$\Pr\left(\cup_{i=1}^\infty{E_i}\right)=\sum_{i=1}^\infty\Pr(E_i),$$ we have at least a potential conflict with the Dutch Book Theorem. The third result of the Dutch Book Theorem is $$\Pr\left(\cup_{i=1}^N{E_i}\right)=\sum_{i=1}^N\Pr(E_i),N\in\mathbb{N}.$$ It turns out that there is a conflict.
If you need to gamble, then you need sets that are finitely but not countably additive. Furthermore, in most cases, you also need to use proper prior distributions.
Any Frequentist pricing where there is a knowledgeable competitor leads to arbitrage positions. It can take quite a while to figure out where the arbitrage is, but with effort it can be found. That includes asset allocation models when $P=P(Q)$. Getting $Q$ wrong shifts the supply or demand curves and so gets $P$ wrong.
There is absolutely nothing wrong with the unbiased estimators in the example above. Unbiased estimators do throw away information, but they do so in a principled and intelligently designed manner. The fact that they produce impossible results in this example is a side-effect anyone using them should be indifferent to. They are designed so that all information comes from the data alone. That is the goal. It is unfair to compare them to a Bayesian estimator if your goal is to have all information come from the data. The goal here isn’t scientific; it is gambling. It is only about putting money at risk.
The estimator is only bad in the scientific sense because we have access to information from outside the data that the method cannot use. What if that outside knowledge were wrong: what if the Earth really were round, angels did not push the planets around, and spirits did not come to us in our dreams? Sometimes not using outside knowledge protects science. In gambling, it is a bad idea. A horse that is a bad mudder is important information to include if it just rained, even if that is not in your data set.
This example is primarily to show that it can be done and that it produces distinctly different results. Real prices often sit outside the dense region of the Bayesian estimate. Warren Buffett and Charlie Munger have been able to tell you that since before I was born, as were Graham and Dodd before them. They just were not approaching it in the framework of formal probability. I am.
It is the interpretation of probability that is the problem, not the bias or lack thereof. Always choose your method for fitness of purpose, not popularity. Our job is to do a good job, not be fashionable. | Frequentist statistics
This post is about 2800 words long in order to handle the response to comments. It looks much larger due to the size of the graphics. About half the post in length is graphics. Nonetheless, a comme |
55,136 | Basic intuition about minimal sufficient statistic | Let the sample space be $\mathcal{X}$. Then a sufficient statistic $T$ can be seen as indexing a partition of $\mathcal{X}$, that is, $T(x)=T(y)$ iff (if and only if) $x,y$ belong to the same element of the partition. A minimally sufficient statistic then gives a maximal reduction of the data. That is to say, if $T$ is minimally sufficient and we take the partition corresponding to $T$, take two distinct elements of that partition, and make a new partition by replacing the two with their union, the resulting statistic is no longer sufficient. So, any other sufficient statistic, say $S$, which is not minimal, will have a partition which corresponds to a refinement of the partition of $T$; that is, every element of the partition of $T$ is a union of elements of the partition of $S$ (this becomes easier to understand if you make a drawing from my text!). So, when you know the value of $S$, you know in which element of the partition of $S$ the sample point lies, and also in which element of the partition of $T$ it lies, since that partition is coarser. That is what it means when it says that $T$ is a function of every other sufficient statistic: every other sufficient statistic gives more information (or the same information) about the sample than $T$ does.
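Before the formal definition below, a small R sketch of the partition idea for three Bernoulli trials, where $T$ is the number of successes: sample points with the same value of $T$ sit in the same element of the partition, and a finer sufficient statistic such as $S = (x_1, x_1 + x_2 + x_3)$ refines that partition while still letting you recover $T$.
samples <- expand.grid(x1 = 0:1, x2 = 0:1, x3 = 0:1)
T_stat  <- rowSums(samples)
split(apply(samples, 1, paste, collapse = ""), T_stat)   # the partition of the 8 sample points induced by T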
Definition: A partition of $\mathcal{X}$ is a collection of subsets of $\mathcal{X}$ such that $\cup_{\alpha} \mathcal{X}_\alpha = \mathcal{X}$ and
$\mathcal{X}_\alpha \cap \mathcal{X}_\beta = \emptyset$ unless the two elements of the partition are identical, that is , $\alpha=\beta$. | Basic intuition about minimal sufficient statistic | Let the sample space be $\mathcal{X}$. Then a sufficient statistic $T$ can be seen as indexing a partition of $\mathcal{X}$, that is, $T(x)=T(y)$ iff (if and only if) $x,y$ belongs to the same element | Basic intuition about minimal sufficient statistic
Let the sample space be $\mathcal{X}$. Then a sufficient statistic $T$ can be seen as indexing a partition of $\mathcal{X}$, that is, $T(x)=T(y)$ iff (if and only if) $x,y$ belongs to the same element of the partition. A minimallly sufficient statistic is then giving a maximal reduction of the data. That is to say, if $T$ is minimally sufficient, then if we take the partition corresponding to $T$, take two distinct elements of that partition, and makes a new partition by replacing the two by their union, the resulting statistic is not longer sufficient. So, any other sufficient statistic, say $S$, which is not minimal, will have a partition which corresponds to a refinement of the partition of $T$, that is, every element of the partition of $T$ is a union of elements of the partition of $S$ (this becomes easier to understand if you make a drawing from my text!). So, when you know the value of $S$, you know in which element of the partition of $S$ that sample point belongs, and also in which element of the partition of $T$ that sample point belongs — since that partition is coarser. That is what it means when it says that $T$ is a function of every other sufficient statistic — every other sufficient statistic gives more information (or the same information) about the sample than what $T$ does.
Definition: A partition of $\mathcal{X}$ is a collection of subsets of $\mathcal{X}$ such that $\cup_{\alpha} \mathcal{X}_\alpha = \mathcal{X}$ and
$\mathcal{X}_\alpha \cap \mathcal{X}_\beta = \emptyset$ unless the two elements of the partition are identical, that is , $\alpha=\beta$. | Basic intuition about minimal sufficient statistic
Let the sample space be $\mathcal{X}$. Then a sufficient statistic $T$ can be seen as indexing a partition of $\mathcal{X}$, that is, $T(x)=T(y)$ iff (if and only if) $x,y$ belongs to the same element |
55,137 | Best way to bin continuous data | You might try a regression tree with party as response and age as independent variable.
>temp <- rpart(Party ~ Age)
>plot(temp)
>text(temp)
The algorithm will find suitable places to split the Age variable, if these exist. If not, the tree won't grow past the root stage, which would tell you something. | Best way to bin continuous data | You might try a regression tree with party as response and age as independent variable.
>temp <- rpart(Party ~ Age)
>plot(temp)
>text(temp)
The algorithm will find suitable places to split the Age v | Best way to bin continuous data
You might try a regression tree with party as response and age as independent variable.
>temp <- rpart(Party ~ Age)
>plot(temp)
>text(temp)
The algorithm will find suitable places to split the Age variable, if these exist. If not, the tree won't grow past the root stage, which would tell you something. | Best way to bin continuous data
You might try a regression tree with party as response and age as independent variable.
>temp <- rpart(Party ~ Age)
>plot(temp)
>text(temp)
The algorithm will find suitable places to split the Age v |
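A runnable version of the rpart sketch above, with simulated data standing in for the real Party and Age variables (the data frame name and the age effect are made up):
library(rpart)
set.seed(42)
dat <- data.frame(Age = sample(18:90, 500, replace = TRUE))
dat$Party <- factor(ifelse(dat$Age > 55,
                           sample(c("A", "B"), 500, replace = TRUE, prob = c(0.7, 0.3)),
                           sample(c("A", "B"), 500, replace = TRUE, prob = c(0.4, 0.6))))
temp <- rpart(Party ~ Age, data = dat, method = "class")
print(temp)                                   # shows the split points on Age, if any are found
if (nrow(temp$frame) > 1) { plot(temp); text(temp) }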
55,138 | Best way to bin continuous data | (For the record, I agree with @dsaxton. But just to give you something, here is a quick demonstration of using LDA to optimally bin a continuous variable based on a factor.)
library(MASS)
Iris = iris[,c(1,5)]
model = lda(Species~Sepal.Length, Iris)
range(Iris$Sepal.Length) # [1] 4.3 7.9
cbind(seq(4, 8, .1),
predict(model, data.frame(Sepal.Length=seq(4, 8, .1)))$class)
# [,1] [,2]
# [1,] 4.0 1
# [2,] 4.1 1
# ...
# [15,] 5.4 1
# [16,] 5.5 2
# [17,] 5.6 2
# ...
# [23,] 6.2 2
# [24,] 6.3 3
# [25,] 6.4 3
# ...
# [41,] 8.0 3 | Best way to bin continuous data | (For the record, I agree with @dsaxton. But just to give you something, here is a quick demonstration of using LDA to optimally bin a continuous variable based on a factor.)
library(MASS)
Iris = i | Best way to bin continuous data
(For the record, I agree with @dsaxton. But just to give you something, here is a quick demonstration of using LDA to optimally bin a continuous variable based on a factor.)
library(MASS)
Iris = iris[,c(1,5)]
model = lda(Species~Sepal.Length, Iris)
range(Iris$Sepal.Length) # [1] 4.3 7.9
cbind(seq(4, 8, .1),
predict(model, data.frame(Sepal.Length=seq(4, 8, .1)))$class)
# [,1] [,2]
# [1,] 4.0 1
# [2,] 4.1 1
# ...
# [15,] 5.4 1
# [16,] 5.5 2
# [17,] 5.6 2
# ...
# [23,] 6.2 2
# [24,] 6.3 3
# [25,] 6.4 3
# ...
# [41,] 8.0 3 | Best way to bin continuous data
(For the record, I agree with @dsaxton. But just to give you something, here is a quick demonstration of using LDA to optimally bin a continuous variable based on a factor.)
library(MASS)
Iris = i |
55,139 | How to plot algorithm runtime for huge input set? | Continuing the comment theme, you should find an explanation for your outliers to know whether to include them or segregate them. I notice that the outliers are oddly clumped. An outlier for one input set row seems to often go along with an outlier for the same row in another input set.
Regarding your graph, once you've worked out the outliers issue, box plots may be a viable option. Here's a version with outliers, but using a log transform (which may be appropriate if there is a multiplicative aspect to the algorithm).
That loses the within-row correlations. To emphasize those, you could use the same marker symbol for values in the same row (until you run out of symbols!). Another approach is to use a parallel coordinates plot, where every row is represented by a connected line.
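A hedged sketch of both displays, assuming a long-format data frame runs with columns time, input_set and row (all of the names and the simulated times are made up):
runs <- data.frame(row = rep(1:50, times = 3),
                   input_set = rep(c("X1", "X2", "X3"), each = 50),
                   time = rlnorm(150, meanlog = rep(c(0, 0.5, 1), each = 50)))
boxplot(log10(time) ~ input_set, data = runs, ylab = "log10 run time")      # log-scale box plots
wide <- reshape(runs, idvar = "row", timevar = "input_set", direction = "wide")
matplot(t(log10(wide[, -1])), type = "l", lty = 1, col = "grey40",
        xlab = "input set", ylab = "log10 run time")                        # one connected line per row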
Finally, don't feel obligated to summarize all your findings in a single plot.
Btw, if you post more data, please use a computer-friendly format like CSV or JSON. | How to plot algorithm runtime for huge input set? | Continuing the comment theme, you should find an explanation for your outliers to know whether to include them or segregate them. I notice that the outliers are oddly clumped. An outlier for one input | How to plot algorithm runtime for huge input set?
Continuing the comment theme, you should find an explanation for your outliers to know whether to include them or segregate them. I notice that the outliers are oddly clumped. An outlier for one input set row seems to often go along with an outlier for the same row in another input set.
Regarding your graph, once you've worked out the outliers issue, box plots may be a viable option. Here's a version with outliers, but using a log transform (which may be appropriate if there is a multiplicative aspect to the algorithm).
That loses the within-row correlations. To emphasize those you could use the same marker symbol for values in the same row (until you run out of symbols!). Another approach is to a parallel coordinates plot, where every row is represented by a connected line.
Finally, don't feel obligated to summarize all your findings in a single plot.
Btw, if you post more data, please use a computer-friendly format like CSV or JSON. | How to plot algorithm runtime for huge input set?
Continuing the comment theme, you should find an explanation for your outliers to know whether to include them or segregate them. I notice that the outliers are oddly clumped. An outlier for one input |
55,140 | How to plot algorithm runtime for huge input set? | The minimum values for each input set might be the most informative. A lot of nuisance factors can slow down your benchmark, but very few can cause the code to run faster. The docs for the python benchmarking module timeit say:
It’s tempting to calculate mean and standard deviation from the result vector and report these. However, this is not very useful. In a typical case, the lowest value gives a lower bound for how fast your machine can run the given code snippet; higher values in the result vector are typically not caused by variability in Python’s speed, but by other processes interfering with your timing accuracy. So the min() of the result is probably the only number you should be interested in. After that, you should look at the entire vector and apply common sense rather than statistics.
You could then turn to extreme value theory to get a more rigorous estimate of the fastest that your code can possibly run. This, however, assumes that your code and data are fixed. In other words, you'd repeat this process for $X_1, X_2 \text{ and } X_3$, and then try to do some inference on the results (e.g., compare credible intervals or something).
As an alternative, I like @xan's suggestion of plotting on a log scale. Given the domain, it might be more appropriate to use $\log_2$ instead of $\log_{10}$--each increment then corresponds to doubling the running time and might match up with some theoretical analysis of your algorithm. | How to plot algorithm runtime for huge input set? | The minimum values for each input set might be the most informative. A lot of nuisance factors can slow down your benchmark, but very few can cause the code to run faster The docs for the python bench | How to plot algorithm runtime for huge input set?
The minimum values for each input set might be the most informative. A lot of nuisance factors can slow down your benchmark, but very few can cause the code to run faster The docs for the python benchmarking module timeit say:
It’s tempting to calculate mean and standard deviation from the result vector and report these. However, this is not very useful. In a typical case, the lowest value gives a lower bound for how fast your machine can run the given code snippet; higher values in the result vector are typically not caused by variability in Python’s speed, but by other processes interfering with your timing accuracy. So the min() of the result is probably the only number you should be interested in. After that, you should look at the entire vector and apply common sense rather than statistics.
You could then turn to extreme value theory to get a more rigorous estimate of the fastest that your code can possibly run. This, however, assumes that your code and data are fixed. In other words, you'd repeat this process for $X_1, X_2 \text{ and } X_3$, and then try to do some inference on the results (e.g., compare credible intervals or something).
As an alternative, I like @xan's suggestion of plotting on a log scale. Given the domain, it might be more appropriate to use $\log_2$ instead of $\log_{10}$--each increment then corresponds to doubling the running time and might match up with some theoretical analysis of your algorithm. | How to plot algorithm runtime for huge input set?
The minimum values for each input set might be the most informative. A lot of nuisance factors can slow down your benchmark, but very few can cause the code to run faster The docs for the python bench |
55,141 | How to plot algorithm runtime for huge input set? | I have a strong suspicion that your raw data are not normally distributed, given that your data are right-skewed and they cannot possibly be skewed left (bounded by zero). As you mentioned, assuming normality here results in error estimates which extend below zero, which I believe is inappropriate. You may want to consider possible transformations of your raw data. There is a good CV post here and a SixSigma page here that may help you begin to thoughtfully consider these alternate approaches.
The question about representing data may resolve itself a bit more once your data meet your assumptions about, for example, normality.
Good luck! | How to plot algorithm runtime for huge input set? | I have a strong suspicion that your raw data are not normally distributed, given that your data are right-skewed and they cannot possibly be skewed left (bounded by zero). As you mentioned, assuming n | How to plot algorithm runtime for huge input set?
I have a strong suspicion that your raw data are not normally distributed, given that your data are right-skewed and they cannot possibly be skewed left (bounded by zero). As you mentioned, assuming normality here results in error estimates which extend below zero, which I believe is inappropriate. You may want to consider possible transformations of your raw data. There is a good CV post here and a SixSigma page here that may help you begin to thoughtfully consider these alternate approaches.
The question about representing data may resolve itself a bit more once your data meet your assumptions about, for example, normality.
Good luck! | How to plot algorithm runtime for huge input set?
I have a strong suspicion that your raw data are not normally distributed, given that your data are right-skewed and they cannot possibly be skewed left (bounded by zero). As you mentioned, assuming n |
55,142 | How to plot algorithm runtime for huge input set? | For my part, I would find the following most intuitive and illustrative: A stack of three plots, each corresponding to a subset, showing estimated density curves of both algorithms. (Example below, but note that they're simple pdfs, rather than estimated curves.)
To add more detail, you could include vertical lines demarcating your quantiles of choice. (Median, deciles, etc.) | How to plot algorithm runtime for huge input set? | For my part, I would find the following most intuitive and illustrative: A stack of three plots, each corresponding to a subset, showing estimated density curves of both algorithms. (Example below, bu | How to plot algorithm runtime for huge input set?
For my part, I would find the following most intuitive and illustrative: A stack of three plots, each corresponding to a subset, showing estimated density curves of both algorithms. (Example below, but note that they're simple pdfs, rather than estimated curves.)
To add more detail, you could include vertical lines demarcating your quantiles of choice. (Median, deciles, etc.) | How to plot algorithm runtime for huge input set?
For my part, I would find the following most intuitive and illustrative: A stack of three plots, each corresponding to a subset, showing estimated density curves of both algorithms. (Example below, bu |
55,143 | topic similarity semantic PMI between two words wikipedia | You might compute PMI using Wikipedia, as follows:
1) Using Lucene to index a Wikipedia dump
2) Using Lucene API, it is straightforward to get:
The number (N1) of documents containing word1 and the number (N2) of documents containing word2. So, Prob(word1) = (N1 + 1) / N and Prob(word2) = (N2 + 1) / N, where N is the total number of documents in Wikipedia and "1" in the formulas is used for avoiding zero counts.
The number of times (N3) both words appear in a document together. You can also set a strong constraint so that the two words appear inside a 10-word (or 20-word) window context. Similarly, Prob(word1, word2) = (N3 + 1) / N.
We have: PMI(word1, word2) = Log(Prob(word1, word2) / (Prob(word1) * Prob(word2)))
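The formulas above transcribed directly into R (the counts in the example call are made up):
pmi <- function(n1, n2, n12, N) {
  p1  <- (n1  + 1) / N
  p2  <- (n2  + 1) / N
  p12 <- (n12 + 1) / N
  log(p12 / (p1 * p2))
}
pmi(n1 = 12000, n2 = 8000, n12 = 900, N = 4e6)   # hypothetical document counts from a Wikipedia index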
Furthermore, I would suggest you have a look at the recent WSDM 2015 paper "Exploring the Space of Topic Coherence Measures" and its associated toolkit Palmetto (https://github.com/AKSW/Palmetto), which implements the topic coherence calculations. Palmetto contains implementations of PMI and other topic coherence scores. | topic similarity semantic PMI between two words wikipedia | You might compute PMI using Wikipedia, as following:
1) Using Lucene to index a Wikipedia dump
2) Using Lucene API, it is straightforward to get:
The number (N1) of documents containing word1 and the | topic similarity semantic PMI between two words wikipedia
You might compute PMI using Wikipedia, as following:
1) Using Lucene to index a Wikipedia dump
2) Using Lucene API, it is straightforward to get:
The number (N1) of documents containing word1 and the number (N2) of documents containing word2. So, Prob(word1) = (N1 + 1) / N and Prob(word2) = (N2 + 1) / N, where N is the total number of documents in Wikipedia and "1" in the formulas is used for avoiding zero counts.
The number of times (N3) both words appear in a document together. You can also set a strong constraint so that the two words appear inside a 10-word (or 20-word) window context. Similarly, Prob(word1, word2) = (N3 + 1) / N.
We have: PMI(word1, word2) = Log(Prob(word1, word2) / (Prob(word1) * Prob(word2)))
Furthermore, I would suggest you to have a look at the recent WSDM 2015 paper "Exploring the Space of Topic Coherence Measures" and its associated toolkit Palmetto (https://github.com/AKSW/Palmetto) which implements the topic coherence calculations. Palmetto contains implementations of PMI and other topic coherence scores. | topic similarity semantic PMI between two words wikipedia
You might compute PMI using Wikipedia, as following:
1) Using Lucene to index a Wikipedia dump
2) Using Lucene API, it is straightforward to get:
The number (N1) of documents containing word1 and the |
55,144 | Categorical Predictors and categorical responses | You use logistic regression. All these forms of regression/ANOVA depend only on the nature of the dependent variable. ANOVA is the same thing as linear regression. So, here are starting places for various types of DV
Continuous, unbounded response with normal errors: Linear regression/ANOVA
Binary, categorical or ordinal response: Logistic regression of one type or another
Count response: Poisson or negative binomial regression
Time to event response: Survival methods, probably Cox model to start | Categorical Predictors and categorical responses | You use logistic regression. All these forms of regression/ANOVA depend only on the nature of the dependent variable. ANOVA is the same thing as linear regression. So, here are starting places for v | Categorical Predictors and categorical responses
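The corresponding starting calls in R, on a made-up data frame (variable names are placeholders, and MASS and survival are assumed to be installed):
set.seed(1)
d <- data.frame(x1 = rnorm(200), x2 = rnorm(200))
d$y_cont  <- 1 + d$x1 - d$x2 + rnorm(200)
d$y_bin   <- rbinom(200, 1, plogis(d$x1))
d$y_count <- rpois(200, exp(0.5 + 0.3 * d$x1))
d$time    <- rexp(200, rate = exp(0.3 * d$x1))
d$status  <- rbinom(200, 1, 0.8)
fit_lm   <- lm(y_cont ~ x1 + x2, data = d)                      # continuous response
fit_bin  <- glm(y_bin ~ x1 + x2, family = binomial, data = d)   # binary response
fit_pois <- glm(y_count ~ x1 + x2, family = poisson, data = d)  # counts
fit_nb   <- MASS::glm.nb(y_count ~ x1 + x2, data = d)           # overdispersed counts
fit_cox  <- survival::coxph(survival::Surv(time, status) ~ x1 + x2, data = d)  # time to event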
You use logistic regression. All these forms of regression/ANOVA depend only on the nature of the dependent variable. ANOVA is the same thing as linear regression. So, here are starting places for various types of DV
Continuous, unbounded response with normal errors: Linear regression/ANOVA
Binary, categorical or ordinal response: Logistic regression of one type or another
Count response: Poisson or negative binomial regression
Time to event response: Survival methods, probably Cox model to start | Categorical Predictors and categorical responses
You use logistic regression. All these forms of regression/ANOVA depend only on the nature of the dependent variable. ANOVA is the same thing as linear regression. So, here are starting places for v |
55,145 | Categorical Predictors and categorical responses | First: independent of the predictors, if your response has
2 classes (binary), we would usually use a logistic regression
more than 2 classes, we are speaking about a multinomial regression.
Second: regardless of what regression model you use (linear, logistic, multinomial), if you have categorical predictors, most software packages offer the choice between showing
Regression tables (which include tests for the individual predictors, i.e. n-1 tests for a categorical predictor with n classes), and
ANOVA, which tests for the overall significance of a predictor (i.e. test for an influence of all classes of a predictor at once).
Hence, regression tables and ANOVA rest on the same underlying model but apply different tests. You can always do both; it is your choice what you want to report.
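A small sketch of the two outputs for a logistic regression with one categorical predictor (the data are made up): the regression table gives the n-1 per-coefficient tests, while drop1() or anova() gives a single overall test for the predictor.
set.seed(7)
g <- factor(sample(c("a", "b", "c"), 300, replace = TRUE))
y <- rbinom(300, 1, prob = c(a = 0.3, b = 0.5, c = 0.6)[as.character(g)])
fit <- glm(y ~ g, family = binomial)
summary(fit)                 # regression table: two Wald tests for the three-level factor
drop1(fit, test = "Chisq")   # one overall likelihood ratio test for g
anova(fit, test = "Chisq")   # sequential version of the same comparison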
In R, if you are doing a logistic regression, you can do an ANOVA as you would do for the linear model (with the catch discussed here Choice between Type-I, Type-II, or Type-III ANOVA). You have to check to what extent ANOVA functions are implemented for the various multinomial regressions options in R. If ANOVA is not available, a simple substitute for a specific hypothesis would be a likelihood ratio test. | Categorical Predictors and categorical responses | First: independent of the predictors, if your response has
2 classes (binary), we would usually use a logistic regression
2 classes, we are speaking about a multinomial regression.
Second: regard | Categorical Predictors and categorical responses
First: independent of the predictors, if your response has
2 classes (binary), we would usually use a logistic regression
2 classes, we are speaking about a multinomial regression.
Second: regardless of what regression model you use (linear, logistic, multinomial), if you have categorical predictors, most software packages offer the choice between showing
Regression tables (which include tests for the individual predictors, i.e. n-1 tests for a categorical predictor with n classes), and
ANOVA, which tests for the overall significance of a predictor (i.e. test for an influence of all classes of a predictor at once).
Hence, regression tables and ANOVA have the same model underlying, but apply a different test. You can always do both. It's your choice what you want.
In R, if you are doing a logistic regression, you can do an ANOVA as you would do for the linear model (with the catch discussed here Choice between Type-I, Type-II, or Type-III ANOVA). You have to check to what extent ANOVA functions are implemented for the various multinomial regressions options in R. If ANOVA is not available, a simple substitute for a specific hypothesis would be a likelihood ratio test. | Categorical Predictors and categorical responses
First: independent of the predictors, if your response has
2 classes (binary), we would usually use a logistic regression
2 classes, we are speaking about a multinomial regression.
Second: regard |
55,146 | How does one extract the final equation from glm poisson model? | The equation is
$$\log(\mu_i) = \beta_0 + \beta_1 x_i$$
where $\mu_i$ is the conditional expectation of $y_i$, $E(y | x)$, $\beta_0$ is the coefficient marked Intercept and $\beta_1$ the coefficient marked x. The $\log$ bit is the link function you specified. Hence, to get actual predictions on the scale of your response data $y$, you need to apply the inverse of the link function (anti-log) to both sides of the equation:
$$\mu_i = \exp(\beta_0 + \beta_1 x_i)$$
$\mu_i$ is then the predicted mean count given the value of x.
Print out the coefficients if you want them:
coef(m)
They may be a little more precise than the ones in the summary() output and so your Java code will be closer to the predictions given by predict().
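A simulated stand-in for the model above, showing that the anti-logged linear predictor reproduces predict(..., type = "response") (the data are made up):
set.seed(3)
x <- runif(100, 0, 10)
y <- rpois(100, exp(0.2 + 0.15 * x))
m <- glm(y ~ x, family = poisson(link = "log"))
b <- coef(m)
mu_by_hand <- exp(b[1] + b[2] * x)
all.equal(unname(mu_by_hand), unname(predict(m, type = "response")))   # TRUE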
The model doesn't look fantastic, despite the high linear correlation. There is bias throughout the range of x. Without knowing anything about the data, did you consider a model with $x$ and $x^2$? | How does one extract the final equation from glm poisson model? | The equation is
$$\log(\mu_i) = \beta_0 + \beta_1 x_i$$
where $\mu_i$ is the conditional expectation of $y_i$, $E(y | x)$, $\beta_0$ is the coefficient marked Intercept and $\beta_1$ the coefficient m | How does one extract the final equation from glm poisson model?
The equation is
$$\log(\mu_i) = \beta_0 + \beta_1 x_i$$
where $\mu_i$ is the conditional expectation of $y_i$, $E(y | x)$, $\beta_0$ is the coefficient marked Intercept and $\beta_1$ the coefficient marked x. The $\log$ bit is the link function you specified. Hence to get actual predictions on the scale of your response data $y$, you need to apply the inverse of the link function (anti-log) to the both sides of the equation:
$$\mu_i = \exp(\beta_0 + \beta_1 x_i)$$
$\mu_i$ is then the predicted mean count given the value of x.
Print out the coefficients if you want them:
coef(m)
They may be a little more precise than the ones in the summary() output and so your Java code will be closer to the predictions given by predict().
The model doesn't look fantastic, despite the high linear correlation. There is bias throughout the range of x. Without knowing anything about the data, did you consider a model with $x$ and $x^2$? | How does one extract the final equation from glm poisson model?
The equation is
$$\log(\mu_i) = \beta_0 + \beta_1 x_i$$
where $\mu_i$ is the conditional expectation of $y_i$, $E(y | x)$, $\beta_0$ is the coefficient marked Intercept and $\beta_1$ the coefficient m |
55,147 | How does one extract the final equation from glm poisson model? | This is great, and I have used it to verify part of a plot in which I am modeling the number of trees counted at certain elevations. However, I am using a zero-inflated Poisson model through zeroinfl() in pscl, and the predicted values of zeroinfl() begin to deviate from the equation provided by Gavin after about 1000m in elevation. I assume this is because the probability of zero counts increases above an elevation of about 1000m, and so the zeroinfl() model begins to account for this increased zero-count probability. That's great, and it does a good job of reflecting the actual observations, but I also need to know the equation for this line. What would be the zero-inflated Poisson version of this model? Something involving probabilities? Below is a plot in which the dots are the actual observations, the red line is the predicted values based on zeroinfl(), and the blue line is the predicted values based on the equation provided by Gavin and the coefficients of the zeroinfl() model. I am using a logarithmic representation of the y-axis to clarify the deviation. Included is a general representation of the code.
LogMega[,1] <- yjPower(LogMega[,1],0) #This is a log transform of the average density
m1 <- zeroinfl(Avg_dens~Z, data = LogMega) #the zero-inflated model
m2 <- exp(coef(m1)[1] + coef(m1)[2]*LogMega$Z) #the Poisson equation model (count part of the zeroinfl() fit)
newdata1 <- expand.grid(LogMega[,n+3]) #create a table of the zeroinfl() predicted values
colnames(newdata1)<-"pred"
newdata1$resp <- predict(m1,newdata1)
p <- ggplot(LogMega,aes_string(x=names(LogMega)[n+3],y=LogMega[1]))
theme(
geom_line(data=newdata1, aes(x=pred,y=resp), size=1, color="red")+
geom_line(data=LogMega, aes(x=LogMega[,n+3], y=m2), size=1, color="blue")) | How does one extract the final equation from glm poisson model? | This is great, and I have used it to verify part of a plot in which I am modeling the number of trees counted at certain elevations. However, I am using a zero-inflated Poisson model through zeroinfl( | How does one extract the final equation from glm poisson model?
This is great, and I have used it to verify part of a plot in which I am modeling the number of trees counted at certain elevations. However, I am using a zero-inflated Poisson model through zeroinfl() in pscl and the predicted values of zeroinfl() begin to deviate from the equation provided by Gavin after about 1000m in elevation. I assume this is because the probability of zero counts increases after about an elevation of 1000m, and so the zeroinfl() model begins to account for this increased zero count probability. Thats great and it does a good job of reflecting the actual observations, but I also need to know the equation for this line. What would be the zero-inflated Poisson version of this model? Something involving probabilities? Below is a plot in which the dots are the actual observation, the red is the predicted values based on zeroinfl() and the blue is the predicted valued based on the equation provided by Gavin and the coefficients of the zeroinfl() model. I am using a logarithmic representation of the y-axis to clarify the deviation. Included is a general representation of the code.
LogMega[,1] <- yjPower(LogMega[,1],0) #This is a log transform of the average density
m1 <- zeroinfl(Avg_dens~Z, data = LogMega) #the zero-inflated model
m2 <- 2.718^(coef(m1)[1]+(coef(m1)[2]*Z), data=LogMega) #the Poisson equation model
newdata1 <- expand.grid(LogMega[,n+3]) #create a table of the zeroinfl() predicted values
colnames(newdata1)<-"pred"
newdata1$resp <- predict(m1,newdata1)
p <- ggplot(LogMega,aes_string(x=names(LogMega)[n+3],y=LogMega[1]))
theme(
geom_line(data=newdata1, aes(x=pred,y=resp), size=1, color="red")+
geom_line(data=LogMega, aes(x=LogMega[,n+3], y=m2), size=1, color="blue")) | How does one extract the final equation from glm poisson model?
This is great, and I have used it to verify part of a plot in which I am modeling the number of trees counted at certain elevations. However, I am using a zero-inflated Poisson model through zeroinfl( |
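Not a rewrite of the snippet above, but a sketch of the equation being asked for, reusing the poster's m1 and LogMega objects (so it assumes they exist): with a log link for the count part and a logit link for the zero part, the zero-inflated Poisson mean is E[y|Z] = (1 - p(Z)) * mu(Z). In pscl the two coefficient sets carry the prefixes count_ and zero_, and predict(m1, type = "response") already returns this product.
cf <- coef(m1)
mu <- exp(cf["count_(Intercept)"] + cf["count_Z"] * LogMega$Z)    # Poisson (count) part, the straight-equation curve
p0 <- plogis(cf["zero_(Intercept)"] + cf["zero_Z"] * LogMega$Z)   # probability of an excess zero
ey <- (1 - p0) * mu    # the zero-inflated mean that predict(m1, type = "response") draws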
55,148 | How to do hypothesis testing in this case? | You only have a single sample, so when you call wilcox.test, that's not doing a Wilcoxon-Mann-Whitney, it's doing (as it tells you in the output!) a Wilcoxon signed-rank test.
That doesn't look to me to be directly relevant to the hypothesis in question. With additional assumptions (that don't hold) it could be relevant, but I don't think it's a suitable test as things stand.
Along the lines of the signed-rank test, you could do a sign test ... but that's going to be the same as the binomial proportions test I suggested in comments before.
response to followup question in comments:
A comparison of numeric grades might be addressed by
a two sample t-test (possibly with unpooled variance and Welch-Satterthwaite adjustment to df).
While the distribution is somewhat skew and discrete, the sample size is large enough that the t-distribution will be a reasonable approximation.
Alternatively a Wilcoxon-Mann-Whitney test might be used, as long as proper account is taken of the level of ties in the data because of the discreteness.
Finally, a permutation test based on a comparison of any statistic of interest might be used; indeed either of the two previously mentioned statistics can be used as the basis of a permutation test (whence the issue of skewness and discreteness are automatically dealt with). | How to do hypothesis testing in this case? | You only have a single sample, so when you call wilcox.test, that's not doing a Wilcoxon-Mann-Whitney, it's doing (as it tells you in the output!) a Wilcoxon signed-rank test.
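A sketch of the three options with made-up grade vectors for the two groups:
set.seed(11)
g1 <- sample(40:90, 27, replace = TRUE)
g2 <- sample(45:95, 30, replace = TRUE)
t.test(g1, g2)        # Welch two-sample t-test (unpooled variances)
wilcox.test(g1, g2)   # Wilcoxon-Mann-Whitney; with ties it falls back to the normal approximation
obs  <- mean(g1) - mean(g2)
pool <- c(g1, g2)
perm <- replicate(5000, { idx <- sample(length(pool), length(g1)); mean(pool[idx]) - mean(pool[-idx]) })
mean(abs(perm) >= abs(obs))   # two-sided permutation p-value for the difference in means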
That doesn't look to me | How to do hypothesis testing in this case?
You only have a single sample, so when you call wilcox.test, that's not doing a Wilcoxon-Mann-Whitney, it's doing (as it tells you in the output!) a Wilcoxon signed-rank test.
That doesn't look to me to be directly relevant to the hypothesis in question. With additional assumptions (that don't hold) it could be relevant, but I don't think it's a suitable test as things stand.
Along the lines of the signed-rank test, you could do a sign test ... but that's going to be the same as the binomial proportions test I suggested in comments before.
response to followup question in comments:
A comparison of numeric grades might be addressed by
a two sample t-test (possibly with unpooled variance and Welch-Satterthwaite adjustment to df).
While the distribution is somewhat skew and discrete, the sample size is large enough that the t-distribution will be a reasonable approximation.
Alternatively a Wilcoxon-Mann-Whitney test might be used, as long as proper account is taken of the level of ties in the data because of the discreteness.
Finally, a permutation test based on a comparison of any statistic of interest might be used; indeed either of the two previously mentioned statistics can be used as the basis of a permutation test (whence the issue of skewness and discreteness are automatically dealt with). | How to do hypothesis testing in this case?
You only have a single sample, so when you call wilcox.test, that's not doing a Wilcoxon-Mann-Whitney, it's doing (as it tells you in the output!) a Wilcoxon signed-rank test.
That doesn't look to me |
55,149 | How to do hypothesis testing in this case? | By your data 17 of 27 have less than 55 marks, hence failed. This is 63% with a 95% confidence interval of 42.3% to 80.6%. Hence, the hypothesis that MORE THAN 50% will fail with this teaching methodology is still not proven (the confidence interval is going across 50% meaning that the true rate could be LESS THAN 50%). | How to do hypothesis testing in this case? | By your data 17 of 27 have less than 55 marks, hence failed. This is 63% with a 95% confidence interval of 42.3% to 80.6%. Hence, the hypothesis that MORE THAN 50% will fail with this teaching methodo | How to do hypothesis testing in this case?
By your data 17 of 27 have less than 55 marks, hence failed. This is 63% with a 95% confidence interval of 42.3% to 80.6%. Hence, the hypothesis that MORE THAN 50% will fail with this teaching methodology is still not proven (the confidence interval is going across 50% meaning that the true rate could be LESS THAN 50%). | How to do hypothesis testing in this case?
By your data 17 of 27 have less than 55 marks, hence failed. This is 63% with a 95% confidence interval of 42.3% to 80.6%. Hence, the hypothesis that MORE THAN 50% will fail with this teaching methodo |
55,150 | use standard deviation as predictor? | Going by your picture, it looks like what you are proposing is to stratify your data into layers depending on the value of $Y$, then compute the standard deviation of $X$ in each layer, then use that computed group standard deviation as a predictor.
This leaks the true value of $Y$ into your predictors. Of course your model is more accurate, you've essentially allowed it to memorize the value of $Y$ by giving it a dictionary whose "words" are $\sigma(X)$ and whose "definitions" are $Y$.
Consider what happens if you are given a new dataset with only the values of $X$, and you want to make a prediction of $Y$. How are you going to use your new $\sigma(X)$ predictor in this situation? You need to know $Y$ to stratify $X$ and compute $\sigma(X)$ for each group!
Now, if you can define your groups without reference to $Y$, that's a different thing.
ok, what i don't understand still is the "leaking the value of Y into your predictors" because in the end every model uses the Y values to calculate its coefficients, so what's the difference?
You're completely correct, all regressions use the values of $Y$ to calculate their coefficients. Writing out the dependencies in detail, this looks like
$$ Y = \beta_0(X, Y) + \beta_1(X, Y) X_1 + \cdots + \beta_n(X, Y) X_N $$
each coefficient is a function of $Y$ (more precisely, the values of $Y$ in the training data), but each predictor is not. In your case, you are creating a predictor that is a function of both $X$ and $Y$. This is what I mean by "leaks the true value of Y into your predictors".
But let's assume that my method is not valid. However it gives me the smallest errors in a cross validation procedure for prediction.
Yes, it is not surprising that your cross validation error is lower. Unfortunately, your application of cross validation is incorrect. The correct procedure would be this
Split your data into in fold and out of fold pairs.
For each pair, compute your features using only the in fold data.
Make predictions on the out of fold data using only the values of $X$, or features that are functions of $X$.
Average the out of fold error rates of these predictions.
Your procedure violates the second and third bullet points. I'd recommend taking a look at the section of Chapter 7 in The Elements of Statistical Learning titled The Wrong and Right Way to Do Cross-validation.
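A schematic of the right way round in R, with made-up data: the group-level feature (here a within-group standard deviation, with groups defined by a covariate rather than by Y) is computed from the training fold only and then merged onto the held-out fold.
set.seed(5)
d <- data.frame(g = sample(letters[1:4], 200, replace = TRUE), x = rnorm(200))
d$y <- 2 * d$x + rnorm(200)
folds <- sample(rep(1:5, length.out = nrow(d)))
cv_err <- sapply(1:5, function(k) {
  train <- d[folds != k, ]
  test  <- d[folds == k, ]
  sd_by_g <- tapply(train$x, train$g, sd)       # feature built inside the training fold only
  train$sd_g <- sd_by_g[train$g]
  test$sd_g  <- sd_by_g[test$g]                 # same mapping applied to the held-out fold
  fit <- lm(y ~ x + sd_g, data = train)
  mean((test$y - predict(fit, newdata = test))^2)
})
mean(cv_err)                                    # out-of-fold mean squared error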
my stat teacher used to tell us "if you can create a predictor which is better, you don't need to explain people how you created it"
I don't mean to contradict your teacher without full context, but as stated, that is dubious advice. | use standard deviation as predictor? | Going by your picture, it looks like what you are proposing is to stratify your data into layers depending on the value of $Y$, then compute the standard deviation of $X$ in each layer, then use that | use standard deviation as predictor?
Going by your picture, it looks like what you are proposing is to stratify your data into layers depending on the value of $Y$, then compute the standard deviation of $X$ in each layer, then use that computed group standard deviation as a predictor.
This leaks the true value of $Y$ into your predictors. Of course your model is more accurate, you've essentially allowed it to memorize the value of $Y$ by giving it a dictionary whose "words" are $\sigma(X)$ and whose "definitions" are $Y$.
Consider what happens if you are given a new dataset with only the values of $X$, and you want to make a prediction of $Y$. How are you going to use your new $\sigma(X)$ predictor in this situation? You need to know $Y$ to stratify $X$ and compute $\sigma(X)$ for each group!
Now, if you can define your groups without reference to $Y$, that's a different thing.
ok, what i don't understand still is the "leaking the value of Y into your predictors" because in the end every model uses the Y values to calculate its coefficients, so what's the difference?
You're completely correct, all regressions use the values of $Y$ to calculate their coefficients. Writing out the dependencies in detail, this looks like
$$ Y = \beta_0(X, Y) + \beta_1(X, Y) X_1 + \cdots + \beta_n(X, Y) X_N $$
each coefficient is a function of $Y$ (more precisely, the values of $Y$ in the training data), but each predictor is not. In your case, you are creating a predictor that is a function of both $X$ and $Y$. This is what I mean by "leaks the true value of Y into your predictors".
But let's assume that my method is not valid. However it gives me the smallest errors in a cross validation procedure for prediction.
Yes, it is not surprising that your cross validation error is lower. Unfortunately, your application of cross validation is incorrect. The correct procedure would be this
Split your data into in fold and out of fold pairs.
For each pair, compute your features using only the in fold data.
Make predictions on the out of fold data using only the values of $X$, or features that are functions of $X$.
Average the out of fold error rates of these predictions.
Your procedure violates the second and third bullet points. I'd recommend taking a look at the section of Chapter 7 in The Elements of Statistical Learning titled The Wrong and Right Way to Do Cross-validation.
my stat teacher used to tell us "if you can create a predictor which is better, you don't need to explain people how you created it"
I don't mean to contradict your teacher without full context, but as stated, that is dubious advice. | use standard deviation as predictor?
Going by your picture, it looks like what you are proposing is to stratify your data into layers depending on the value of $Y$, then compute the standard deviation of $X$ in each layer, then use that |
55,151 | use standard deviation as predictor? | I understood agenis to say that, in creating the std dev, he wanted to base them on bucketed X values, not Y values. If he were asking about bucketing the Y values, then Matthew Drury would be correct that this would "leak" Y into the predictors. In addition, agenis hasn't said whether or not there is a temporal dimension to the information. We're all assuming that there is not. If there is, then taking lags of Y would be appropriate and an adequate control for the issue of Y's "leakage" into the predictors. In addition, any temporal relationship would open up new classes of modeling options from nonlinear diffusion models to the many flavors of "Box-Jenkins" type methods.
There is still lots of room to play around here.
Simply bucketing the X values, e.g., creating 10 mutually exclusive groupings on X, amounts to a kind of poor-man's approach to kernel density analysis. Based on the chart and given the convex slope of the curve, one can see that these new groupings rapidly decline in their predictive power across the range of X wrt Y. Given that, it may well be that fitting 2 or 3 splines would provide a better fit than the main effects model as proposed.
If one chooses to bucket X, a consideration worth exploring is using the within bucket coefficient of variation instead of the std dev. The CV is the ratio of the std dev to the mean (times 100) and would result in a metric that is invariant and comparable across the levels of X. Why does this matter? Take two stock prices as an example. Stock 1 has an average price of 500 and a std dev of 100 while stock 2 has an average price of 50 and a std dev of 20. Which stock is more volatile? You can't look at the std devs in and of themselves to answer this question since they are scale dependent. The CV for stock 1 is 20 (100/500*100=20) and for stock 2 it is 40. Therefore and despite a smaller std dev, stock 2 has more inherent volatility than stock 1. To me, the advantages of a metric like this over a scale dependent std dev are clear.
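The stock example above in a couple of lines of R (simulated prices, so the numbers are only approximate):
cv <- function(x) 100 * sd(x) / mean(x)
stock1 <- rnorm(250, mean = 500, sd = 100)
stock2 <- rnorm(250, mean = 50, sd = 20)
c(cv_stock1 = cv(stock1), cv_stock2 = cv(stock2))   # roughly 20 versus 40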
Another possibility would be not to bucket X and retain its continuously distributed nature with a transformation. For instance, it could be that, again based on the chart, the relationship between X and Y is exponential. Depending on the magnitude or scale of X, exponentiation could quickly result in byte overflow (values so large they don't fit into the numeric formatting). Given that risk, transforming X first with, e.g., a natural log function and then taking the exponent would be a workaround. Other transformations of X that retain its continuous nature and compress its PDF (probability density function, i.e., its tail) are also possible. There are literally dozens, if not more, transformations available in the literature. There is a book devoted to cataloguing mathematical transformations, although I forget the title.
All of the suggestions made so far involve linear functions and models based on X. Models nonlinear in the parameters are also possible but could be hairy in terms of both specification and interpretation.
At the end of the day, the question becomes one of the relative importance of prediction vs substantive interpretation of the model results. If the focus is simply on prediction, then a "black box" model that fits the data but is opaque in meaning is permissible. If strategic insights into the relationship between X and Y is the goal, then keeping things at the level, not just of the analyst, but the analyst's audience, is imperative. In this latter instance, highly technical solutions are to be avoided since it's almost certainly the case that the audience will be comprised of the technically semi-literate, at best, with a strong skew to technical illiteracy. It's every analyst's worst nightmare to be explaining something to an innumerate audience where they are the only person in the room who understands what they're talking about.
Of course, breaking the data into separate test and holdout samples (or k-folds) to evaluate the model fit "out-of-sample" and control for overfitting is mandated. | use standard deviation as predictor? | I understood agenis to say that, in creating the std dev, he wanted to base them on bucketed X values, not Y values. If he were asking about bucketing the Y values, then Matthew Drury would be correct | use standard deviation as predictor?
I understood agenis to say that, in creating the std dev, he wanted to base them on bucketed X values, not Y values. If he were asking about bucketing the Y values, then Matthew Drury would be correct that this would "leak" Y into the predictors. In addition, agenis hasn't said whether or not there is a temporal dimension to the information. We're all assuming that there is not. If there is, then taking lags of Y would be appropriate and an adequate control for the issue of Y's "leakage" into the predictors. In addition, any temporal relationship would open up new classes of modeling options from nonlinear diffusion models to the many flavors of "Box-Jenkins" type methods.
There is still lots of room to play around here.
Simply bucketing the X values, e.g., creating 10 mutually exclusive groupings on X, amounts to a kind of poor-man's approach to kernel density analysis. Based on the chart and given the convex slope of the curve, one can see that these new groupings rapidly decline in their predictive power across the range of X wrt Y. Given that, it may well be that fitting 2 or 3 splines would provide a better fit than the main effects model as proposed.
If one chooses to bucket X, a consideration worth exploring is using the within bucket coefficient of variation instead of the std dev. The CV is the ratio of the std dev to the mean (times 100) and would result in a metric that is invariant and comparable across the levels of X. Why does this matter? Take two stock prices as an example. Stock 1 has an average price of 500 and a std dev of 100 while stock 2 has an average price of 50 and a std dev of 20. Which stock is more volatile? You can't look at the std devs in and of themselves to answer this question since they are scale dependent. The CV for stock 1 is 20 (100/500*100=20) and for stock 2 it is 40. Therefore and despite a smaller std dev, stock 2 has more inherent volatility than stock 1. To me, the advantages of a metric like this over a scale dependent std dev are clear.
Another possibility would be not to bucket X and retain its continuously distributed nature with a transformation. For instance, it could be that, again based on the chart, the relationship between X and Y is exponential. Depending on the magnitude or scale of X, exponentiation could quickly result in byte overflow (values so large they don't fit into the numeric formatting). Given that risk, transforming X first with, e.g., a natural log function and then taking the exponent would be a workaround. Other transformations of X that retain its continuous nature and compress its PDF (probability density function, i.e., its tail) are also possible. There are literally dozens, if not more, transformations available in the literature. There is a book devoted to cataloguing mathematical transformations, although I forget the title.
All of the suggestions made so far involve linear functions and models based on X. Models nonlinear in the parameters are also possible but could be hairy in terms of both specification and interpretation.
At the end of the day, the question becomes one of the relative importance of prediction vs substantive interpretation of the model results. If the focus is simply on prediction, then a "black box" model that fits the data but is opaque in meaning is permissible. If strategic insights into the relationship between X and Y are the goal, then keeping things at the level, not just of the analyst, but of the analyst's audience, is imperative. In this latter instance, highly technical solutions are to be avoided since it's almost certainly the case that the audience will be comprised of the technically semi-literate, at best, with a strong skew to technical illiteracy. It's every analyst's worst nightmare to be explaining something to an innumerate audience where they are the only person in the room who understands what they're talking about.
Of course, breaking the data into separate test and holdout samples (or k-folds) to evaluate the model fit "out-of-sample" and control for overfitting is mandated. | use standard deviation as predictor?
I understood agenis to say that, in creating the std dev, he wanted to base them on bucketed X values, not Y values. If he were asking about bucketing the Y values, then Matthew Drury would be correct |
55,152 | LASSO with two predictors | The two formulas
$$\hat\beta_1=[s/2+(\hat\beta^{(ols)}_1-\hat\beta^{(ols)}_2)/2]^+\qquad\qquad(1)$$
and
$$\hat\beta_2=[s/2-(\hat\beta^{(ols)}_1-\hat\beta^{(ols)}_2)/2]^+\qquad\qquad(2)$$
are inconsistent with the budget equation
$$\hat\beta_1+\hat\beta_2 = s$$
in the case where only one predictor is in the model. Note that I switched out your inequality with an equality because it is always better to use up your full budget to drive down some squared error. In the case where both predictors are in the model, this is consistent
$$\hat\beta_1 + \hat\beta_2 = s/2+(\hat\beta^{(ols)}_1-\hat\beta^{(ols)}_2)/2 + s/2-(\hat\beta^{(ols)}_1-\hat\beta^{(ols)}_2)/2 = s$$
but for only one predictor it is not.
In the one predictor case, you just observe that since you have to use up your entire budget, it must be entirely spent on the one predictor in the model, so your working estimates are
$$\hat\beta_1 = s $$
and
$$\hat\beta_2 = 0 $$
This happens until the continuity equation is satisfied
$$ s = s/2 + (\hat\beta^{(ols)}_1-\hat\beta^{(ols)}_2)/2 $$
after which the second parameter enters the model. There's a continuity equation for the second parameter here, but it is automatically satisfied because
$$ s/2 - (\hat\beta^{(ols)}_1-\hat\beta^{(ols)}_2)/2 = (\hat\beta^{(ols)}_1-\hat\beta^{(ols)}_2)/2 - (\hat\beta^{(ols)}_1-\hat\beta^{(ols)}_2)/2 = 0$$
So after this point, we can move on with your original set of equations and both predictors in the model.
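As a sanity check of this case analysis, here is a small R sketch (my own, not from the paper) that returns the working estimates as a function of the budget $s$, taking the OLS estimates as given and assuming $\hat\beta^{(ols)}_1 \geq \hat\beta^{(ols)}_2 \geq 0$ and $s \leq \hat\beta^{(ols)}_1 + \hat\beta^{(ols)}_2$:
lasso_two <- function(s, b1, b2) {        # b1, b2: OLS estimates with b1 >= b2 >= 0
  if (s <= b1 - b2) {                     # before the continuity point: one predictor in the model
    c(beta1 = s, beta2 = 0)
  } else {                                # after it: both predictors share the budget
    c(beta1 = s/2 + (b1 - b2)/2, beta2 = s/2 - (b1 - b2)/2)
  }
}
lasso_two(0.5, b1 = 2, b2 = 1)   # beta1 = 0.5, beta2 = 0
lasso_two(2.0, b1 = 2, b2 = 1)   # beta1 = 1.5, beta2 = 0.5, and they sum to s
At the continuity point $s = \hat\beta^{(ols)}_1 - \hat\beta^{(ols)}_2$ the two branches agree, which is exactly what the continuity equation above guarantees.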
It's been a while since I've revisited this paper, but it does seem to me that there are some errors of omission in the discussion of the two predictor case. I'll try to work through the whole discussion in detail and make sure I agree with myself.
I went through it and I do agree with myself. Here's the argument.
Tibshirani's three working equations in this section are:
$$ \hat\beta_j = (\hat\beta^o_j - \lambda)^+ \quad \text{for} \ j=1,2 $$
which I'll call the lambda equations and
$$ \hat\beta_1 + \hat\beta_2 = s$$
which I'll call the budget equation. He also makes the assumption that $\hat\beta^o_1 \geq \hat\beta^o_2 \geq 0$. We can analyze this case by case.
Case 1: $\hat\beta^o_2 \leq \hat\beta^o_1 \leq \lambda$
Here both lambda equations reduce to $\hat\beta_j = 0$. So $s = 0$ as well, and we have no budget.
Case 2: $\hat\beta^o_2 < \lambda \leq \hat\beta^o_1$
Here, one lambda equation is $\hat\beta_2 = 0$, so the budget equation reads $\hat\beta_1 = s$.
Case 3: $\lambda < \hat\beta^o_2 \leq \hat\beta^o_1$
Here both lambda equations are non-trivial, so they can be subtracted, canceling the $\lambda$s and resulting in
$$ \hat\beta_1 - \hat\beta_2 = \hat\beta^o_1 - \hat\beta^o_2 $$
This can be added to the budget equation to eliminate $\hat\beta_2$ and get
$$ 2 \hat\beta_1 = s + \hat\beta^o_1 - \hat\beta^o_2 $$
Solving this gets equation (1), and then using the budget equation again gives equation (2). | LASSO with two predictors | The two formulas
$$\hat\beta_1=[s/2+(\hat\beta^{(ols)}_1-\hat\beta^{(ols)}_2)/2]^+\qquad\qquad(1)$$
and
$$\hat\beta_2=[s/2-(\hat\beta^{(ols)}_1-\hat\beta^{(ols)}_2)/2]^+\qquad\qquad(2)$$
are inconsist | LASSO with two predictors
The two formulas
$$\hat\beta_1=[s/2+(\hat\beta^{(ols)}_1-\hat\beta^{(ols)}_2)/2]^+\qquad\qquad(1)$$
and
$$\hat\beta_2=[s/2-(\hat\beta^{(ols)}_1-\hat\beta^{(ols)}_2)/2]^+\qquad\qquad(2)$$
are inconsistent with the budget equation
$$\hat\beta_1+\hat\beta_2 = s$$
in the case where only one predictor is in the model. Note that I switched out your inequality with an equality because it is always better to use up your full budget to drive down some squared error. In the case where both predictors are in the model, this is consistent
$$\hat\beta_1 + \hat\beta_2 = s/2+(\hat\beta^{(ols)}_1-\hat\beta^{(ols)}_2)/2 + s/2-(\hat\beta^{(ols)}_1-\hat\beta^{(ols)}_2)/2 = s$$
but for only one predictor it is not.
In the one predictor case, you just observe that since you have to use up your entire budget, it must be entirely spent on the one predictor in the model, so your working estimates are
$$\hat\beta_1 = s $$
and
$$\hat\beta_2 = 0 $$
This happens until the continuity equation is satisfied
$$ s = s/2 + (\hat\beta^{(ols)}_1-\hat\beta^{(ols)}_2)/2 $$
after which the second parameter enters the model. There's a continuity equation for the second parameter here, but it is automatically satisfied because
$$ s/2 - (\hat\beta^{(ols)}_1-\hat\beta^{(ols)}_2)/2 = (\hat\beta^{(ols)}_1-\hat\beta^{(ols)}_2)/2 - (\hat\beta^{(ols)}_1-\hat\beta^{(ols)}_2)/2 = 0$$
So after this point, we can move on with your original set of equations and both predictors in the model.
It's been a while since I've revisited this paper, but it does seem to me that there are some errors of omission in the discussion of the two predictor case. I'll try to work through the whole discussion in detail and make sure I agree with myself.
I went through it and I do agree with myself. Here's the argument.
Tibshirani's three working equations in this section are:
$$ \hat\beta_j = (\hat\beta^o_j - \lambda)^+ \quad \text{for} \ j=1,2 $$
which I'll call the lambda equations and
$$ \hat\beta_1 + \hat\beta_2 = s$$
which I'll call the budget equation. He also makes the assumption that $\hat\beta^o_1 \geq \hat\beta^o_2 \geq 0$. We can analyze this case by case.
Case 1: $\hat\beta^o_2 \leq \hat\beta^o_1 \leq \lambda$
Here both lambda equations reduce to $\hat\beta_j = 0$. So $s = 0$ as well, and we have no budget.
Case 2: $\hat\beta^o_2 < \lambda \leq \hat\beta^o_1$
Here, one lambda equation is $\hat\beta_2 = 0$, so the budget equation reads $\hat\beta_1 = s$.
Case 3: $\lambda < \hat\beta^o_2 \leq \hat\beta^o_1$
Here both lambda equations are non-trivial, so they can be subtracted, canceling the $\lambda$s and resulting in
$$ \hat\beta_1 - \hat\beta_2 = \hat\beta^o_1 - \hat\beta^o_2 $$
This can be added to the budget equation to eliminate $\hat\beta_2$ and get
$$ 2 \hat\beta_1 = s + \hat\beta^o_1 - \hat\beta^o_2 $$
Solving this gets equation (1), and then using the budget equation again gives equation (2). | LASSO with two predictors
The two formulas
$$\hat\beta_1=[s/2+(\hat\beta^{(ols)}_1-\hat\beta^{(ols)}_2)/2]^+\qquad\qquad(1)$$
and
$$\hat\beta_2=[s/2-(\hat\beta^{(ols)}_1-\hat\beta^{(ols)}_2)/2]^+\qquad\qquad(2)$$
are inconsist |
55,153 | How to detect noisy datasets (bias and variance trade-off) | When noise is "large" then learning is not pointless, but it's "expensive" in some sense. For instance, you know the expression "house always wins". It means that the odds favor the casino against the gambler. However, the odds can be very close to 1:1, they may only so slightly be tilted towards the "house", e.g. 0.5%. Hence, you may call the outcomes series very noisy in some cases, yet the casinos make a ton of money in a long run. So, the fact that the data is "noisy" doesn't mean in isolation that the learning will be pointless or useless or unprofitable. | How to detect noisy datasets (bias and variance trade-off) | When noise is "large" then learning is not pointless, but it's "expensive" in some sense. For instance, you know the expression "house always wins". It means that the odds favor the casino against the | How to detect noisy datasets (bias and variance trade-off)
When noise is "large" then learning is not pointless, but it's "expensive" in some sense. For instance, you know the expression "house always wins". It means that the odds favor the casino against the gambler. However, the odds can be very close to 1:1, they may only so slightly be tilted towards the "house", e.g. 0.5%. Hence, you may call the outcomes series very noisy in some cases, yet the casinos make a ton of money in a long run. So, the fact that the data is "noisy" doesn't mean in isolation that the learning will be pointless or useless or unprofitable. | How to detect noisy datasets (bias and variance trade-off)
When noise is "large" then learning is not pointless, but it's "expensive" in some sense. For instance, you know the expression "house always wins". It means that the odds favor the casino against the |
55,154 | Is it possible for an expected value not to exist? [duplicate] | You can find the answer on Wikipedia http://en.wikipedia.org/wiki/Cauchy_distribution. However, you are right, the integral for $E(X)$ does not converge, hence it means that $E(X)$ is undefined. Whereas for $E(X^2)$ it evaluates to infinity, e.g. $E(X^2)=\infty$. Therefore, in this case $Var(X) = E(X^2)-E(X)^2$ is undefined as well, though the second moment exists and is infinite. | Is it possible for an expected value not to exist? [duplicate] | You can find the answer on Wikipedia http://en.wikipedia.org/wiki/Cauchy_distribution. However, you are right, the integral for $E(X)$ does not converge, hence it means that $E(X)$ is undefined. Where | Is it possible for an expected value not to exist? [duplicate]
You can find the answer on Wikipedia http://en.wikipedia.org/wiki/Cauchy_distribution. However, you are right, the integral for $E(X)$ does not converge, hence it means that $E(X)$ is undefined. Whereas for $E(X^2)$ it evaluates to infinity, e.g. $E(X^2)=\infty$. Therefore, in this case $Var(X) = E(X^2)-E(X)^2$ is undefined as well, though the second moment exists and is infinite. | Is it possible for an expected value not to exist? [duplicate]
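A small R sketch (not part of the original answer) makes the non-existence of $E(X)$ visible: running means of Cauchy draws never settle down the way they would for a distribution with a finite mean.
set.seed(42)
x <- rcauchy(1e5)                          # standard Cauchy draws
running_mean <- cumsum(x) / seq_along(x)
plot(running_mean, type = "l")             # keeps jumping around; no law of large numbers here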
You can find the answer on Wikipedia http://en.wikipedia.org/wiki/Cauchy_distribution. However, you are right, the integral for $E(X)$ does not converge, hence it means that $E(X)$ is undefined. Where |
55,155 | LASSO and related path algorithms | The Lasso is the solution to
$$\hat{\beta} \in argmin_\beta \frac{1}{2n}\Vert y-X\beta\Vert_2^2 + \lambda\Vert\beta\Vert_1$$
Evidently, $\hat{\beta}$ also depends on $\lambda$, so really, we could write this dependence explicitly: $\hat{\beta}(\lambda)$. Thus, we can interpret $\hat{\beta}$ as a function $\lambda\mapsto\hat{\beta}(\lambda)$, whose domain is $[0,\infty)$. The "solution path" of the Lasso is precisely this function: For each $\lambda\in[0,\infty)$, there is a vector $\hat{\beta}(\lambda)$ that solves the optimization problem above. When $\lambda=0$, you start out at the OLS solution, and as you increase $\lambda$, the Lasso solution changes to become more and more sparse, until ultimately $\lambda=+\infty$ and $\hat{\beta}(+\infty) = 0$.
Many approximate algorithms for the Lasso solve this optimization problem at some discrete values $\lambda_1,\ldots,\lambda_L$. This is simple and efficient, but it does not provide the full spectrum of possible solutions, since there are infinitely many possible choices of $\lambda$.
LARS, and other related algorithms, on the other hand, provide the full solution path for all possible $\lambda$ (i.e. infinitely many solutions). This is why they are commonly referred to as "path algorithms", since they provide a parametrized solution "path" for all $\lambda$.
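In R, for example, glmnet solves the problem on a discrete grid of $\lambda$ values, while lars computes the exact piecewise-linear path (a rough sketch; x is a placeholder predictor matrix and y a placeholder response, and both packages are assumed to be installed):
library(glmnet)
library(lars)
fit_grid <- glmnet(x, y)                 # solutions at a discrete sequence of lambda values
fit_grid$lambda                          # the grid that was used
fit_path <- lars(x, y, type = "lasso")   # exact path: the knots where the active set changes
coef(fit_path)                           # coefficients at those knots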
Note: To understand the use of the word "path", think of parametric equations and lines from calculus e.g. $f(t) = (x(t), y(t))$. As $t$ varies, $f(t)$ plots a path in space. For the Lasso, replace $t$ with $\lambda$ and we have $\hat{\beta}(\lambda) = (\hat{\beta}_1(\lambda),\ldots,\hat{\beta}_p(\lambda))$. | LASSO and related path algorithms | The Lasso is the solution to
$$\hat{\beta} \in argmin_\beta \frac{1}{2n}\Vert y-X\beta\Vert_2^2 + \lambda\Vert\beta\Vert_1$$
Evidently, $\hat{\beta}$ also depends on $\lambda$, so really, we could wr | LASSO and related path algorithms
The Lasso is the solution to
$$\hat{\beta} \in argmin_\beta \frac{1}{2n}\Vert y-X\beta\Vert_2^2 + \lambda\Vert\beta\Vert_1$$
Evidently, $\hat{\beta}$ also depends on $\lambda$, so really, we could write this dependence explicitly: $\hat{\beta}(\lambda)$. Thus, we can interpret $\hat{\beta}$ as a function $\lambda\mapsto\hat{\beta}(\lambda)$, whose domain is $[0,\infty)$. The "solution path" of the Lasso is precisely this function: For each $\lambda\in[0,\infty)$, there is a vector $\hat{\beta}(\lambda)$ that solves the optimization problem above. When $\lambda=0$, you start out at the OLS solution, and as you increase $\lambda$, the Lasso solution changes to become more and more sparse, until ultimately $\lambda=+\infty$ and $\hat{\beta}(+\infty) = 0$.
Many approximate algorithms for the Lasso solve this optimization problem at some discrete values $\lambda_1,\ldots,\lambda_L$. This is simple and efficient, but it does not provide the full spectrum of possible solutions, since there are infinitely many possible choices of $\lambda$.
LARS, and other related algorithms, on the other hand, provide the full solution path for all possible $\lambda$ (i.e. infinitely many solutions). This is why they are commonly referred to as "path algorithms", since they provide a parametrized solution "path" for all $\lambda$.
Note: To understand the use of the word "path", think of parametric equations and lines from calculus e.g. $f(t) = (x(t), y(t))$. As $t$ varies, $f(t)$ plots a path in space. For the Lasso, replace $t$ with $\lambda$ and we have $\hat{\beta}(\lambda) = (\hat{\beta}_1(\lambda),\ldots,\hat{\beta}_p(\lambda))$. | LASSO and related path algorithms
The Lasso is the solution to
$$\hat{\beta} \in argmin_\beta \frac{1}{2n}\Vert y-X\beta\Vert_2^2 + \lambda\Vert\beta\Vert_1$$
Evidently, $\hat{\beta}$ also depends on $\lambda$, so really, we could wr |
55,156 | Standard error of the combination of estimated parameters | Usually when you estimate the model you get the covariance matrix of the parameter estimates. If you assume that the parameter estimates are normally distributed (a standard assumption for large samples and small samples with normal errors), then you have the correlation coefficient between parameter estimates, their means and standard deviations. So, you can use this to plug into Gaussian ratio distribution to get an answer to your question.
Look at this paper: Marsaglia, George. "Ratios of normal variables." Journal of Statistical Software 16.4 (2006): 1-10. | Standard error of the combination of estimated parameters | Usually when you estimate the model you get the covariance matrix of the parameter estimates. If you assume that the parameter estimates are normally distributed (a standard assumption for large sampl | Standard error of the combination of estimated parameters
Usually when you estimate the model you get the covariance matrix of the parameter estimates. If you assume that the parameter estimates are normally distributed (a standard assumption for large samples and small samples with normal errors), then you have the correlation coefficient between parameter estimates, their means and standard deviations. So, you can use this to plug into Gaussian ratio distribution to get an answer to your question.
Look at this paper: Marsaglia, George. "Ratios of normal variables." Journal of Statistical Software 16.4 (2006): 1-10. | Standard error of the combination of estimated parameters
Usually when you estimate the model you get the covariance matrix of the parameter estimates. If you assume that the parameter estimates are normally distributed (a standard assumption for large sampl |
55,157 | Standard error of the combination of estimated parameters | We usually use the exact same symbol to denote the obtained estimate of a parameter (a number) and the estimator we used, which is a random variable (a function). To distinguish, I will use the following notation:
True values of unknown parameters: $\alpha,\beta$
Obtained estimates from a specific sample: $\hat \alpha, \hat \beta$
Estimators used: $a, b$.
We are interested in the variance (and then the standard error), of a function of the estimators, $h[a, b]$. We do indeed say "standard error of the estimate" but this strictly speaking is wrong: estimates are fixed numbers, they do not have a variance or a standard deviation.
We can approximate $h[a, b]$ by a first-order Taylor expansion around the obtained estimates:
$$h[a, b] \approx h[\hat \alpha, \hat \beta]\; + \;\frac {\partial h[a, b]}{\partial a}\Big|_{\{\hat \alpha, \hat \beta\}}\cdot (a - \hat \alpha)\;+\;\frac {\partial h[a, b]}{\partial b}\Big|_{\{\hat \alpha, \hat \beta\}}\cdot (b - \hat \beta)$$
Rearranging,
$$h[a, b] \approx \Big[ h[\hat \alpha, \hat \beta]\; - \;\frac {\partial h[a, b]}{\partial a}\Big|_{\{\hat \alpha, \hat \beta\}}\cdot \hat \alpha\;-\;\frac {\partial h[a, b]}{\partial b}\Big|_{\{\hat \alpha, \hat \beta\}}\cdot \hat \beta\Big]$$
$$+\;\frac {\partial h[a, b]}{\partial a}\Big|_{\{\hat \alpha, \hat \beta\}}\cdot a\;+\;\frac {\partial h[a, b]}{\partial b}\Big|_{\{\hat \alpha, \hat \beta\}}\cdot b$$
Why the re-arrangement? Because the terms in the big brackets are all fixed numbers. Fixed numbers do not have a variance and, when they enter additively, they do not affect the variance of the terms that do. So
$${\rm Var} \left(h[a, b]\right) \approx {\rm Var} \left(\frac {\partial h[a, b]}{\partial a}\Big|_{\{\hat \alpha, \hat \beta\}}\cdot a\;+\;\frac {\partial h[a, b]}{\partial b}\Big|_{\{\hat \alpha, \hat \beta\}}\cdot b\right)$$
In our case
$$h[a,b] = \frac {b}{1-a} \implies \frac {\partial h[a, b]}{\partial a} = \frac {b}{(1-a)^2} \implies \frac {\partial h[a, b]}{\partial a}\Big|_{\{\hat \alpha, \hat \beta\}} = \frac {\hat \beta}{(1-\hat \alpha)^2}$$
and
$$\frac {\partial h[a, b]}{\partial b} = \frac {1}{(1-a)} \implies \frac {\partial h[a, b]}{\partial b}\Big|_{\{\hat \alpha, \hat \beta\}} = \frac {1}{(1-\hat \alpha)}$$
Substituting, and using the standard formula for the variance of the sum of two random variables,
$${\rm Var} \left(\frac {b}{1-a}\right) \approx \left(\frac {\hat \beta}{(1-\hat \alpha)^2}\right)^2\cdot {\rm Var}(a)\;+\;\left(\frac {1}{(1-\hat \alpha)}\right)^2\cdot {\rm Var}(b) \\+\; 2\frac {\hat \beta}{(1-\hat \alpha)^2}\frac {1}{(1-\hat \alpha)}{\rm Cov}(a,b)$$
or a bit more compactly
$${\rm Var} \left(\frac {b}{1-a}\right) \approx \frac {\hat \beta^2{\rm Var}(a)}{(1-\hat \alpha)^4}\;+\;\frac {{\rm Var}(b)}{(1-\hat \alpha)^2} \;+\; \frac {2\hat \beta{\rm Cov}(a,b)}{(1-\hat \alpha)^3}$$
The variances and the covariance in the right hand side are unknown. But you have an estimate of them: the "standard errors" -squared-, and the covariance from the estimated covariance matrix obtained from the model. You plug these estimates into the last expression, together with the coefficient estimates themselves, and then you take the square root of the whole to arrive at an estimate for the magnitude you are interested in. Note the more than one source of approximation error here. | Standard error of the combination of estimated parameters | We usually use the exact same symbol to denote the obtained estimate of a parameter (a number) and the estimator we used, which is a random variable (a function). To distinguish, I will use the follow | Standard error of the combination of estimated parameters
We usually use the exact same symbol to denote the obtained estimate of a parameter (a number) and the estimator we used, which is a random variable (a function). To distinguish, I will use the following notation:
True values of unknown parameters: $\alpha,\beta$
Obtained estimates from a specific sample: $\hat \alpha, \hat \beta$
Estimators used: $a, b$.
We are interested in the variance (and then the standard error), of a function of the estimators, $h[a, b]$. We do indeed say "standard error of the estimate" but this strictly speaking is wrong: estimates are fixed numbers, they do not have a variance or a standard deviation.
We can approximate $h[a, b]$ by a first-order Taylor expansion around the obtained estimates:
$$h[a, b] \approx h[\hat \alpha, \hat \beta]\; + \;\frac {\partial h[a, b]}{\partial a}\Big|_{\{\hat \alpha, \hat \beta\}}\cdot (a - \hat \alpha)\;+\;\frac {\partial h[a, b]}{\partial b}\Big|_{\{\hat \alpha, \hat \beta\}}\cdot (b - \hat \beta)$$
Rearranging,
$$h[a, b] \approx \Big[ h[\hat \alpha, \hat \beta]\; - \;\frac {\partial h[a, b]}{\partial a}\Big|_{\{\hat \alpha, \hat \beta\}}\cdot \hat \alpha\;-\;\frac {\partial h[a, b]}{\partial b}\Big|_{\{\hat \alpha, \hat \beta\}}\cdot \hat \beta\Big]$$
$$+\;\frac {\partial h[a, b]}{\partial a}\Big|_{\{\hat \alpha, \hat \beta\}}\cdot a\;+\;\frac {\partial h[a, b]}{\partial b}\Big|_{\{\hat \alpha, \hat \beta\}}\cdot b$$
Why the re-arrangement? Because the terms in the big brackets are all fixed numbers. Fixed numbers do not have a variance and, when they enter additively, they do not affect the variance of the terms that do. So
$${\rm Var} \left(h[a, b]\right) \approx {\rm Var} \left(\frac {\partial h[a, b]}{\partial a}\Big|_{\{\hat \alpha, \hat \beta\}}\cdot a\;+\;\frac {\partial h[a, b]}{\partial b}\Big|_{\{\hat \alpha, \hat \beta\}}\cdot b\right)$$
In our case
$$h[a,b] = \frac {b}{1-a} \implies \frac {\partial h[a, b]}{\partial a} = \frac {b}{(1-a)^2} \implies \frac {\partial h[a, b]}{\partial a}\Big|_{\{\hat \alpha, \hat \beta\}} = \frac {\hat \beta}{(1-\hat \alpha)^2}$$
and
$$\frac {\partial h[a, b]}{\partial b} = \frac {1}{(1-a)} \implies \frac {\partial h[a, b]}{\partial b}\Big|_{\{\hat \alpha, \hat \beta\}} = \frac {1}{(1-\hat \alpha)}$$
Substituting, and using the standard formula for the variance of the sum of two random variables,
$${\rm Var} \left(\frac {b}{1-a}\right) \approx \left(\frac {\hat \beta}{(1-\hat \alpha)^2}\right)^2\cdot {\rm Var}(a)\;+\;\left(\frac {1}{(1-\hat \alpha)}\right)^2\cdot {\rm Var}(b) \\+\; 2\frac {\hat \beta}{(1-\hat \alpha)^2}\frac {1}{(1-\hat \alpha)}{\rm Cov}(a,b)$$
or a bit more compactly
$${\rm Var} \left(\frac {b}{1-a}\right) \approx \frac {\hat \beta^2{\rm Var}(a)}{(1-\hat \alpha)^4}\;+\;\frac {{\rm Var}(b)}{(1-\hat \alpha)^2} \;+\; \frac {2\hat \beta{\rm Cov}(a,b)}{(1-\hat \alpha)^3}$$
The variances and the covariance in the right hand side are unknown. But you have an estimate of them: the "standard errors" -squared-, and the covariance from the estimated covariance matrix obtained from the model. You plug these estimates into the last expression, together with the coefficient estimates themselves, and then you take the square root of the whole to arrive at an estimate for the magnitude you are interested in. Note the more than one source of approximation error here. | Standard error of the combination of estimated parameters
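For completeness, a small R sketch of that plug-in step (my own illustration; fit is a hypothetical fitted model whose coefficient vector contains entries named "a" and "b" playing the roles of $a$ and $b$ above):
est <- coef(fit)
V   <- vcov(fit)[c("a", "b"), c("a", "b")]          # estimated covariance of the two estimators
a_hat <- unname(est["a"]); b_hat <- unname(est["b"])
grad  <- c(b_hat / (1 - a_hat)^2,                   # derivative of b/(1-a) w.r.t. a
           1 / (1 - a_hat))                         # derivative of b/(1-a) w.r.t. b
sqrt(as.numeric(t(grad) %*% V %*% grad))            # delta-method standard error of b/(1-a)
Packaged versions of the same computation exist, e.g. deltaMethod() in the car package or deltamethod() in msm, if you prefer not to code the gradient by hand.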
We usually use the exact same symbol to denote the obtained estimate of a parameter (a number) and the estimator we used, which is a random variable (a function). To distinguish, I will use the follow |
55,158 | Modelling a binary outcome when census interval varies | An alternative and slightly easier approach is to use a complementary log-log link (cloglog), which estimates the log-hazard rather than the log-odds of outcomes such as mortality. Copying from rpubs:
A very common situation in ecology (and elsewhere) is a survival/binary-outcome model where individuals (each measured once) differ in their exposure. The classical approach to this problem is to use a complementary log-log link. The complementary log-log or "cloglog" function is $C(\mu)=\log(−\log(1−\mu))$; its inverse is $\mu=C^{−1}(\eta)=1−\exp(−\exp(\eta))$. Thus if we expect mortality $\mu_0$ over a period $\Delta t=1$ and the corresponding linear predictor is $\eta=C(\mu_0)$, then
$$
C^{−1}(\eta+\log\Delta t)=(1−\exp(−\exp(\eta) \cdot \Delta t))
$$
Some algebra shows that this is equal to $1−(1−\mu_0)^{\Delta t}$, which is what we want.
The function $\exp(−\exp(x))$ is called the Gompertz function (it is also the CDF of the extreme value distribution), so fitting a model with this inverse-link function (i.e. fitting a cloglog link to the survival, rather than the mortality, probability) is also called a gompit (or extreme value) regression.
To use this approach in R, specify family=binomial(link="cloglog") and add a term of the form offset(log(exposure)) to the formula (alternatively, some modeling functions take offset as a separate argument). For example,
glm(surv~x1+x2+offset(log(exposure)),
family=binomial(link="cloglog"),
data=my_surv_data)
where exposure is the length of time for which a given individual is exposed to the possibility of dying/failing (e.g., census interval or time between observations or total observation time).
You may also want to consider checking the model where log(exposure) is included as a covariate rather than an offset - this makes the log-hazard have a $\beta_t \log(t)$ term, or equivalently makes the hazard proportional to $t^{\beta_t}$ rather than to $t$ (I believe this makes the survival distribution Weibull rather than exponential, but I haven't checked that conclusion carefully).
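That check would look like the following (same placeholder data and variables as in the snippet above), with log(exposure) moved from the offset into the linear predictor:
glm(surv~x1+x2+log(exposure),
    family=binomial(link="cloglog"),
    data=my_surv_data)
If the estimated coefficient on log(exposure) is close to 1, the simpler offset model is probably adequate.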
Advantages of using this approach rather than Schaffer's power-logistic method:
because the exposure time is incorporated in the offset rather than in the definition of the link function itself, R handles this a bit more gracefully (for example, it will be easier to generate predictions with different exposure times from the original data set).
it is slightly older and more widely used in statistics; Googling "cloglog logistic regression" or searching for cloglog on CrossValidated will bring you to more resources.
The only disadvantage I can think of off the top of my head is that people in the nest survival world are more used to Schaffer's method. For a large enough data set you might be able to tell which link actually fits the data better (e.g. fit with both approaches and compare AIC values), but in general I doubt there's very much difference. | Modelling a binary outcome when census interval varies | An alternative and slightly easier approach is to use a complementary log-log link (cloglog), which estimates the log-hazard rather than the log-odds of outcomes such as mortality. Copying from rpubs | Modelling a binary outcome when census interval varies
An alternative and slightly easier approach is to use a complementary log-log link (cloglog), which estimates the log-hazard rather than the log-odds of outcomes such as mortality. Copying from rpubs:
A very common situation in ecology (and elsewhere) is a survival/binary-outcome model where individuals (each measured once) differ in their exposure. The classical approach to this problem is to use a complementary log-log link. The complementary log-log or "cloglog" function is $C(\mu)=\log(−\log(1−\mu))$; its inverse is $\mu=C^{−1}(\eta)=1−\exp(−\exp(\eta))$. Thus if we expect mortality $\mu_0$ over a period $\Delta t=1$ and the corresponding linear predictor is $\eta=C(\mu_0)$, then
$$
C^{−1}(\eta+\log\Delta t)=(1−\exp(−\exp(\eta) \cdot \Delta t))
$$
Some algebra shows that this is equal to $1−(1−\mu_0)^{\Delta t}$, which is what we want.
The function $\exp(−\exp(x))$ is called the Gompertz function (it is also the CDF of the extreme value distribution), so fitting a model with this inverse-link function (i.e. fitting a cloglog link to the survival, rather than the mortality, probability) is also called a gompit (or extreme value) regression.
To use this approach in R, specify family=binomial(link="cloglog") and add a term of the form offset(log(exposure)) to the formula (alternatively, some modeling functions take offset as a separate argument). For example,
glm(surv~x1+x2+offset(log(exposure)),
family=binomial(link="cloglog"),
data=my_surv_data)
where exposure is the length of time for which a given individual is exposed to the possibility of dying/failing (e.g., census interval or time between observations or total observation time).
You may also want to consider checking the model where log(exposure) is included as a covariate rather than an offset - this makes the log-hazard have a $\beta_t \log(t)$ term, or equivalently makes the hazard proportional to $t^{\beta_t}$ rather than to $t$ (I believe this makes the survival distribution Weibull rather than exponential, but I haven't checked that conclusion carefully).
Advantages of using this approach rather than Schaffer's power-logistic method:
because the exposure time is incorporated in the offset rather than in the definition of the link function itself, R handles this a bit more gracefully (for example, it will be easier to generate predictions with different exposure times from the original data set).
it is slightly older and more widely used in statistics; Googling "cloglog logistic regression" or searching for cloglog on CrossValidated will bring you to more resources.
The only disadvantage I can think of off the top of my head is that people in the nest survival world are more used to Schaffer's method. For a large enough data set you might be able to tell which link actually fits the data better (e.g. fit with both approaches and compare AIC values), but in general I doubt there's very much difference. | Modelling a binary outcome when census interval varies
An alternative and slightly easier approach is to use a complementary log-log link (cloglog), which estimates the log-hazard rather than the log-odds of outcomes such as mortality. Copying from rpubs |
55,159 | Interpreting prior and posterior | Using the same approach, you can compute the posterior probability that the coin is fair. (Do the exercise!) What do you make of the result? | Interpreting prior and posterior | Using the same approach, you can compute the posterior probability that the coin is fair. (Do the exercise!) What do you make of the result? | Interpreting prior and posterior
Using the same approach, you can compute the posterior probability that the coin is fair. (Do the exercise!) What do you make of the result? | Interpreting prior and posterior
Using the same approach, you can compute the posterior probability that the coin is fair. (Do the exercise!) What do you make of the result? |
55,160 | Interpretation of continuous variable in dummy-continuous interaction | Yes, that is correct in your case. A good way to convince yourself of that statement follows.
Say you want to find the impact of the pollution level on the log of house prices.
$$ \dfrac{\partial \ ln(housePrice)} {\partial \ pollutionLevel} = \beta_1 + \beta_3 \times D_N $$
where the impact of the pollution level on the percentage change in house prices when there is no school nearby $(D_N=0)$ is simply $\beta_1$. | Interpretation of continuous variable in dummy-continuous interaction | Yes, that is correct in your case. A good way to convince yourself of that statement follows.
Say you want to find the impact of the pollution level on the log of house prices.
$$ \dfrac{\partial \ ln | Interpretation of continuous variable in dummy-continuous interaction
Yes, that is correct in your case. A good way to convince yourself of that statement follows.
Say you want to find the impact of the pollution level on the log of house prices.
$$ \dfrac{\partial \ ln(housePrice)} {\partial \ pollutionLevel} = \beta_1 + \beta_3 \times D_N $$
where the impact of the pollution level on the percentage change in house prices when there is no school nearby $(D_N=0)$ is simply $\beta_1$. | Interpretation of continuous variable in dummy-continuous interaction
Yes, that is correct in your case. A good way to convince yourself of that statement follows.
Say you want to find the impact of the pollution level on the log of house prices.
$$ \dfrac{\partial \ ln |
55,161 | Interpretation of continuous variable in dummy-continuous interaction | One way to generally look at this is via marginal effects as in @Giaco.Metrics' response. Another general technique is a distinction of cases.
For $D_N = 0$ (no school nearby, reference group), your equation simplifies to:
$\ln(housePrice) = \beta_1 \times pollutionLevel + u$,
i.e., you have intercept 0 and slope $\beta_1$ in the reference group.
For $D_N = 1$ (school nearby), you get
$\ln(housePrice) = \beta_1 \times pollutionLevel + \beta_2 + \beta_3 \times pollutionLevel + u\\ = \beta_2 + (\beta_1 + \beta_3) \times pollutionLevel + u$,
i.e., you have intercept $\beta_2$ and slope $\beta_1 + \beta_3$ in the school group. So $\beta_2$ is the difference in intercepts and $\beta_3$ the difference in slopes. | Interpretation of continuous variable in dummy-continuous interaction | One way to generally look at this is via marginal effects as in @Giaco.Metrics' response. Another general technique is a distinction of cases.
For $D_N = 0$ (no school nearby, reference group), your e | Interpretation of continuous variable in dummy-continuous interaction
One way to generally look at this is via marginal effects as in @Giaco.Metrics' response. Another general technique is a distinction of cases.
For $D_N = 0$ (no school nearby, reference group), your equation simplifies to:
$\ln(housePrice) = \beta_1 \times pollutionLevel + u$,
i.e., you have intercept 0 and slope $\beta_1$ in the reference group.
For $D_N = 1$ (school nearby), you get
$\ln(housePrice) = \beta_1 \times pollutionLevel + \beta_2 + \beta_3 \times pollutionLevel + u\\ = \beta_2 + (\beta_1 + \beta_3) \times pollutionLevel + u$,
i.e., you have intercept $\beta_2$ and slope $\beta_1 + \beta_3$ in the school group. So $\beta_2$ is the difference in intercepts and $\beta_3$ the difference in slopes. | Interpretation of continuous variable in dummy-continuous interaction
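A short R sketch of the same bookkeeping (hypothetical data frame housing with a numeric 0/1 dummy school; not from the original question): fit the interaction model without an overall intercept and read off the pieces.
fit <- lm(log(housePrice) ~ 0 + pollutionLevel + school + pollutionLevel:school,
          data = housing)
b <- coef(fit)
b["pollutionLevel"]                                 # beta_1: slope when no school is nearby
b["pollutionLevel"] + b["pollutionLevel:school"]    # beta_1 + beta_3: slope when a school is nearby
b["school"]                                         # beta_2: difference in intercepts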
One way to generally look at this is via marginal effects as in @Giaco.Metrics' response. Another general technique is a distinction of cases.
For $D_N = 0$ (no school nearby, reference group), your e |
55,162 | Why do we need PCA whitening before feeding into autoencoder? | Natural images have a lot of variance/energy in low spatial frequency components and little variance/energy in high spatial frequency components*. When using squared Euclidean distance to evaluate the reconstruction of an autoencoder, this means that the network will focus on getting the low spatial frequencies right, since the error scales with the variance of the signal. Whitening normalizes the variances so that the network gets punished equally for errors in low and high spatial frequencies.
Whitening effectively changes the objective function. Let $C$ be the covariance of the inputs and $W$ be PCA whitening,
\begin{align}
C &= QDQ^\top, & W &= D^{-\frac{1}{2}}Q^\top.
\end{align}
Further, let $x$ be some input, $\hat x$ be the output of the autoencoder and $y = Wx$ be the whitened signal. Then
\begin{align}
||y - \hat y||_2^2
&= ||W x - W \hat x||_2^2 \\
&= (Wx - W\hat x)^\top (Wx - W\hat x) \\
&= (x - \hat x)^\top W^\top W (x - \hat x) \\
&= (x - \hat x)^\top C^{-1} (x - \hat x) \\
&= ||x - \hat x||_{C^{-1}}^2.
\end{align}
That is, by optimizing $\hat y$ instead of $\hat x$, we are effectively optimizing a particular Mahalanobis distance instead of standard Euclidean distance.
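A compact R sketch of PCA whitening for a centered data matrix X with observations in rows (my own illustration, not code from the tutorial):
C <- cov(X)                                      # sample covariance
e <- eigen(C, symmetric = TRUE)                  # C = Q D Q'
W <- diag(1 / sqrt(e$values)) %*% t(e$vectors)   # W = D^(-1/2) Q'
Y <- X %*% t(W)                                  # whitened data: cov(Y) is the identity
In practice one usually adds a small constant to the eigenvalues before taking the square root, since near-zero eigenvalues would otherwise blow up the high-frequency components.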
*see http://tdlc.ucsd.edu/images/facescheung.jpg for an illustrative example of spatial frequencies | Why do we need PCA whitening before feeding into autoencoder? | Natural images have a lot of variance/energy in low spatial frequency components and little variance/energy in high spatial frequency components*. When using squared Euclidean distance to evaluate the | Why do we need PCA whitening before feeding into autoencoder?
Natural images have a lot of variance/energy in low spatial frequency components and little variance/energy in high spatial frequency components*. When using squared Euclidean distance to evaluate the reconstruction of an autoencoder, this means that the network will focus on getting the low spatial frequencies right, since the error scales with the variance of the signal. Whitening normalizes the variances so that the network gets punished equally for errors in low and high spatial frequencies.
Whitening effectively changes the objective function. Let $C$ be the covariance of the inputs and $W$ be PCA whitening,
\begin{align}
C &= QDQ^\top, & W &= D^{-\frac{1}{2}}Q^\top.
\end{align}
Further, let $x$ be some input, $\hat x$ be the output of the autoencoder and $y = Wx$ be the whitened signal. Then
\begin{align}
||y - \hat y||_2^2
&= ||W x - W \hat x||_2^2 \\
&= (Wx - W\hat x)^\top (Wx - W\hat x) \\
&= (x - \hat x)^\top W^\top W (x - \hat x) \\
&= (x - \hat x)^\top C^{-1} (x - \hat x) \\
&= ||x - \hat x||_{C^{-1}}^2.
\end{align}
That is, by optimizing $\hat y$ instead of $\hat x$, we are effectively optimizing a particular Mahalanobis distance instead of standard Euclidean distance.
*see http://tdlc.ucsd.edu/images/facescheung.jpg for an illustrative example of spatial frequencies | Why do we need PCA whitening before feeding into autoencoder?
Natural images have a lot of variance/energy in low spatial frequency components and little variance/energy in high spatial frequency components*. When using squared Euclidean distance to evaluate the |
55,163 | Why do we need PCA whitening before feeding into autoencoder? | The tutorial only said "each variable comes from an IID Gaussian independent of the other features". Independence implies uncorrelation, but uncorrelation does not imply independence. Even if data get processed by PCA, features become uncorrelated, but it does not imply features become independent of each other. | Why do we need PCA whitening before feeding into autoencoder? | The tutorial only said "each variable comes from an IID Gaussian independent of the other features". Independence implies uncorrelation, but uncorrelation does not imply independence. Even if data get | Why do we need PCA whitening before feeding into autoencoder?
The tutorial only said "each variable comes from an IID Gaussian independent of the other features". Independence implies uncorrelation, but uncorrelation does not imply independence. Even if data get processed by PCA, features become uncorrelated, but it does not imply features become independent of each other. | Why do we need PCA whitening before feeding into autoencoder?
The tutorial only said "each variable comes from an IID Gaussian independent of the other features". Independence implies uncorrelation, but uncorrelation does not imply independence. Even if data get |
55,164 | R mtcars dataset - linear regression of MPG in Auto and Manual transmission mode | The models are equivalent. The misconception here is that overlapping confidence intervals do not mean you "failed to reject the null hypothesis." You do not compare the upper/lower bounds of confidence intervals and call it a day. Thomas' comment links1 to a good general explanation of why this is, though it doesn't directly apply to what's happening under the hood in a regression setting2.
Paul has linked to an explanation of how the lm t-statistic is calculated, and that test is a test of that coefficient against zero (e.g. $\beta_3 = 0$ and $\beta_4 = 0$). But you want to test $\beta_3 - \beta_4 = 0$.
Performing this test will then have the equivalent result to fit1 - there are quite a few R packages that do this type of test - the following uses car:
> library(car)
> linearHypothesis(fit2, "factor(am)0 = factor(am)1")
Linear hypothesis test
Hypothesis:
factor(am)0 - factor(am)1 = 0
Model 1: restricted model
Model 2: mpg ~ I(wt - mean(wt)) + I(qsec - mean(qsec)) + factor(am) -
1
Res.Df RSS Df Sum of Sq F Pr(>F)
1 29 195.46
2 28 169.29 1 26.178 4.3298 0.04672 *
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
For reference, this type of test (and more generalized versions of it) are called an absolutely silly number of different names depending on software, book, and field. (general linear hypothesis, linear contrasts, differences of least square means, regression Wald tests, more...), if you want to look up the theory and how the manual calculation is done.
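The manual calculation is just a linear contrast of the coefficient vector that uses the full estimated covariance matrix; roughly, continuing with fit2 from above (the contrast positions assume the coefficient order shown in the model output):
> K <- c(0, 0, -1, 1)                        # contrast for factor(am)1 - factor(am)0
> est <- sum(K * coef(fit2))
> se <- sqrt(t(K) %*% vcov(fit2) %*% K)      # uses the off-diagonal covariance term
> 2 * pt(abs(est / se), df.residual(fit2), lower.tail = FALSE)
For a single restriction the squared t-statistic equals the F-statistic, so this should reproduce the p-value reported by linearHypothesis above.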
1 imgur mirror of contents
2 This is because the estimated coefficients and their standard errors are not independent, and so you have to take that into account when running the test. Just running a straight t-test (which is what the theory reduces down to in this case) on the given values, without taking this into account, results in the incorrect:
> stderr = summary(fit2)$coefficients[, "Std. Error"]
> delta = fit2$coefficients[4]-fit2$coefficients[3]
> deltstat = delta / sqrt(stderr[4]^2 + stderr[3]^2)   # ignores the covariance of the two estimates
> delpval = 2*pt(deltstat, df=df.residual(fit2), lower.tail = FALSE)
> delpval
0.01968865 | R mtcars dataset - linear regression of MPG in Auto and Manual transmission mode | The models are equivalent. The misconception here is that overlapping confidence intervals do not mean you "failed to reject the null hypothesis." You do not compare the upper/lower bounds of confiden | R mtcars dataset - linear regression of MPG in Auto and Manual transmission mode
The models are equivalent. The misconception here is that overlapping confidence intervals do not mean you "failed to reject the null hypothesis." You do not compare the upper/lower bounds of confidence intervals and call it a day. Thomas' comment links1 to a good general explanation of why this is, though it doesn't directly apply to what's happening under the hood in a regression setting2.
Paul has linked to an explanation of how the lm t-statistic is calculated, and that test is a test of that coefficient against zero (e.g. $\beta_3 = 0$ and $\beta_4 = 0$). But you want to test $\beta_3 - \beta_4 = 0$.
Performing this test will then have the equivalent result to fit1 - there are quite a few R packages that do this type of test - the following uses car:
> library(car)
> linearHypothesis(fit2, "factor(am)0 = factor(am)1")
Linear hypothesis test
Hypothesis:
factor(am)0 - factor(am)1 = 0
Model 1: restricted model
Model 2: mpg ~ I(wt - mean(wt)) + I(qsec - mean(qsec)) + factor(am) -
1
Res.Df RSS Df Sum of Sq F Pr(>F)
1 29 195.46
2 28 169.29 1 26.178 4.3298 0.04672 *
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
For reference, this type of test (and more generalized versions of it) are called an absolutely silly number of different names depending on software, book, and field. (general linear hypothesis, linear contrasts, differences of least square means, regression Wald tests, more...), if you want to look up the theory and how the manual calculation is done.
1 imgur mirror of contents
2 This is because the estimated coefficients and their standard errors are not independent, and so you have to take that into account when running the test. Just running a straight t-test (which is what the theory reduces down to in this case) on the given values, without taking this into account, results in the incorrect:
> stderr = summary(fit2)$coefficients[, "Std. Error"]
> delta = fit2$coefficients[4]-fit2$coefficients[3]
> deltstat = delta / sqrt(stderr[4]^2 + stderr[3]^2)   # ignores the covariance of the two estimates
> delpval = 2*pt(deltstat, df=df.residual(fit2), lower.tail = FALSE)
> delpval
0.01968865 | R mtcars dataset - linear regression of MPG in Auto and Manual transmission mode
The models are equivalent. The misconception here is that overlapping confidence intervals do not mean you "failed to reject the null hypothesis." You do not compare the upper/lower bounds of confiden |
55,165 | R mtcars dataset - linear regression of MPG in Auto and Manual transmission mode | In these linear models, the null hypothesis is actually that there is no effect,i.e. the coefficient is equal to zero. Therefore, in the output of summary(lm()) the conclusion is that there is no significant difference between automatic and manual transmission. And this is in line with your observation that the confidence intervals overlap in your second model. See also
Interpretation of R's lm() output. | R mtcars dataset - linear regression of MPG in Auto and Manual transmission mode | In these linear models, the null hypothesis is actually that there is no effect,i.e. the coefficient is equal to zero. Therefore, in the output of summary(lm()) the conclusion is that there is no sign | R mtcars dataset - linear regression of MPG in Auto and Manual transmission mode
In these linear models, the null hypothesis is actually that there is no effect,i.e. the coefficient is equal to zero. Therefore, in the output of summary(lm()) the conclusion is that there is no significant difference between automatic and manual transmission. And this is in line with your observation that the confidence intervals overlap in your second model. See also
Interpretation of R's lm() output. | R mtcars dataset - linear regression of MPG in Auto and Manual transmission mode
In these linear models, the null hypothesis is actually that there is no effect,i.e. the coefficient is equal to zero. Therefore, in the output of summary(lm()) the conclusion is that there is no sign |
55,166 | Why does Pearson's r have a non-normal sampling distribution at high values of ρ? | This is just a short answer without mathematical details, but:
Because if your $r$ is high you get a non-symmetric distribution. For example, if the real correlation is $0.9$ you might, by chance, observe a sample correlation of $0.75$. But you will never observe a sample correlation of $1.05$. Generally many normal approximations (e.g. for binomial proportions) don't work well when the real value is close to the bounds of your statistic.
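A quick simulation in R makes the skewness easy to see (my own illustration): draw many samples of size 20 from a bivariate normal with $\rho = 0.9$ and look at the sample correlations.
library(MASS)
set.seed(1)
Sigma <- matrix(c(1, 0.9, 0.9, 1), 2, 2)
r <- replicate(5000, cor(mvrnorm(20, mu = c(0, 0), Sigma = Sigma))[1, 2])
hist(r, breaks = 50)    # piled up just below 1 with a long left tail, clearly not normal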
If you just test against $r = 0$ you don't need to worry about the sampling distribution being skewed due to high $r$. It becomes relevant when you want to test $r > 0.7$ (for example) or if you want to give a confidence interval for $r$. You always want to give confidence intervals. | Why does Pearson's r have a non-normal sampling distribution at high values of ρ? | This is just a short answer without mathematical details, but:
Because if your $r$ is high you get a non-symmetric distribution. For example, if the real correlation is $0.9$ you might, by chance, ob | Why does Pearson's r have a non-normal sampling distribution at high values of ρ?
This is just a short answer without mathematical details, but:
Because if your $r$ is high you get a non-symmetric distribution. For example, if the real correlation is $0.9$ you might, by chance, observe a sample correlation of $0.75$. But you will never observe a sample correlation of $1.05$. Generally many normal approximations (e.g. for binomial proportions) don't work well when the real value is close to the bounds of your statistic.
If you just test against $r = 0$ you don't need to worry about the sampling distribution being skewed due to high $r$. It becomes relevant when you want to test $r > 0.7$ (for example) or if you want to give a confidence interval for $r$. You always want to give confidence intervals. | Why does Pearson's r have a non-normal sampling distribution at high values of ρ?
This is just a short answer without mathematical details, but:
Because if your $r$ is high you get a non-symmetric distribution. For example, if the real correlation is $0.9$ you might, by chance, ob |
55,167 | Finding UMVUE of Bernoulli random variables | The idea is that you can start with any estimator of $(1-\theta)^2$, no matter how awful, provided it is unbiased. The Rao-Blackwell process will almost magically turn it into a uniformly minimum-variance unbiased estimator (UMVUE).
There are many ways to proceed. One fruitful idea is systematically to remove the complications in the expression "$(1-\theta)^2$". This leads to a sequence of questions:
How to find an unbiased estimator of $(1-\theta)^2$?
How to find an unbiased estimator of $1-\theta$?
How to find an unbiased estimator of $\theta$?
The answer to (3), at least, should be obvious: any of the $X_i$ will be an unbiased estimator because
$$\mathbb{E}_\theta(X_i) = \theta.$$
(It doesn't matter how you come up with this estimator: by guessing and checking (which often works), Maximum Likelihood, or whatever. ML, incidentally, often is not helpful because it tends to produce biased estimators. What is helpful is the extreme simplicity of working with a single observation rather than a whole bunch of them.)
Linearity of expectation tells us an answer to (2) would be any of the $1 - X_i$, because
$$\mathbb{E}_\theta(1-X_i) = 1 - \mathbb{E}_\theta(X_i) = 1-\theta.$$
Getting from this to an answer to (1) is the crux of the matter. At some point you will need to exploit the fact you have more than one independent realization of this Bernoulli variable, because it quickly becomes obvious that a single $0-1$ observation just can't tell you much. For instance, the square of $1-X_1$ won't work, because (since $(1-X_1)^2 = (1-X_1)$)
$$\mathbb{E}_\theta((1-X_1)^2) = 1-\theta.$$
What could be done with two of the observations, such as $X_1$ and $X_2$? A little thought might eventually suggest considering their product. Sure enough, because $X_1$ and $X_2$ are independent, their expectations multiply:
$$\mathbb{E}_\theta((1-X_1)(1-X_2)) = \mathbb{E}_\theta((1-X_1))\mathbb{E}_\theta((1-X_2)) = (1-\theta)(1-\theta)=(1-\theta)^2.$$
You're now good to go: apply the Rao-Blackwell process to the unbiased estimator $T=(1-X_1)(1-X_2)$ to obtain an UMVUE. (That is, find its expectation conditional on $\sum X_i$.) I'll stop here so that you can have the fun of discovering the answer for yourself: it's marvelous to see what kinds of formulas can emerge from this process.
To illustrate the calculation let's take the simpler case of three, rather than four, $X_i$. The sum $S=X_1+X_2+X_3$ counts how many of the $X_i$ equal $1$. Look at the four possibilities:
When $S=0$, all the $X_i=0$ and $T = 1$ constantly, whence $\mathbb{E}(T\,|\,S=0)=1$.
When $S=1$, there are three possible configurations of the $X_i$: $(1,0,0)$, $(0,1,0)$, and $(0,0,1)$. All are equally likely, giving each a chance of $1/3$. The value of $T$ is $0$ for the first two and $1$ for the last. Therefore
$$\mathbb{E}(T\,|\,S=1) = \left(\frac{1}{3}\right)\left(0\right)+\left(\frac{1}{3}\right)\left(0\right)+\left(\frac{1}{3}\right)\left(1\right) = \frac{1}{3}.$$
When $S=2$ or $S=3$, $T=0$ no matter what order the $X_i$ appear in, giving $0$ for the conditional expectation.
The Rao-Blackwellized version of $T$, then, is the estimator that associates with the sum $S$ the following guesses for $\theta$:
$$\tilde T(0)=1,\ \tilde T(1)=1/3,\ \tilde T(2)=\tilde T(3)=0.$$
As a check, the expectation of $\tilde T$ can be computed as
$$\eqalign{
\mathbb{E}(\tilde T) &= \Pr(S=0)\tilde{T}(0) + \Pr(S=1)\tilde{T}(1) + \Pr(S=2)\tilde{T}(2) + \Pr(S=3)\tilde{T}(3) \\
&= (1-\theta)^3 + \binom{3}{1}\theta(1-\theta)^2\left(1/3\right) + 0 + 0 \\
&= 1 - 3 \theta + 3 \theta^2 - \theta^3 + 3(1/3)(\theta - 2\theta^2 + \theta^3) \\
&= 1 - 2\theta + \theta^2 \\
&=(1-\theta)^2,
}$$
showing it is unbiased. A similar calculation will obtain its variance (which is useful to know, since it is supposed to be the smallest possible variance among unbiased estimators).
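This is easy to verify by brute force in R, enumerating all $2^3$ possible samples (a small sketch of my own):
grid <- expand.grid(x1 = 0:1, x2 = 0:1, x3 = 0:1)            # all 8 possible samples
theta <- 0.3                                                  # any value in (0, 1) works
S <- rowSums(grid)
p <- theta^S * (1 - theta)^(3 - S)                            # probability of each sample
T_tilde <- c(`0` = 1, `1` = 1/3, `2` = 0, `3` = 0)[as.character(S)]
sum(p * T_tilde)                                              # 0.49, i.e. (1 - theta)^2
Replacing T_tilde by T_tilde^2 in the last line gives the second moment, from which the variance follows.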
Note that these calculations required little more than applying the definition of expectation and computing binomial probabilities. | Finding UMVUE of Bernoulli random variables | The idea is that you can start with any estimator of $(1-\theta)^2$, no matter how awful, provided it is unbiased. The Rao-Blackwell process will almost magically turn it into a uniformly minimum-var | Finding UMVUE of Bernoulli random variables
The idea is that you can start with any estimator of $(1-\theta)^2$, no matter how awful, provided it is unbiased. The Rao-Blackwell process will almost magically turn it into a uniformly minimum-variance unbiased estimator (UMVUE).
There are many ways to proceed. One fruitful idea is systematically to remove the complications in the expression "$(1-\theta)^2$". This leads to a sequence of questions:
How to find an unbiased estimator of $(1-\theta)^2$?
How to find an unbiased estimator of $1-\theta$?
How to find an unbiased estimator of $\theta$?
The answer to (3), at least, should be obvious: any of the $X_i$ will be an unbiased estimator because
$$\mathbb{E}_\theta(X_i) = \theta.$$
(It doesn't matter how you come up with this estimator: by guessing and checking (which often works), Maximum Likelihood, or whatever. ML, incidentally, often is not helpful because it tends to produce biased estimators. What is helpful is the extreme simplicity of working with a single observation rather than a whole bunch of them.)
Linearity of expectation tells us an answer to (2) would be any of the $1 - X_i$, because
$$\mathbb{E}_\theta(1-X_i) = 1 - \mathbb{E}_\theta(X_i) = 1-\theta.$$
Getting from this to an answer to (1) is the crux of the matter. At some point you will need to exploit the fact you have more than one independent realization of this Bernoulli variable, because it quickly becomes obvious that a single $0-1$ observation just can't tell you much. For instance, the square of $1-X_1$ won't work, because (since $(1-X_1)^2 = (1-X_1)$)
$$\mathbb{E}_\theta((1-X_1)^2) = 1-\theta.$$
What could be done with two of the observations, such as $X_1$ and $X_2$? A little thought might eventually suggest considering their product. Sure enough, because $X_1$ and $X_2$ are independent, their expectations multiply:
$$\mathbb{E}_\theta((1-X_1)(1-X_2)) = \mathbb{E}_\theta((1-X_1))\mathbb{E}_\theta((1-X_2)) = (1-\theta)(1-\theta)=(1-\theta)^2.$$
You're now good to go: apply the Rao-Blackwell process to the unbiased estimator $T=(1-X_1)(1-X_2)$ to obtain an UMVUE. (That is, find its expectation conditional on $\sum X_i$.) I'll stop here so that you can have the fun of discovering the answer for yourself: it's marvelous to see what kinds of formulas can emerge from this process.
To illustrate the calculation let's take the simpler case of three, rather than four, $X_i$. The sum $S=X_1+X_2+X_3$ counts how many of the $X_i$ equal $1$. Look at the four possibilities:
When $S=0$, all the $X_i=0$ and $T = 1$ constantly, whence $\mathbb{E}(T\,|\,S=0)=1$.
When $S=1$, there are three possible configurations of the $X_i$: $(1,0,0)$, $(0,1,0)$, and $(0,0,1)$. All are equally likely, giving each a chance of $1/3$. The value of $T$ is $0$ for the first two and $1$ for the last. Therefore
$$\mathbb{E}(T\,|\,S=1) = \left(\frac{1}{3}\right)\left(0\right)+\left(\frac{1}{3}\right)\left(0\right)+\left(\frac{1}{3}\right)\left(1\right) = \frac{1}{3}.$$
When $S=2$ or $S=3$, $T=0$ no matter what order the $X_i$ appear in, giving $0$ for the conditional expectation.
The Rao-Blackwellized version of $T$, then, is the estimator that associates with the sum $S$ the following guesses for $\theta$:
$$\tilde T(0)=1,\ \tilde T(1)=1/3,\ \tilde T(2)=\tilde T(3)=0.$$
As a check, the expectation of $\tilde T$ can be computed as
$$\eqalign{
\mathbb{E}(\tilde T) &= \Pr(S=0)\tilde{T}(0) + \Pr(S=1)\tilde{T}(1) + \Pr(S=2)\tilde{T}(2) + \Pr(S=3)\tilde{T}(3) \\
&= (1-\theta)^3 + \binom{3}{1}\theta(1-\theta)^2\left(1/3\right) + 0 + 0 \\
&= 1 - 3 \theta + 3 \theta^2 - \theta^3 + 3(1/3)(\theta - 2\theta^2 + \theta^3) \\
&= 1 - 2\theta + \theta^2 \\
&=(1-\theta)^2,
}$$
showing it is unbiased. A similar calculation will obtain its variance (which is useful to know, since it is supposed to be the smallest possible variance among unbiased estimators).
Note that these calculations required little more than applying the definition of expectation and computing binomial probabilities. | Finding UMVUE of Bernoulli random variables
The idea is that you can start with any estimator of $(1-\theta)^2$, no matter how awful, provided it is unbiased. The Rao-Blackwell process will almost magically turn it into a uniformly minimum-var |
55,168 | Definition of $X_t$ in the context of Stochastic process and Time Series | Parameters in a statistical sense are not realizations of a random variable:
A statistical parameter is a parameter that indexes a family of
probability distributions. It can be regarded as a numerical
characteristic of a population or a statistical model.
So $T$ will simply be some parameter space (for stochastic processes typically an interval in $\mathbb{R}$).
Usually, a statistician's first association on reading the word "parameter" is to want to estimate and/or do inference on them, for instance
for the mean or variance of a normal distribution we have sampled from (the parameter space is $\mathbb{R}\times\mathbb{R}_{>0}$)
or for regression coefficients (the parameter space is $\mathbb{R}^{p+1}$ if you have $p$ regressors and an intercept).
However, it need not be the case that we want to estimate or do inference. I have a hard time imagining a use case where we would like to estimate the time $t\in T$ at which observations in a stochastic process were sampled. However, you do have a somewhat similar question when dealing with mixture models, where you do think about deducing which of multiple component densities a particular observation came from (although I have never seen anyone do formal inference on this - usually you just try to understand the entire mixture).
In any case, the $t\in T$ does satisfy the condition of "indexing a family of probability distributions", namely the $X_t$, and so it is a bona fide parameter. Of course each $X_t$ may have additional parameters that we do want to estimate or infer, e.g., in (G)ARCH modeling.
(Incidentally, in stochastic processes, $X_t$ usually denotes a random variable, namely the process at time $t$, rather than an outcome at time $t$ - it sounds a bit like you are conflating the two.)
55,169 | Definition of $X_t$ in the context of Stochastic process and Time Series | The term parameter can have different meanings in different settings. In statistics it is usually a property of a random variable which we want to estimate. In mathematics it is simply a property of a mathematical object which is not constant. The stochastic processes literature usually follows mathematical conventions, so saying that $t$ is a parameter is perfectly fine.
The terminology is loosely related to terminology in calculus. When $t$ is a whole number we call it an index, but when it is a real number, i.e. continuous, we call it a parameter.
The beauty of the definition of a stochastic process is that the index set $T$ can be almost anything. As long as the finite-dimensional distributions of the process satisfy the conditions of Kolmogorov's extension theorem, $\{X_t, t\in T\}$ is a well-defined mathematical object. The parameter $t$ can be a natural number, a real number, a vector, or a set.
55,170 | Is the cumulative incidence function just an inverted Kaplan-Meier survival curve? | If you're talking about the cumulative incidence function that arises from a Kaplan-Meier estimator, then it's just $1 - S(t)$ where $t$ is time. In R:
library(survival)
fit <- survfit(Surv(time, status) ~ x, data = aml)
plot(fit) # the standard survival curve
plot(fit, fun="event") # the cumulative incidence curve
But the CIF is more commonly used in other contexts, for example in competing-risks analysis using cause-specific Cox regression. There, the CIF is not necessarily equal to $1-S(t)$ because it takes the competing risks into account: it estimates the probability that the event of interest occurs by time $t$, allowing for the fact that a subject who first experiences a competing event can no longer experience the event of interest.
55,171 | Estimating conditional effect of logistic regression | You can include terms with fixed coefficients using an offset. Technically, an offset is a predictor with coefficient fixed at 1, so you will first need to create a new variable that has the linear combination of the $X$'s with coefficients estimated from the first model.
Model for first data set:
$$logit(P(Y=1)) = \beta_1X_1 + \dots + \beta_kX_k,$$
giving estimates $\hat\beta_i$.
Model for second data set:
$$logit(P(Y=1)) = \gamma Z + 1\cdot(\hat\beta_1X_1 + \dots + \hat\beta_kX_k),$$
giving estimate of $\hat\gamma$ with standard errors and all the inference.
Most software for generalized linear models (such as logistic regression) has a way to include an offset. It is most commonly used for Poisson regression, but as you can see, it can be useful for other situations as well.
Here is how this would work in R:
set.seed(456724)
# first data set
dd1 <- data.frame(X1=rnorm(50), X2=rnorm(50), Z=rnorm(50))
dd1$Y <- with(dd1, rbinom(50, size=1, p=1/(1+exp(-2-X1+2*X2-Z))))
# first model fitted with only X1 and X2
mod1 <- glm(Y ~ X1 + X2, family="binomial", data=dd1)
# second data set
dd2 <- data.frame(X1=rnorm(50), X2=rnorm(50), Z=rnorm(50))
dd2$Y <- with(dd2, rbinom(50, size=1, p=1/(1+exp(-2-X1+2*X2-Z))))
# linear predictor based on mod1
dd2$pred1 <- predict(mod1, newdata=dd2, type = "link")
# use X1-X2 based predictor as offset
mod2 <- glm(Y ~ Z+ offset(pred1), data=dd2, family="binomial")
summary(mod2)
The output is:
Call:
glm(formula = Y ~ Z + offset(pred1), family = "binomial", data = dd2)
Deviance Residuals:
Min 1Q Median 3Q Max
-1.6432 -0.5135 0.2310 0.4945 1.5744
Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -0.205 0.440 -0.466 0.6413
Z 1.070 0.418 2.560 0.0105 *
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
(Dispersion parameter for binomial family taken to be 1)
Null deviance: 38.598 on 49 degrees of freedom
Residual deviance: 30.536 on 48 degrees of freedom
AIC: 34.536
You can't really see the presence of the offset here, but for comparison, here is the output without an offset term:
Call:
glm(formula = Y ~ Z, family = "binomial", data = dd2)
Deviance Residuals:
Min 1Q Median 3Q Max
-2.1183 -1.1191 0.5753 0.8910 1.6962
Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) 1.0911 0.3767 2.896 0.00377 **
Z 0.9369 0.3888 2.410 0.01597 *
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
(Dispersion parameter for binomial family taken to be 1)
Null deviance: 62.687 on 49 degrees of freedom
Residual deviance: 54.906 on 48 degrees of freedom
AIC: 58.906
The estimates of the coefficient of $Z$ happen to be pretty close to each other, but you can see that the deviance is quite different, and the residual deviance is much lower when the additional predictor is included.
55,172 | Why is the p-value for Cohen's $d$ not equal to the p-value of a t-test? | Cohen's d is a measure of the standardized difference in means.
$d = (\bar{X}_1 - \bar{X}_2)/\sigma$
So the null hypothesis tests whether this standardized difference is equal to zero. This is different from the original null hypothesis which tests whether the non-standardized difference in means is equal to zero.
It might help to see how Cohen's d can be calculated using t. You can find the details in the compute.es package documentation (http://cran.r-project.org/web/packages/compute.es/compute.es.pdf):
$t = d\,\sqrt{n_1 n_2/(n_1 + n_2)}$
rearranging will give you d.
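For instance (with made-up numbers, purely to illustrate the conversion in R):
t_val <- 2.1; n1 <- 12; n2 <- 15
d <- t_val / sqrt(n1 * n2 / (n1 + n2))   # rearranged from t = d * sqrt(n1*n2/(n1+n2))
d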
[Addition to the above:]
The d = 1.67 tells you that the difference between the two groups is about one and two-thirds of a standard deviation. The original p=0.05657 was calculated for the non-standardized difference in means. The t-statistic for this difference follows a 'central t-distribution' - that is, it is symmetric around 0. 'Central t-distributions' have one parameter: the degrees of freedom.
The difference with the effect size is that the t-statistic for it is non-centrally distributed (it is not symmetric around 0). 'Non-central t-distributions' have two parameters: the degrees of freedom and a 'non-centrality' parameter. Rather than getting into further details, you can find a straightforward introduction here:
Cumming, G. & Finch, S. (2001) A primer on the understanding, use, and calculation of confidence intervals that are based on central and noncentral distributions. Educational and Psychological Measurement, 61, 633-649.
They give a readable account of why the effect size has a non-central t-distribution on pages 549-551.
Also: you asked for an explanation of the p-value. P-values depend on the distribution of the test statistic which I've addressed in the above (the details of which are in the reference). Hopefully that helps! If you want an explanation of what a p-value is generally, then likely that would require a different post and has probably been asked elsewhere.
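As a concrete, hedged illustration of the noncentral-t idea, here is a sketch of how a confidence interval for $d$ can be obtained by inverting the noncentral t-distribution, along the lines of Cumming & Finch; t_obs, n1 and n2 are made-up values, not taken from the question.
t_obs <- 2.1; n1 <- 10; n2 <- 10
df <- n1 + n2 - 2
sc <- sqrt(n1 * n2 / (n1 + n2))                                       # t = delta * sc
ncp_lo <- uniroot(function(ncp) pt(t_obs, df, ncp) - 0.975, c(-10, 10))$root
ncp_hi <- uniroot(function(ncp) pt(t_obs, df, ncp) - 0.025, c(-10, 10))$root
c(lower = ncp_lo, upper = ncp_hi) / sc                                # approximate 95% CI for d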
55,173 | Some basic questions related to the moments of a probability distribution | Existence. If we have a random variable $x$ with the density function $f(x)$, then we know that $$\int_{-\infty}^\infty f(x) dx=1$$ rewrite this as follows $$1=\int_{-\infty}^\infty 1\cdot f(x) dx\ge |\int_{-\infty}^\infty(\cos tx + i \sin tx)\cdot f(x) dx|$$ simply because $$1\ge |\cos tx + i \sin tx|$$
Hence, the characteristic function always exists: $$E[e^{itx}]\equiv \int_{-\infty}^\infty e^{itx} f(x)\, dx$$
This is not a proof that a proper mathematician would accept, but you wanted an intuition. Basically, $e^{itx}$ has modulus at most $1$, so the integrand is no larger in absolute value than the density itself, and we already know that integrating the density against the constant $1$ gives a finite result.
2.
Symmetry is easy.
Skew is defined as $\int_{-\infty}^\infty x^3 f(x)\,dx$, so if your density is symmetric, i.e. $f(x)=f(-x)$, the skew has got to be zero because: $$\int_{-\infty}^0 x^3 f(x)\, dx= -\int_0^\infty x^3 f(x)\,dx$$
Note, @whuber's comment: skew=0 is necessary for symmetry, but it is not sufficient. You can easily construct an asymmetric distribution with zero skew. All you need is for the contributions from the left of zero to cancel those from the right; the shapes of the two sides don't have to be the same.
I'm assuming the mean is zero here, i.e. symmetry is around the origin, but it's not important, you can always re-center.
Kurtosis is defined (again taking the mean to be zero) as $$\frac{\int_{-\infty}^\infty x^4 f(x)\,dx}{\left(\int_{-\infty}^\infty x^2 f(x)\,dx\right)^2}.$$ If this were a discrete case then the numerator would have terms like $x_i^4 p_i$, where $p_i$ is the probability of a value $x_i$. On the other hand, the denominator would have terms like $x_i^4 p_i^2$. So the tail values with power 4 enter with weight $p_i$ in the numerator, but with weight $p_i^2$ in the denominator. A heavy tail will make the kurtosis larger.
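A quick numerical illustration of the tail effect (my own sketch; zero-mean samples so that the ratio above applies directly):
set.seed(1)
kurt <- function(x) mean(x^4) / mean(x^2)^2
kurt(rnorm(1e6))        # normal: close to 3
kurt(rt(1e6, df = 5))   # heavier-tailed t(5): noticeably larger (9 in theory)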
@whuber noticed an error in my previous explanation of kurtosis.
55,174 | Biostatistics book for mathematician | In addition to the already recommended Frank Harrell's nice book (which I look forward to reading), I would like to share the following - and hopefully relevant - resources:
Book "Regression Methods in Biostatistics: Linear, Logistic, Survival, and Repeated Measures Models". The examples are in Stata, in case, if you care about that. Here's the Amazon link - by the way, this is the Hardcover edition, which is strangely cheaper than Paperback one.
Some interesting general and focused statistical recommended reading sources, shared by Vanderbilt University's Department of Biostatistics.
Information from selected biostatistics classes, shared by professor Ingo Ruczinski at Johns Hopkins University (Bloomberg School of Public Health's Biostatistics Department).
In regard to job search, recently I've run across several positions at Google (yes, that's right!) in bioinformatics, biostatistics and related areas. For example, see this position and this position (keep in mind that those positions are not located in Europe, but in Silicon Valley).
Book "Regression Methods | Biostatistics book for mathematician
In addition to the already recommended Frank Harrell's nice book (which I look forward to reading), I would like to share the following - and hopefully relevant - resources:
Book "Regression Methods in Biostatistics: Linear, Logistic, Survival, and Repeated Measures Models". The examples are in Stata, in case, if you care about that. Here's the Amazon link - by the way, this is the Hardcover edition, which is strangely cheaper than Paperback one.
Some interesting general and focused statistical recommended reading sources, shared by Vanderbilt University's Department of Biostatistics.
Information from selected biostatistics classes, shared by professor Ingo Ruczinski at Johns Hopkins University (Bloomberg School of Public Health's Biostatistics Department).
In regard to job search, recently I've run across several positions at Google (yes, that's right!) in bioinformatics, biostatistics and related areas. For example, see this position and this position (keep in mind that those positions are not located in Europe, but in Silicon Valley). | Biostatistics book for mathematician
In addition to the already recommended Frank Harrell's nice book (which I look forward to reading), I would like to share the following - and hopefully relevant - resources:
Book "Regression Methods |
55,175 | Biostatistics book for mathematician | You mention at the end of your Q that "I am thinking of selling myself as a biostatistician" in the industry in Europe. In that case, it would make more sense to read the regulatory guidelines and books about the drug development process, and to sharpen your SAS skills and knowledge of CDISC, SDTM and ADaM. Large randomised controlled trials tend not to demand sophisticated, complex analyses. A pharma company will want a simple, cogent, unambiguous analysis to persuade the FDA after all. They spend $$$ running trials, which means issues such as missing data, which can complicate the analysis, are minimised (by data monitoring). There is also a fondness for convention in the industry because convention leads to efficiency (e.g. re-using SAS code and much standardisation). Therefore you'd do better to read a simple book like Pocock's book on clinical trials, and if you're ambitious then Stephen Senn's Statistical Issues in Drug Development. That would bring you up to speed. And spend the time you save reading points-to-consider documents on the EMA website: EMA guidelines
55,176 | Biostatistics book for mathematician | You need to start with something on the applied side, like Frank Harrell: "Regression Modeling Strategies: With Applications to Linear Models, Logistic Regression, and Survival Analysis (Springer Series in Statistics) "
Then you can go deep into the underlying mathematics with something like Per Kragh Andersen and Ørnulf Borgan: "Statistical Models Based on Counting Processes (Springer Series in Statistics)"
and maybe supplement this with some book about R.
55,177 | Should I find this big parameter suspicious? | If $X$ has a logistic distribution with location parameter $\mu$ and scale parameter $\sigma$
$$\newcommand{\e}{\mathrm{e}}f(x) = \frac{\exp\left(\frac{x-\mu}{\sigma}\right)}{\sigma \left[1+ \exp\left(\frac{x-\mu}{\sigma}\right)\right]^2}$$
then $Y=\log(X)$ has a log-logistic distribution
$$f(y) = \frac{ \frac{\sigma^{-1}}{\e^\mu}\cdot \left(\frac{y}{\e^\mu}\right)^{\sigma^{-1}-1}}{\left[1 + \left(\frac{y}{\e^\mu}\right)^{\sigma^{-1}}\right]^2}$$
whose scale is not $\mu$ but $\e^\mu$. Though $\sigma$ might as well be called the shape, a common parametrization uses scale $\beta=\e^\mu$ & shape $\alpha=\sigma^{-1}$
$$f(y) = \frac{\frac{\alpha}{\beta}\cdot \left(\frac{y}{\beta}\right)^{\alpha-1}}{\left[1 + \left(\frac{y}{\beta}\right)^{\alpha}\right]^2}$$
You're certainly not reporting an estimate of scale, as that also estimates the distribution median & 8.29 seems far too low. If you're reporting $\hat\mu=8.294636$ and $\hat\sigma=1.667393$, then $\hat\alpha=0.59973$ & $\hat\beta=4002.3$; remarkably close to what your colleague is reporting.†
Plotting the likelihood is always a good idea. Here it is from a simulated sample, but you can use the real one:
You can see at a glance whether the maximum found by an algorithm is plausible.
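For readers who want to reproduce that kind of plot, here is a rough sketch (my own code, with purely illustrative parameter values), exploiting the fact that the log-logistic density is the logistic density of $\log x$ divided by $x$:
set.seed(1)
x <- exp(rlogis(500, location = 8.3, scale = 1.7))    # simulated log-logistic sample
loglik <- function(mu, sigma) sum(dlogis(log(x), location = mu, scale = sigma, log = TRUE) - log(x))
mu_grid    <- seq(7, 10, length.out = 60)
sigma_grid <- seq(1, 2.5, length.out = 60)
ll <- outer(mu_grid, sigma_grid, Vectorize(loglik))
contour(mu_grid, sigma_grid, ll, xlab = expression(mu), ylab = expression(sigma))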
† This is more detective work than Statistics (you should've included details in the question), but findFn (from the sos package) suggests that the pllog function you're using is from the FAdist package. The documentation is rather vague:—
If Y is a random variable distributed according to a logistic
distribution (with location and scale parameters), then $X = exp(Y)$
has a log-logistic distribution with shape and scale parameters
corresponding to the scale and location parameteres [sic] of Y,
respectively.
"Corresponding to" gives the impression of having been chosen to avoid "equal to", but the code for pllog is
function (q, shape = 1, scale = 1, lower.tail = TRUE, log.p = FALSE)
{
    Fx <- plogis(log(q), location = scale, scale = shape)
    if (!lower.tail)
        Fx <- 1 - Fx
    if (log.p)
        Fx <- log(Fx)
    return(Fx)
}
So if you're using the related dllog density function
function (x, shape = 1, scale = 1, log = FALSE)
{
    fx <- dlogis(log(x), location = scale, scale = shape, log = FALSE)/x
    if (log)
        return(log(fx))
    else return(fx)
}
to find the maximum likelihood, it will indeed be $\hat\mu$ & $\hat\sigma$ that you're calling "scale" & "shape". More tenuously, if your colleague doesn't have a handbook of distributions sitting on their desk, they may well have looked up the log-logistic distribution in Wikipedia, where it's parametrized with shape $\alpha$ & scale $\beta$. Mystery solved!
55,178 | Derivation of cumulative Binomial Distribution expression | I think you'll find it difficult to prove, because it is not true unless $p = 0.5$.
Consider the simple case of $n = 1$, $r = 1$. Then:
$P(Y \geq r) = P(Y = 1) = p$
$P(Y \leq n - r) = P(Y = 0) = 1 - p$.
Starting with the LHS of your second equation, if you substitute $j = n - x$ and use ${n \choose j} = {n \choose {n-j}}$, you should be able to verify that
$P(Y \geq r) = \sum_{j=0}^{n-r} {n \choose j} (1-p)^j p^{n-j}$,
which is $P(X \leq n-r)$ if $X$ is a binomial random variable with success probability $(1-p)$.
This makes sense intuitively if you think of $Y$ as the number of successes and $X$ as the number of failures: if there are at least $r$ successes, there must be no more than $n-r$ failures.
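A quick numerical check of this identity in R (with arbitrary illustrative values):
n <- 10; r <- 4; p <- 0.3
1 - pbinom(r - 1, n, p)    # P(Y >= r) for Y ~ Binomial(n, p)
pbinom(n - r, n, 1 - p)    # P(X <= n - r) for X ~ Binomial(n, 1 - p); the two agree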
55,179 | Is it an assumption of the normal linear model that explanatory variables are uncorrelated with the errors? | I wouldn't quite call this an assumption of the linear model. Instead, I would say that this is an assumption you are making when you interpret the results of a linear model in a particular way. In other words, when the stated condition holds, there may be multiple possible interpretations of which some are legitimate and some are not.
The assumption, "no correlation between between the explanatory variables and the errors" refers to the lack (or existence) of endogeneity. Standard methods for estimating a linear model from data (e.g., ordinary least squares and maximum likelihood estimation) force the errors to have mean $0$. This affects the coefficient estimates that result.
Consider a case where the level of a response variable, $Y$, is a function of three variables, $X_1,\ X_2,\ \& \ X_3$. Further, imagine that $X_2$ and $X_3$ are correlated, but you only include $X_1$ and $X_2$ in your model. (Perhaps you've never even heard of $X_3$ and no one has ever thought to use it to understand why certain values of $Y$ seem to occur in the world.) When a variable is not included in a model, its effects are collapsed into the error term. Thus, you now have a variable, $X_2$, that is correlated with the error term, in violation of the 'assumption' stated above.
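Here is a small simulation sketch of that situation (all names and coefficients are made up for illustration):
set.seed(7)
n  <- 10000
x1 <- rnorm(n)
x3 <- rnorm(n)
x2 <- 0.7 * x3 + rnorm(n)                 # X2 and X3 are correlated
y  <- 1 + 2*x1 + 3*x2 + 4*x3 + rnorm(n)
coef(lm(y ~ x1 + x2 + x3))["x2"]          # close to the direct effect, 3
coef(lm(y ~ x1 + x2))["x2"]               # larger: it absorbs part of X3's effect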
So what is the result of this? Part of the effect of $X_3$ gets mixed in with the effect of $X_2$ in the model's estimate of the coefficient for $X_2$. That is, the violation of this assumption leads to coefficient estimates that are biased when considered to be estimates of the direct effects of the coefficients on the response. However, these are unbiased estimates of the marginal association between your variables and the response. To understand this more fully, it may help you to read my answer here: Estimating $b_1x_1+b_2x_2$ instead of $b_1x_1+b_2x_2+b_3x_3$.
55,180 | Is it an assumption of the normal linear model that explanatory variables are uncorrelated with the errors? | I find the phrase "no correlation" potentially misleading, because I have noticed that sometimes the word correlation is used narrowly, to refer to the covariance between two random variables and whether it is zero or not, and sometimes it is used more broadly, to indicate whether there exists (or not) some unspecified stochastic dependence between the two.
For the normal linear regression model, different conditions guarantee different properties of the estimator, deemed desirable ones. The model is
$$y_i = \mathbf x'\beta + u_i, \;\; i=1,...,n$$
and also in matrix notation (we will need them both)
$$\mathbf y = \mathbf X\beta + \mathbf u$$
The Ordinary Least-Squares estimator (which is also maximum likelihood under normality) is, in matrix notation,
$$\hat \beta = \left(\mathbf X' \mathbf X\right)^{-1}\mathbf X' \mathbf y = \beta + \left(\mathbf X' \mathbf X\right)^{-1}\mathbf X' \mathbf u$$
To examine Unbiasedness we have
$$E(\hat \beta) = \beta + E\left[\left(\mathbf X' \mathbf X\right)^{-1}\mathbf X' \mathbf u\right]$$
and using the law of Iterated Expectations
$$E(\hat \beta) -\beta= E\left(E\left[\left(\mathbf X' \mathbf X\right)^{-1}\mathbf X' \mathbf u\mid\mathbf X\right] \right)$$
$$= E\left(\left(\mathbf X' \mathbf X\right)^{-1}\mathbf X'E\left[ \mathbf u\mid\mathbf X\right] \right)$$
So in order for the estimator to be unbiased we require $E\left[ \mathbf u\mid\mathbf X\right]=0$. Note that the condition requires that the conditional expected value of each $u_i$ conditional on all $\mathbf X$ (i.e. on all the random variables forming the sample, and not only those belonging to the $i$-th observation), is zero. This is not usually called "being uncorrelated", but rather "error is mean-independent of the regressors" or "regressors are strictly exogenous to the error term".
This specific property is needed for unbiasedness and consequently for the Gauss-Markov theorem to hold.
The property required for Consistency comes closer to being described verbally as "no correlation". For Consistency we require that
$${\rm plim} (\hat \beta-\beta) = {\rm plim}\left(\frac 1n\mathbf X' \mathbf X\right)^{-1}\cdot {\rm plim}\left(\frac 1n\mathbf X' \mathbf u\right) =0$$
Focusing on what interests us (and not on all regularity, etc conditions that must hold here) what we need is
$$\left(\frac 1n\mathbf X' \mathbf u\right) \xrightarrow{p} 0$$
If we write out explicitly the matrix product we will obtain a column vector, with typical element
$$\frac 1n\sum_{i=1}^nX_{ki}u_i$$
where $X_k$ is the $k$-th regressor. As $n$ tends to infinity, and under some conditions the Law of Large Numbers will hold and we will have that
$$\frac 1n\sum_{i=1}^nX_{ki}u_i \rightarrow \frac 1n\sum_{i=1}^nE\big(X_{ki}u_i\big)$$
So for Consistency we require the (weaker) condition that $E\big(X_{ki}u_i\big) = 0, \forall k$, and for all $i$ separately. So we require that each regressor is contemporaneously orthogonal to the error term. The condition is weaker than the one for unbiasedness, because it does not require that, say, $E(X_{kj}u_i) =0$. In other words consistency may survive with a non-i.i.d sample. Also, $E\big(X_{ki}u_i\big) = 0$ does not imply the Strict Exogeneity condition.
Since moreover the error term is assumed zero-mean, "orthogonality" becomes equivalent to "no-correlation" (zero covariance), and this is why "no-correlation" comes closer to describing this condition (and so guarantees consistency) rather than the Strict Exogeneity condition that is required for unbiasedness.
55,181 | Endogeneity test instrumental variables | What you are looking at is formally known as the control function approach. When you run your first stage
$$x_3 = b_0 + b_1x_1 +b_2x_2 + b_3z + u$$
you basically split the variation in $x_3$ into exogenous variation (that comes from the exogenous and instrumental variables) and the "bad" variation that is correlated with $e$, which is left in the first-stage error $u$.
You know that when you regress
$$y = \beta_0 + \beta_1x_1 + \beta_2 x_2 + \beta_3x_3 + e$$
some part of your endogenous variable is correlated with $e$, i.e. it is contained in the error term. This part is captured by $u$ in the first stage. So you can imagine that $e$ is a sort of composite error $e = \epsilon + u$ (formally this isn't the right way of making the point but it is intuitive). Therefore, if you regress
$$y = \beta_0 + \beta_1x_1 + \beta_2 x_2 + \beta_3x_3 + \rho u + e$$
there is no endogeneity problem anymore because the part of $x_3$ which is correlated with $e$ is not in this error term anymore because it is included in the regression as $u$.
If you run 2SLS instead, you will notice that the $\beta_3$ will have the exact same value as the one from the control function approach (see this related question and its answer). In essence your authors are restating the Hausman test. You know that the control function approach or 2SLS will give you consistent estimates. Therefore, if such estimates are not significantly different from the OLS estimates the bias in OLS cannot be big (under the assumption that the instrument is valid and strong).
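A small simulation sketch of this equivalence (my own illustration; all numbers are made up, and the manually computed second-stage standard errors would not be valid, only the coefficient):
set.seed(42)
n  <- 5000
x1 <- rnorm(n); x2 <- rnorm(n); z <- rnorm(n)
v  <- rnorm(n)                                   # unobserved confounder
x3 <- 1 + 0.5*x1 - 0.3*x2 + 0.8*z + v            # endogenous regressor
y  <- 2 + x1 + x2 + 1.5*x3 + 2*v + rnorm(n)      # error contains v
first <- lm(x3 ~ x1 + x2 + z)
cf    <- lm(y ~ x1 + x2 + x3 + resid(first))     # control function
tsls  <- lm(y ~ x1 + x2 + fitted(first))         # manual 2SLS second stage
coef(cf)["x3"]; coef(tsls)["fitted(first)"]      # identical point estimates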
55,182 | Which statistical test to use to test differences in multiple means (multiple populations) | If you want a multi-group analog of a t-test it sounds like you just want ANOVA (analysis of variance) or something similar to it. That's exactly what it's for - comparing group means.
Specifically, you seem to be asking for one-way analysis of variance.
Any decent statistics package does ANOVA.
If you don't want to assume normality (just as you would for a t-test), there are a variety of options that still allow a test of means (including permutation tests and GLMs), but if your samples are large, moderate non-normality won't impact things much.
There's also the issue of potential heteroskedasticity; in the normal case many packages offer an approximation via an adjustment to error degrees of freedom (Welch-Satterthwaite) that often performs quite well. If heteroskedasticity is related to mean, you may be better off looking at an ANOVA-like model fitted as a GLM.
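In R, for example (dat, y and group are placeholder names, assuming a data frame with a numeric response and a grouping factor):
fit <- aov(y ~ group, data = dat)
summary(fit)                          # classical one-way ANOVA F-test
oneway.test(y ~ group, data = dat)    # Welch-type test; does not assume equal variances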
However, if the clusters are generated by performing cluster analysis on the data, the theory for t-tests, ANOVA, GLMs, permutation tests, etc. no longer holds, and none of the p-values would be correct.
55,183 | How to detect random clicks on a page? | I presume your intent is to identify the agent as human or random by choosing the one with the smaller chi-square (assuming you can get an expected human pattern), essentially treating it as a classification problem. If you are treating it as one, you might want to consider the costs of the two types of misclassification error.
Your approach might be sufficient (if you can get data for the human case), but I have some comments:
1) the sample size of clicks is unlikely to be high enough to pick anything up with a chi-square, especially on a 20x20 grid; I would suggest you need something well over 500 clicks, probably more like a couple of thousand. That's a lot of clicks. (A small numerical sketch of this approach follows this list.)
2) if you instead want to treat it as a hypothesis testing problem then you'd need to either
a) only identify the agent as human if the behavior is inconsistent with random clicking. The problem with this is that it by default says you're a bot ... which if there's only a few clicks may classify a lot of humans as bots.
b) only act as if they aren't human when the clicking behavior is inconsistent with human behavior, even when the match with random clicking is better; then you'd just check for consistency with a previously known human pattern --- the problem is that different humans may exhibit different patterns
Again, if this is done with a chi-square, sample size may be a problem.
3) there's information in a sequence of clicks that may help to pin down non-randomness much better. That is, I suggest you consider instead the serial-dependence in human clicks compared to random clicking.
4) Perhaps even easier, you might consider the distribution of the average distance between consecutive clicks compared to random clicking, or the distribution of time between clicks (e.g. a bot may click fast, or a bot may well have time between clicks unrelated to distance, whereas a human has to move a mouse cursor from place to place and so may show a distinctive relationship between time and distance). This is likely to reveal an issue at relatively small sample sizes.
[It may also be that humans and robots have very different numbers of clicks. That might be useful]
5) If you do get data on humans, you might consider trying to identify some tell-tale characteristics, then use those to construct a good test statistic to apply to the random-case (i.e. something that rarely happens in random data but often happens in the human data).
6) if you do want the classification route, and you have a substantial amount of human data, it may help to do some sample splitting to use some of the data to try to identify some suitable characteristics that would better distinguish the two than the chi-square approach.
7) You may want to consider a Bayesian approach.
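Here is the sketch referred to in point 1: a chi-square goodness-of-fit test of click counts on a 20x20 grid against a uniform (random-clicking) expectation. The counts are simulated, and the mildly non-uniform "human" pattern below is invented purely for illustration.
set.seed(7)
n_clicks <- 2000
cells <- sample(1:400, n_clicks, replace = TRUE,
                prob = rep(c(2, 1), each = 200) / 600)   # mildly non-uniform "human" pattern
counts <- tabulate(cells, nbins = 400)
chisq.test(counts, p = rep(1 / 400, 400))                # test against the uniform random-clicking model
With 2000 clicks the expected count per cell is only 5, about the minimum usually recommended for the chi-square approximation; with a few hundred clicks the test has very little power, which is the point made in item 1.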
55,184 | Transformation of variables (Metropolis Hastings) | You do not need the $\alpha$ since it is a parameter. The change of variables formula applies to the variable with respect to which you are "integrating". It is $x$ in your case. So MH is right to demand that you remove the excess factor.
So what you really have is:
$$
p(X|\alpha) = \frac{\exp(-\exp(\alpha))\exp(\alpha x)}{x!}
$$
Had you applied some transformation to your $x$ variable, then the change of variables formula would need to be used.
EDIT To understand what's going on, think of a normal RV $X \sim \mathcal{N}(\mu ,\sigma^2)$. So $p(X|\mu,\sigma^2)$ is the density. If you transform $\mu$ with any transformation $f$, the new variable is $Y \sim \mathcal{N}(f(\mu) ,\sigma^2)$ and no Jacobian is necessary. I hope you agree (if not, I'll have to write more in tex...).
If you want $\mathcal{P}(X\in A|\alpha)$ you'd integrate $x$ and keep $\alpha$ fixed - that's what I mean when I say "integrate". Probability is all about integration, after all.
So in the end you have $p(x|\alpha)$ with no extra Jacobian term. Then proceed as usual with Bayes' rule etc. and you'll get the "right" density.
55,185 | Transformation of variables (Metropolis Hastings) | Check your code, particularly the factors of N (number of data points) appearing in the likelihood. I find consistent results with the Jacobian factor (so the extra alpha) included in the log-posterior, and inconsistent inferences when I do not include the Jacobian (the opposite to what you say you are finding). The Jacobian comes in due to the prior, NOT the likelihood; in your analysis you implicitly assume a flat prior on lambda, so you need to translate this into a prior on alpha that contains the same assumptions (it will no longer be flat, of course) - the Jacobian factor does this translation for you.
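To make this concrete, here is a minimal random-walk Metropolis sketch in R for a Poisson rate reparameterised as alpha = log(lambda), assuming (as described above) a flat prior on lambda; the data, proposal step size and iteration counts are arbitrary choices for illustration.
set.seed(3)
x <- rpois(50, lambda = 4)          # fake data
n <- length(x); sx <- sum(x)
# log-likelihood in alpha plus the log-Jacobian: a flat prior on lambda gives p(alpha) proportional to exp(alpha)
log_post <- function(a) sx * a - n * exp(a) + a
alpha <- log(mean(x))
draws <- numeric(5000)
for (i in seq_along(draws)) {
  prop <- alpha + rnorm(1, sd = 0.1)
  if (log(runif(1)) < log_post(prop) - log_post(alpha)) alpha <- prop
  draws[i] <- alpha
}
mean(exp(draws[-(1:1000)]))         # posterior mean of lambda, should be close to mean(x)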
55,186 | Example of dependence with zero covariance | Among spherically symmetric distributions, i.e. distributions of the form $f(\boldsymbol x) = \varphi(\|\boldsymbol x\|)$ where $f$ is the density with respect to Lebesgue measure, it can be shown that the coordinates always have zero correlation, essentially due to the fact that the distribution is invariant under orthogonal transformations. This family includes, for example, the multivariate normal distribution, but also things like the multivariate t distribution.
It can be shown that, in fact, the multivariate normal is the only spherically symmetric distribution with independent components. Now, spherical symmetry is in fact a very reasonable property that a statistician might expect a distribution to have in some settings. Hence, inferring independence of components from no correlation makes very strong assumptions about the tails of the distribution in this case!
See for example this paper. See this report for more on the properties of spherically symmetric distributions.
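A quick simulation in base R illustrates the point with a bivariate t distribution (the 5 degrees of freedom are an arbitrary choice): the coordinates are uncorrelated but clearly not independent.
set.seed(2)
n <- 1e5
z <- matrix(rnorm(2 * n), ncol = 2)
w <- rchisq(n, df = 5)
x <- z / sqrt(w / 5)              # each row is a draw from a spherically symmetric bivariate t
cor(x[, 1], x[, 2])               # approximately 0
cor(abs(x[, 1]), abs(x[, 2]))     # clearly positive, so the components are dependent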
55,187 | Example of dependence with zero covariance | The last row of the first image of https://en.wikipedia.org/wiki/Correlation_and_dependence provides examples where the Pearson correlation is 0 but there is strong non-linear dependence between X and Y.
The relation $\rho=0$ is quite strong in general but easily obtainable with discrete and/or symmetrical distributions. That's why you usually end up with discrete or symmetrical distributions as examples. The article provides an example of an asymmetrical distribution (but a discrete one). A sum of three Gaussians centered on the three points should yield the same result.
In real life the symmetry could arise from the set-up of the problem, as evoked above. More rarely, it could arise from an asymmetrical setup that just happens to give a 0 correlation.
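A two-line check in R of the kind of symmetric, non-linear example described above (X standard normal, Y a deterministic function of X):
set.seed(4)
x <- rnorm(1e5)
y <- x^2        # y is a deterministic function of x, so they are completely dependent
cor(x, y)       # yet the Pearson correlation is essentially 0, by symmetry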
55,188 | What is the deal with $p$-value when generating Pearson's $r$ correlation coefficient? | [Fixed/improved, based on the feedback from @Momo and @whuber]
I believe that in the context of regression the relationship between the $p$-value and Pearson's correlation coefficient is the following: the $p$-value can be interpreted as the probability of obtaining, under repeated random sampling, a correlation coefficient at least as extreme as the one determined from the observed data, provided that the null hypothesis is true. In other words, I think that the $p$-value in this context belongs to a hypothesis test whose hypotheses are stated in terms of the correlation, as follows:
\begin{multline}
\shoveleft{H_0: \text{correlation (of the underlying data-generation process) is zero;}}\\
\shoveleft{H_A: \text{the correlation is not zero.}}
\end{multline}
Then, the situation IMHO boils down to the following traditional hypothesis testing interpretation. If the $p$-value is small (less than the arbitrarily selected significance level $\alpha$, usually equal to 0.05), then you can reject the null hypothesis ("the determined correlation is statistically significant"), and if the $p$-value is greater than $\alpha$, then you fail to reject the null ("the correlation is not statistically significant").
The following formulae describe the relationship between the $p$-value and the sample size $N$ in mathematical form.
The Fisher-transformed test statistic of $r$ (a.k.a. $z$) is defined as $T(r) = \operatorname{artanh}(r)$.
For a bivariate normal distribution, $z$'s standard error depends on sample size $N$, as follows:
\begin{align}
SE(T(r)) \approx \frac{1}{\sqrt{N - 3}}
\end{align}
Moreover, since the test statistic is approximately normal,
\begin{align}
\frac{T(r)}{SE(T(r))} \approx N(0,1) \text{ and } \lim_{N\to\infty} SE(T(r)) = 0
\end{align}
so the standard error in the denominator shrinks as $N$ grows.
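A short R sketch of this calculation on simulated data (the sample size and effect size are arbitrary); note that cor.test() uses a t statistic rather than the Fisher transform by default, so the two p-values agree only approximately.
set.seed(5)
n <- 40
x <- rnorm(n); y <- 0.4 * x + rnorm(n)
r <- cor(x, y)
z <- atanh(r)                           # Fisher transform T(r)
se <- 1 / sqrt(n - 3)
p_fisher <- 2 * pnorm(-abs(z) / se)     # two-sided p-value under H0: rho = 0
p_fisher
cor.test(x, y)$p.value                  # R's default t-based test, close but not identical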
P.S. You may also find the following two answers relevant and useful: this and this.
55,189 | What is a Dirichlet prior | Let me try to respond to your very last question about understanding the Dirichlet distribution and its relation to the Multinomial, and, what I suspect you really would like to know, how this could be explained in an applied context such as your genomics problem.
Now I am going to explain all this using my vague recall of haplogroup SNPs, which might be somewhat similar to your data:
So let's say I have this dataset inspired by this random NIH paper from Homo Sapiens Sapiens and I need to identify all the SNPs associated with this novel sub-sub-subclade of the haplogroup N that I believe exists but I don't know how many of the SNPs (or I guess, another population genetics term would be finding the linkage disequilibrium) there are in each of those 4 sub-sub-clades and what those SNPs are in this Y-DNA sample.
15121 caacagcctt cataggctat gtcctcccgt gaggccaaat atcattctga ggggccacag
15181 taattacaaa cttactatcc gccatcccat acattgggac agacctagtt caatgaatct
15241 gaggaggcta ctcagtagac agtcccaccc tcacacgatt ctttaccttt cacttcatct
15301 tgcccttcat tattgcagcc ctagcagcac tccacctcct attcttgcac gaaacgggat
....etc...etc...
Given that we do not know the number of SNPs linked together that would fall into this sub-sub-subclade, or the probability of occurrence of a SNP in this novel obscure genomic region I'm about to discover, I will treat those unknown parameters as random under the Bayesian paradigm:
I will actually estimate the number of linked SNPs associated with that sub-sub-subclade, since I know that if more than 1% of a population does not carry the same nucleotide at a specific position in the DNA sequence, then this variation can be classified as a SNP.
By "linked SNPs" I mean some unknown number of groups of SNPs; for example, one group we might consider to be associated with this sub-sub-subclade genomic region of haplogroup N would be the SNPs for dopaminergic receptors x, y, and z, another the SNPs for the serotonin 5-HT2A receptor (rs6311 and rs6313), and so on.
My other parameters will be the expected proportions of the $k$ SNP groups (where $k\ge2$) among the $N$ sampled nucleotides, i.e. a vector of random category proportions
$$(x_1,\dots,x_k), \quad x_i \in (0,1), \quad \sum_{i=1}^{k} x_i = 1,$$
parametrized by a pseudocount parameter $\boldsymbol{\alpha} = (\alpha_1,\dots,\alpha_k)$.
Now, a minute of some math:
The commonly known probability distributions are related, as is clearly illustrated on the map of the Relationships Among Common Distributions, adopted from Leemis (1986), from what I call "the Bible of Statistics" every statistician sees vivid dreams about before math stats exams in grad school, a.k.a. the 2nd edition of "Statistical Inference" by G. Casella & R. Berger, 2001, Cengage Learning. Although the Dirichlet and Multinomial are not depicted even in the extended version of the map from The American Statistician, here is my audacious take on where they would be placed in the classical map #1:
Also, somewhere along those arrows would be another relevant distribution, the Categorical Distribution.
In a nutshell, you can derive or easily transform one of them into the other if they are close on the map and come from the same family of distributions. Some of these are generalizations of other distributions, including the Dirichlet, which is a generalization of the Beta distribution, i.e. the Dirichlet extends the Beta to multiple dimensions.
For this reason and so many others, the Dirichlet distribution is the conjugate prior for the Multinomial distribution.
Now back to our SNPs problem:
The set-up we are going to use for our problem is based on the Pólya urn model, where we sample with replacement $N$ strings of the 4-letter nucleotide bases that show up in $k$ groups of my-sub-sub-subclade-linked SNPs, and where an observed nucleotide base falls into SNP group $i$ with probability $p_i$, for probabilities $p_1,\dots,p_k$.
Since we don't know how many and which SNPs fall into the "linked SNPs for this sub-sub-subclade", we assume those unknown SNPs that may or may not exist for this sub-sub-subclade are represented by a parameter $\alpha$, with components $\alpha_1,\dots,\alpha_k$, and those might actually be the pseudocounts you are referencing. Here is a nice response and a reference to some problems with this approach to the Dirichlet-Multinomial. I would highly recommend checking Bioconductor or that package documentation, because those pseudocounts can easily be just a simple method to convert different matrices while integrating very different distributions.
Now we estimate the parameters in the Dirichlet-multinomial model and update them to $\alpha_1+n_1,\dots,\alpha_k+n_k$, eventually obtaining the posterior distributions and estimating the number of types of linked SNPs that fall into my novel sub-sub-clade, with their probabilities, to conclude how likely we are to see those groups in the sub-sub-clade.
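The conjugate update itself is a one-liner; here is a minimal base-R sketch with made-up pseudocounts and counts (the numbers have nothing to do with real SNP data).
alpha <- c(1, 1, 1, 1)         # Dirichlet pseudocounts for 4 hypothetical SNP groups
counts <- c(12, 3, 7, 0)       # observed multinomial counts for each group (made up)
post <- alpha + counts         # posterior is Dirichlet(alpha + counts)
rdirichlet1 <- function(a) { g <- rgamma(length(a), shape = a); g / sum(g) }   # one Dirichlet draw via normalised Gammas
set.seed(6)
rowMeans(replicate(5000, rdirichlet1(post)))   # Monte Carlo posterior means
post / sum(post)                               # analytic posterior means, for comparison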
P.S. I suspect the reason the Dirichlet and Multinomial are so applicable to genomics and are used in some Bioinformatics packages in R is probably due to a lot of discretization in these types of datasets, and also to traditional models such as the ones based on the Hardy-Weinberg Principle, as they are mathematically a perfect candidate for a Binomial, Beta, Multinomial etc. type of setup, because you are essentially estimating the frequencies of some counts in some discrete categories (although the Dirichlet does not require the parameters $\alpha$ to be integers).
P.P.S. In the very reduced set-up above, I have omitted other important things to consider, which can be found in this tutorial, such as the Dirichlet Process and, specifically, the partition step of some probability space $\Theta$ to find
$$\theta_k \mid H \sim H, \quad H \text{ being the prior distribution over the component parameters.}$$
55,190 | What is a Dirichlet prior | There is a good explanation in this presentation.
https://www.slideshare.net/g33ktalk/machine-learning-meetup-12182013
You can watch the whole presentation if you want (it is a good explanation of the Dirichlet distribution) but I think the slides will get the concept across pretty quickly.
Slides 32-35 explain the mathematical process of the Dirichlet prior.
Slides 50-60 show what is going on when the distribution updates and show the prior (it is easier to see it visually than to explain it). This gets the general idea across.
Slides 94-102 show what happens to the whole system as updating occurs. This is the same concept as slides 50-60 but tracks what happens for each iteration.
55,191 | Comparing 2 time series in R | There are a number of possible models at a variety of levels of complexity. These include
(some are very closely related):
Time series regression with lagged variables
Lagged regression models. See also distributed lag models
Regression with autocorrelated errors
Transfer function modelling /lagged regression with autocorrelated errors
ARMAX models
Vector autoregressive models
State-space/dynamic linear models can incorporate both autocorrelated and regression components
Because your input series is 0/1 you may want to look at lagged regression with autocorrelated errors, but watch for seasonal and calendar effects (like holidays).
So simple-ish models might perhaps look something like
$\qquad\text{ Sales}_t = \phi_0+\phi_1\,\text{Sales}_{t-1} +\beta_3\,\text{job}_{t-3}+\beta_4\,\text{job}_{t-4}+\epsilon_t$
or perhaps something like
$\qquad\text{ Sales}_t = \alpha +\beta_3\,\text{job}_{t-3}+\beta_{12}\,\text{job}_{t-12}+\text{seasonal}_{t}+\eta_t$
where $\eta_t$ is in turn some ARMA model for the noise term (though you may well want more lags in there than just one) -- or a variety of other possibilities. [The seasonal term above doesn't have a parameter because it's likely to have several components, and so several parameters; consider it a placeholder for a model for that component of the data. Neither of those models are likely to be sufficient, they're just to get a general sense of what a simple model might look like]
You may also want to consider whether the binary job-status variable needs a model itself (if you want to forecast further than the smallest lag involving it, it may well be essential to at least consider whether there are any such effects there -- see transfer function models, but you have to consider the special nature of the binary variable)
Once you have an appropriate model for sales that captures the main features well, you can look at testing. You should have enough data (looks like several years) to hold some data out for out-of-sample model testing and validation. I'd start by considering the features of sales alone - is it stationary? Autocorrelated? Does it have any seasonal/cyclical or calendar components? Are there other major drivers to consider?
Since you mention R, note that the function tslm in the package forecast can be handy for including seasonal or trend components in regression models.
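As a minimal illustration of a lagged regression with autocorrelated (AR) errors in base R (the data below are simulated, and the 3-period lag is arbitrary, chosen only to mirror the example formulas above):
set.seed(1)
n <- 120
job <- rbinom(n, 1, 0.3)                                     # hypothetical 0/1 job-status series
sales <- 10 + 5 * c(rep(0, 3), job[1:(n - 3)]) + arima.sim(list(ar = 0.5), n)
sales3 <- sales[-(1:3)]                                      # drop the first 3 points so the lagged regressor lines up
job_lag3 <- job[1:(n - 3)]                                   # job status lagged 3 periods
fit <- arima(sales3, order = c(1, 0, 0), xreg = job_lag3)    # AR(1) errors plus the lagged 0/1 regressor
fit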
A book that discusses nearly all of those topics is Shumway and Stoffer, Time Series Analysis and its Applications (the 3rd ed is at Stoffer's page here). Another highly recommended text is Forecasting: Principles and Practice, Hyndman and Athanasopoulos, here, which covers some of the things I mentioned (but not as many).
55,192 | Comparing 2 time series in R | In addition to the very nice answer by @Glen_b, I would like to suggest some complementary information and resources on time series analysis (mostly in R!), which might be useful to you. Please find them in my related answers, as follows: on general time series analysis and on time series classification and clustering. Hope this will be helpful.
55,193 | How to determine whether a variable is significant using Pr (>Chi) and Df? | This actually appears to be an analysis of deviance table, but the principle is the same, and people still call it an 'ANalysis Of VAriance' table. The table presents information about a series of sequentially nested model fits. In the first row is the null model (without any of the variables included). Each subsequent row adds another variable to the model and information about the changes is given. It is more typical to move in the opposite order (i.e., from the 'full' model on down, dropping one variable at a time), but this is inconsequential.
The columns might make more sense if they were presented in a different order. The fourth column contains a measure of goodness of fit ($-2\times\log\ {\rm likelihood}$); bear in mind that lower values imply a better fit and that the fit has to improve upon adding a variable whether that variable is relevant or not. The second column (Deviance) displays the difference between that model's -2*LL and the previous model's. Column three (Resid.Df) says how many residual degrees of freedom each model has. In the first column, you see listed the degrees of freedom associated with each variable; it is the difference between that model's residual degrees of freedom and the previous model's. The thing to realize here is that the difference between two nested models' -2*LL (i.e., the deviance) is distributed as a chi-squared variable with degrees of freedom equal to the difference between the two models' residual degrees of freedom. The probability of seeing a difference in -2*LL that large or larger, given the addition of a variable with that many degrees of freedom, is displayed in the last column (Pr(>Chi)). Thus, having stipulated an $\alpha$ / type I error rate you feel you can live with, we can see if the improvement in model fit upon adding a variable is greater than we would expect by chance alone.
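For readers who want to reproduce such a table, here is a minimal example with a built-in R dataset (the model itself is arbitrary and only serves to show the sequential analysis of deviance):
fit <- glm(case ~ age + parity + spontaneous, family = binomial, data = infert)
anova(fit, test = "Chisq")   # each row: drop in deviance, and its chi-squared p-value, when that term is added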
55,194 | How to determine whether a variable is significant using Pr (>Chi) and Df? | You would most easily judge significance for one of those variables by checking whether the p-value was $\leq \alpha$, your previously chosen significance level.
This is true of essentially any hypothesis test, not just chi-square tests.
You haven't stated your significance level, so I can't talk about which of those coefficients are significant.
55,195 | A guide to regularization strategies in regression | From The Elements of Statistical Learning, as suggested by goangit, section 3.6 is a one-page discussion comparing selection and shrinkage methods which points to a paper by Frank and Friedman (1993), A Statistical View of Some Chemometrics Regression Tools. Section 5 of their paper (page 125) performed a Monte Carlo study comparing Ordinary Least Squares (OLS), Ridge Regression (RR), Principal Component Regression (PCR), Partial Least Squares (PLS) and Variable Subset Selection (VSS). They conclude that in terms of prediction accuracy, although all methods do outperform OLS, RR is superior under a variety of conditions (all those tested). VSS offers the lowest increase in accuracy, with PCR and PLS not far behind RR. Section 8 is a brief discussion of the descriptive properties of the methods. Although no formal conclusions are reachable since this is a subjective matter, they do say that VSS and PLS may have advantages if this is a goal of the study. Unfortunately, they do not include LASSO in their study.
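If you want to try the shrinkage methods discussed above in R, here is a minimal sketch using the glmnet package (assumed installed; the built-in mtcars data serve only as a placeholder):
library(glmnet)
x <- as.matrix(mtcars[, -1])           # predictors
y <- mtcars$mpg                        # response
ridge <- cv.glmnet(x, y, alpha = 0)    # ridge regression (RR)
lasso <- cv.glmnet(x, y, alpha = 1)    # LASSO
coef(ridge, s = "lambda.min")
coef(lasso, s = "lambda.min")          # some coefficients are shrunk exactly to zero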
55,196 | A guide to regularization strategies in regression | You could try An Introduction to Statistical Learning by Gareth James et al. It's freely available, contains an introductory-level review and discussion of all the topics you mention (in particular Chapter 6 deals with regularization), is well supported by the ISLR package in R, and provides a gateway to its more advanced counterpart The Elements of Statistical Learning by Hastie et al.
55,197 | Initialize AR(p) process by using Arima.sim | You have the following system:
\begin{eqnarray}
\left\{
\begin{array}{lcl}
y_5 &=& 0.67 y_4 - 0.51 y_1 + \epsilon_5 \\
y_6 &=& 0.67 y_5 - 0.51 y_2 + \epsilon_6 \\
y_7 &=& 0.67 y_6 - 0.51 y_3 + \epsilon_7 \\
y_8 &=& 0.67 y_7 - 0.51 y_4 + \epsilon_8
\end{array}
\right.
\end{eqnarray}
with $\epsilon_t \sim NID(0, 1)$. In order to achieve $y_5=1,\, y_6=2,\, y_7=3\,$ and $y_8=4$, you can set the first eight innovations $\epsilon_t$ to zero and solve the system above for $y_1, y_2, y_3$ and $y_4$ given the desired values for $y_5, y_6, y_7$ and $y_8$. That is, the system to solve becomes, in matrix form:
\begin{eqnarray}
\left(
\begin{array}{cccc}
-0.51 & 0 & 0 & 0.67 \\
0 & -0.51 & 0 & 0 \\
0 & 0 & -0.51 & 0 \\
0 & 0 & 0 & -0.51
\end{array}
\right)
\left(
\begin{array}{c}
y_1 \\ y_2 \\ y_3 \\ y_4
\end{array}
\right) =
\left(
\begin{array}{c}
1 \\ 1.33 \\ 1.66 \\ 1.99
\end{array}
\right) \,.
\end{eqnarray}
This gives the solution: $y_1=-7.086890,\, y_2=-2.607843,\, y_3=-3.254902\;\,$ and $\,y_4=-3.901961$.
The vector $(1, 1.33, 1.66, 1.99)$ is obtained as follows:
the first element is $y_5$, for which we want the value $1$ ($\epsilon_5$ is set to zero); the second element is $-0.51y_2 = y_6 - 0.67y_5 = 2 - 0.67\times1 = 1.33$ (the desired value for $y_6$ is $2$ and $\epsilon_6$ is set to zero); the third element is $-0.51y_3 = y_7 - 0.67y_6 = 3-0.67\times 2 = 1.66$; and from the last equation $-0.51y_4 = y_8 - 0.67y_7 = 4 - 0.67\times 3 = 1.99$.
Now, upon this result, you should define the arguments n.start, start.innov and innov that are passed to arima.sim. A similar example is given here. For this case I couldn't figure out the right definition of these arguments; there may be something else done by arima.sim that I am overlooking. Nevertheless, you can still generate your data as follows:
set.seed(123)
x <- rep(0, 20)
x[1:4] <- c(-7.086890, -2.607843, -3.254902, -3.901961)
eps <- c(rep(0, 8), rnorm(12))
for (i in seq(5,20))
x[i] <- 0.67 * x[i-1] - 0.51 * x[i-4] + eps[i]
x[-seq(4)]
# [1] 1.000 2.000 3.000 4.000 1.610 -0.172 -0.086 -2.027 -2.050 0.429
# [11] 0.793 0.300 0.560 -0.290 0.626 0.626
After the auxiliary observations $y_1$ to $y_4$ that were found above, the series continues with the desired values $1, 2, 3, 4$.
55,198 | interrupted time series in R | One usually distinguishes two patterns of change (continuous with change in slope only, abrupt with shift plus change in slope) and whether or not the breakpoint is known.
If the breakpoint (= timing where the intervention became effective) were known, then the formula for the regression without intervention would be something like: y ~ time + .... The model with a continuous change in slope only would be: y ~ time + pmax(time - breakpoint, 0) + .... The shift that potentially changes both slope and intercept (i.e., might include an abrupt shift in addition to the slope change) is y ~ factor(time <= breakpoint) * time + ....
If you believe that the intervention became effective immediately, then you can simply fit the model without change and one of the models with change and carry out an anova() comparison.
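For example, with a known breakpoint this comparison could be sketched as follows (d, y, time and the breakpoint value 10 are placeholders for your own data, not part of the original answer):
bp <- 10
m0 <- lm(y ~ time, data = d)                       # no intervention
m1 <- lm(y ~ time + pmax(time - bp, 0), data = d)  # continuous change in slope only
m2 <- lm(y ~ factor(time <= bp) * time, data = d)  # abrupt shift plus change in slope
anova(m0, m1)
anova(m0, m2)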
However, usually it is not so clear when exactly such interventions become effective and then you need to estimate the breakpoint along with the intercept/slope. The segmented package is designed for models where only the slope changes and the strucchange package for models where all parameters change. Both packages are available from CRAN and have been described in several papers, see citation("segmented") and citation("strucchange"), respectively.
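A rough sketch of both routes, again with placeholder data d (psi is only an initial guess for the breakpoint; see the package documentation for the details):
library(segmented)
m0  <- lm(y ~ time, data = d)
seg <- segmented(m0, seg.Z = ~ time, psi = 10)  # slope change only, breakpoint estimated
summary(seg)

library(strucchange)
bps <- breakpoints(y ~ time, data = d)          # all coefficients may change at the break(s)
summary(bps)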
P.S.: Re-reading your question, I realized that your slope might not just be a time trend but also a slope with respect to other regressors. In the explanations above you can also replace time with (an)other regressor(s) for your application.
55,199 | Expected proportion of the sample when bootstrapping | This is related to collision-counting in the birthday problem.
Imagine you walk into a room of $k$ people. The probability at least one shares a birthday with you is $q(k;n) = 1 - \left( \frac{n-1}{n} \right)^k$, where $n$ is the number of different birthday slots (days in the year).
The expected number you add to the total number of different birthdays in the room when you walk in is therefore $1-q(k;n)=\left( \frac{n-1}{n} \right)^k$ (the expectation of a 0/1 indicator is just this probability).
So, summing these expected contributions (linearity of expectation), the expected number of different birthdays after $m$ people have entered is
$\sum_{i=1}^m \left( \frac{n-1}{n} \right)^{i-1} = \sum_{i=0}^{m-1} \left( \frac{n-1}{n} \right)^i$
This is the sum of the first $m$ terms of a geometric series, which is straightforward:
$\hspace{2.3cm} = \frac{1- \left( \frac{n-1}{n} \right)^m}{1-\frac{n-1}{n}}=n\left[1- \left( \frac{n-1}{n} \right)^m\right]$
Check: at n=100, m=50 this gives $\approx$ 39.4994, while simulation gives:
> mean(replicate(10000,length(unique(sample(1:100,50,replace=TRUE)))))
[1] 39.4938
so that looks okay.
The expected fraction is then $\frac{1}{n}$th of that, $1- \left( \frac{n-1}{n} \right)^m$.
Note that if $n$ is large, $(1-\frac{1}{n})^n\approx e^{-1}$, so if $m$ is some value that's at least a large fraction of $n$, $(1-\frac{1}{n})^m\approx e^{-\frac{m}{n}}$, so we get that the expected number is approximately $n (1- e^{-\frac{m}{n}})$.
Let's try that approximation on the above example where $m=50$ and $n=100$: $100 (1-e^{-\frac{50}{100}})=100(1-e^{-\frac{1}{2}})\approx 39.347$, which is fairly close to the exact answer - for a given $m/n$ it improves with larger $n$.
So a quick and reasonably accurate approximation to the fraction is $(1- e^{-\frac{m}{n}})$.
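For instance (a small sketch, not from the original answer), the exact and approximate fractions can be compared directly in R:
exact_frac  <- function(n, m) 1 - ((n - 1) / n)^m
approx_frac <- function(n, m) 1 - exp(-m / n)
exact_frac(100, 50);  approx_frac(100, 50)   # ~0.3950 and ~0.3935, matching the figures above
approx_frac(100, 100)                        # ~0.632 when m = n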
Note that when $m=n$ this gives the usual "0.632" rule.
55,200 | Expected proportion of the sample when bootstrapping | I would like to further corroborate the answer from @glen-b with some numerical results.
I wrote a function $p(n,m,k)$ that gives the exact probability (not based on simulations) of getting exactly $k$ unique elements after choosing $m$ samples with replacement from a set of $n$ elements. Note that $p$ is a probability mass function in $k$.
For example, when choosing 600 from 600, the expected value of $k$ is close to $600(1-e^{-1}) \approx 379$. Its pmf is shown in blue.
Then, I used the function $p$ to compute the expected value of $k$ and plotted my results (orange) against the formula @glen-b provided (green). As expected, the outputs were exactly the same.
And here's the code. The simulation function is included only for reference; it is never used to compute $p$ or the expected $k$. The computation of $p$ is done in Python 3 and is based on dynamic programming (DP):
import numpy as np
from functools import lru_cache
import sys; sys.setrecursionlimit(5000000)
def simulation(n, m):
    '''From n values, choose m with replacement and count the unique values'''
    x = np.arange(n)
    s = np.random.choice(x, m, replace=True)  # draw m samples with replacement
    k = len(np.unique(s))
    return k
@lru_cache(None)
def p(n, m, k):
    '''Prob. of simulation(n, m) == k. Requires 1 <= n and 0 <= k <= m'''
    if k == 0 or k > m:
        ans = int(m == k)
    else:
        # the m-th draw either repeats one of the k values already seen (prob k/n)
        # or adds a new value (prob (n-k+1)/n)
        ans = p(n, m-1, k) * k/n + p(n, m-1, k-1) * (n-k+1)/n
    return ans
def expected_k(n):
    '''Expected output of simulation(n, n) / n, computed exactly via the DP'''
    K = np.arange(n + 1)                    # possible numbers of unique values
    P = np.array([p(n, n, k) for k in K])   # their exact probabilities
    return np.multiply(K, P).sum() / n      # E[k] / n
def glen_b_answer(n):
    '''Expected output of simulation(n, n) / n, from the closed formula'''
    return 1 - (1 - 1/n)**n