54,501 | Cluster standard error _versus_ fixed effects
You adjust for clustering at the level at which your experimental treatment is assigned. If your treatment is randomized by day of application, cluster by day. If your treatment is randomized by region, cluster by region.
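As a minimal R sketch of this advice (an added illustration, not part of the original answer), assume the sandwich and lmtest packages, a reasonably recent version of sandwich, and hypothetical variables y, treatment, and day in a data frame dat:

library(sandwich)   # cluster-robust covariance estimators
library(lmtest)     # coeftest() accepts a user-supplied covariance matrix
## treatment randomized by day of application, so cluster the standard errors by day
fit <- lm(y ~ treatment, data = dat)
coeftest(fit, vcov = vcovCL(fit, cluster = ~ day))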

54,502 | Given a 95% confidence level, how do I demonstrate 95% of the intervals actually contain the population mean?
It helps to distil your problem down to something simple and clear. When using Excel, this means:
Strip out unnecessary and duplicate material.
Use meaningful names for ranges and variables rather than cell references wherever possible.
Make examples small.
Draw pictures of the data.
To illustrate, let me share a spreadsheet I created long ago for exactly this purpose: to show, via simulation, how confidence intervals work. To start, here is the worksheet where the user sets parameter values and gives them meaningful names:
The simulation takes place in 100 columns of another worksheet. Here is a small piece of it; the remaining columns look similar.
How is it done? Let's look at the formulas:
From top to bottom, the first few rows:
Count the number of simulated values in the column.
Compute their standard deviation and then their standard error.
Compute the t-value for the specified confidence alpha.
This stuff is of little interest, so it is shown in normal text. The interesting stuff is in red, but that should be self-explanatory from the formulas. (The strange formula for Out? will become apparent in the plot below.) The green values show how to generate normal variates with given mean Mu and standard deviation Sigma. This is done by inverting the cumulative distribution, as computed (for Normal distributions) by NORMSINV.
Finally, these 100 columns drive a graphic that shows all 100 confidence intervals relative to the specified mean Mu and also visually indicates (via the spikes at the bottom) which intervals fail to cover the mean. This is done with a little graphical trick: the value of Out? determines how high the spikes should be; a value of 3.5 extends them into the bottom of the plot, whereas a value of 0 keeps them outside the plot. (These values are plotted on an invisible left hand axis, not on the right hand axis.)
In this instance it is immediately apparent that two intervals failed to cover the mean. (The sixth and 33rd, it looks like.)
Because each interval has a $1-\alpha = 1-0.95 = 5\%$ chance of not covering the mean in this example (here $\alpha$ denotes the specified confidence level, as in the spreadsheet), the count of intervals that are "out" among the $100$ follows a Binomial$(100, 0.05)$ distribution. This distribution gives relatively high probability to counts between $1$ (with a $3.1\%$ chance of occurring) and $9$ (with a $3.5\%$ chance of occurring); the chance that the count will be outside this range is only $3.4\%$. By repeatedly pressing the "recalculation" key (F9 on Windows), you can monitor these counts. With a macro it's easy to accumulate these counts over many simulations, then draw a histogram and perhaps even conduct a Chi-square test to verify that they do indeed follow the expected Binomial distribution.
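For readers working in R rather than Excel, here is a minimal added simulation sketch of the same idea (not part of the original spreadsheet): it builds 100 t-based confidence intervals from Normal samples and counts how many miss the true mean, a count that should behave like a Binomial(100, 0.05) draw.

set.seed(1)
mu <- 10; sigma <- 2; n <- 20; conf <- 0.95; nsim <- 100
misses <- replicate(nsim, {
  x  <- rnorm(n, mu, sigma)                    # one simulated sample
  se <- sd(x) / sqrt(n)                        # standard error of the mean
  tcrit <- qt(1 - (1 - conf) / 2, df = n - 1)  # t critical value
  ci <- mean(x) + c(-1, 1) * tcrit * se        # the confidence interval
  mu < ci[1] || mu > ci[2]                     # TRUE if the interval misses mu
})
sum(misses)                    # how many of the 100 intervals are "out"
dbinom(0:9, nsim, 1 - conf)    # reference Binomial(100, 0.05) probabilities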

54,503 | Given a 95% confidence level, how do I demonstrate 95% of the intervals actually contain the population mean?
In any case, you have to simulate an infinite number of samples to get the result you want.
Or rely on probability theory.
The probability that a confidence interval covers the mean is 0.95. If you construct $n$ CIs, the number that cover the true mean will follow a binomial distribution $(n,p)$, with $p=0.95$.
So nothing is certain.
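A small added R illustration of this point (not part of the original answer) makes the binomial statement concrete:

n <- 100; p <- 0.95      # number of intervals and coverage probability
dbinom(95, n, p)         # P(exactly 95 of the 100 intervals cover the mean)
dbinom(100, n, p)        # P(all 100 intervals cover the mean)
pbinom(89, n, p)         # P(fewer than 90 intervals cover the mean)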

54,504 | Correctness of regression with ARIMA errors model and coefficient interpretation issues
In any regression model, including a regression with ARMA errors, you must specify one less dummy variable than the number of categories. Intuitively, this is because if you know the values of 11 monthly dummy variables, then you know the value of the 12th, so it provides no new information.
There are two problems here. First, seasonality is confounded with the weather, so you cannot separate out their effects. Second, it is not possible to allocate a percentage contribution from each predictor unless the predictors are all orthogonal.
The plotting method for forecast objects shows the historical data and the forecasts along with prediction intervals. Look at the help file to see how to modify the plot to your own purposes.
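A minimal R sketch of the dummy-variable point (an added illustration using the forecast package; the monthly series y and the weather regressor below are simulated stand-ins, not the asker's data):

library(forecast)
## simulated monthly series and a weather-like regressor
set.seed(1)
y       <- ts(rnorm(120) + rep(sin(2 * pi * (1:12) / 12), 10), frequency = 12)
weather <- rnorm(120)
## 11 monthly dummies: one fewer than the 12 categories
dummies <- seasonaldummy(y)
## regression with ARMA errors on the dummies plus weather
fit <- auto.arima(y, xreg = cbind(dummies, weather), seasonal = FALSE)
summary(fit)
## forecasting would require future values of the regressors via xreg,
## after which plot(forecast(fit, xreg = ...)) shows the data, forecasts and intervals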

54,505 | What does it mean to correlate residuals in SEM?
It means that the unexplained variances of two variables are correlated. One way of thinking of this is as a partial correlation.
Say we have two regression equations:
\begin{equation}
Y_{1i}=\beta_{11} X_i+\epsilon_{1i}
\end{equation}
\begin{equation}
Y_{2i}=\beta_{21} X_i+\epsilon_{2i}
\end{equation}
Both equations have an $\epsilon$ term. If you model that as two separate equations, that's fine. But what if you model them jointly in a single model - do you want to assume that the $\epsilon$ terms are uncorrelated? If you do, then don't correlate them - that is, don't estimate a correlation between the residuals. Usually you don't want to assume that, so you'd correlate the residuals.
An example: Say you want to look at the effect of age (in adults) on: speed at running 100m, speed at running 5 miles. I'd expect a negative relationship for both of these, but if you modeled them in one equation, you'd expect unexplained variance in 100m running speed to be correlated with 5 mile running speed, controlling for age - so the residuals are correlated.
You can also think of this in terms of latent variables - there are common causes of the residual for both 100m and 5 mile speeds, and hence you can hypothesize the existence of a latent (unmeasured) variable.
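A minimal sketch of how such correlated residuals can be specified in R's lavaan package (an added illustration; the variable names age, sprint100m, and run5mile and the data frame dat are hypothetical):

library(lavaan)
## both speeds regressed on age; "~~" requests a residual covariance
model <- '
  sprint100m ~ age
  run5mile   ~ age
  sprint100m ~~ run5mile   # correlated residuals
'
fit <- sem(model, data = dat)
summary(fit, standardized = TRUE)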

54,506 | How do you prepare longitudinal data for survival analysis?
Here is a quick example that shows how to arrange the data in a similar context.
Consider the following data.
> dataWide
id time status
1 1 0.88820072 1
2 2 0.05562832 0
3 3 5.24113929 1
4 4 2.91370906 1
For example, individual 1 had an event at $t = 0.888$, and individual 3 had an event at $t = 5.241$.
For illustration, I take 3 time intervals: $[0, 1), [1, 2), [2, \infty)$.
In the long format, the same data set becomes:
> dataLong
id period tstart tstop status
1 1 1 0 0.88820072 1
2 2 1 0 0.05562832 0
3 3 1 0 1.00000000 0
4 3 2 1 2.00000000 0
5 3 3 2 5.24113929 1
6 4 1 0 1.00000000 0
7 4 2 1 2.00000000 0
8 4 3 2 2.91370906 1
For individual 1, the first period starts at $t = 0$ and ends at $t = 0.888$ in which he had an event (status = 1). Individual 3 had an event in period 3. Therefore status = 0 for period 1 (from $0$ to $1$) and for period 2 (from $1$ to $2$), and status = 1 in period 3 (from $2$ to $5.241$).
Depending on the format, the Kaplan-Meier curve can be obtained as follows:
library(survival)
plot(survfit(Surv(time, status) ~ 1, data=dataWide),
conf.int=FALSE, mark.time=FALSE)
plot(survfit(Surv(tstart, tstop, status) ~ 1, data=dataLong),
     conf.int=FALSE, mark.time=FALSE)
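As an added sketch (not part of the original answer), the wide-to-long conversion above can also be automated with survival::survSplit in reasonably recent versions of the package; column names in the result may differ slightly from the hand-built dataLong.

library(survival)
dataWide <- data.frame(id     = 1:4,
                       time   = c(0.88820072, 0.05562832, 5.24113929, 2.91370906),
                       status = c(1, 0, 1, 1))
## split follow-up at t = 1 and t = 2; tstart is added and time becomes the stop time
dataLong2 <- survSplit(Surv(time, status) ~ ., data = dataWide,
                       cut = c(1, 2), episode = "period")
dataLong2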

54,507 | How to interpret the model parameters of libsvm via MATLAB interface?
Support vector machine classifiers use the following decision function to determine the label for a test instance $\mathbf{z}$:
$f(\mathbf{z})=\mathtt{sign}\big(\sum_{i=1}^{totalSV} y_i \alpha_i \kappa(\mathbf{x}_i,\mathbf{z})-\rho\big)=\mathtt{sign}\big(\langle\mathbf{w},\Phi(\mathbf{z})\rangle-\rho\big)$,
where $\kappa(\cdot,\cdot)$ is the kernel function, $\alpha$ contains the support values, $\mathbf{y}$ is the training label vector, $\rho$ is a bias term and $\mathbf{w}$ is the separating hyperplane in feature space.
In a LIBSVM model, sv_coef contains $\alpha_i y_i$ and SVs contains the support vectors ($\mathbf{x}_i$). To predict you need to perform kernel evaluations between the test point and all support vectors.
For the linear kernel ($\kappa(\mathbf{x},\mathbf{z})=\mathbf{x}^T\mathbf{z}$) you can compute $\mathbf{w}$ explicitly:
$\mathbf{w}=\sum_{i=1}^{totalSV} \alpha_i y_i \mathbf{x}_i=\mathtt{sv\_coef}^T \times \mathtt{SVs}$.
Subsequently, predictions are simply based on the sign of $\mathbf{w}^T\mathbf{z}-\rho$.

54,508 | How to interpret the model parameters of libsvm via MATLAB interface?
Nevermind, I found svm.cpp in the svmlight package and read the svm_predict function. It is written for the general case of n classes, but for the simple case of two classes their logic boils down to
>> sv=model.SVs;
>> svc=model.sv_coef;
>> sv546=sv(1:546, :); %Since model.label is [1, -1] and model.nSV=[546; 246]
>> sv246=sv(547:end, :);
>> svc546=svc(1:546);
>> svc246=svc(547:end);
>> weight_for_minus1=transpose(svc246)*sv246; %Since model.label is [1, -1] and model.nSV=[546; 246]
>> weight_for_plus1=transpose(svc546)*sv546;
>> 'now multiply weight_for_minus1 and weight_for_plus1 with the 997-dimensional feature and select whichever is positive'

54,509 | What is the proper name for a backward forecast?
Backcast, although I have seen hindcast as well.

54,510 | What is the proper name for a backward forecast?
I'm just learning time series now, but I found this wonderful paper: Caporin and Sartore use the term back-calculation. They acknowledge it's not a common term, noting other terms in use: retropolation, reconstruction, and back-casting (ARIMA).
see end of section 2:
Caporin, M., & Sartore, D. (2006). Methodological aspects of time series back-calculation.

54,511 | NMDS and variance explained by vector fitting
I wouldn't place much stock in "rules of thumb" such as this. It is dependent upon so many things, such as the number of variables, the number of sites, what dissimilarity you use, etc. Also note that the vector fitting approach is inherently linear and we have no reason to presume that the relationship between the variable and the NMDS configuration is linear.
The key thing is that it is a small/modest correlation but that it is significant. But you probably want to look at the linearity assumption.
In the vegan package for R we have function ordisurf() for this. It fits a 2-d surface to an NMDS solution using a GAM via function gam() in package mgcv. It essentially fits the model
$$g(\mu_i) = \eta_i = \beta_0 + f(nmds_{1i}, nmds_{2i})$$
where $f$ is a 2-d smooth function of the 1st and 2nd axes of the NMDS, $g$ is the link function, $\mu$ the expectation of the response, and $\eta$ is the linear predictor. The error distribution is a member of the exponential family of distributions. The function $f$ can be isotropic in which case we use smooths formed by s() employing by default thin plate splines. Anisotropic surfaces can be fitted too, where we use smooths formed by te(); tensor product smooths.
The complexity of the smooth is chosen during fitting using a, by default in the development versions, REML criterion. GCV smoothness selection is also available.
Here is an R example using one of the in-built data sets provided with vegan
require("vegan")
data(dune)
data(dune.env)
## fit NMDS using Bray-Curtis dissimilarity (default)
set.seed(12)
sol <- metaMDS(dune)
## NMDS plot
plot(sol)
## Fit and add the 2d surface
sol.s <- ordisurf(sol ~ A1, data = dune.env, method = "REML",
select = TRUE)
## look at the fitted model
summary(sol.s)
This produces
> summary(sol.s)
Family: gaussian
Link function: identity
Formula:
y ~ s(x1, x2, k = knots[1], bs = bs[1])
<environment: 0x2fb78a0>
Parametric coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 4.8500 0.4105 11.81 9.65e-10 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Approximate significance of smooth terms:
edf Ref.df F p-value
s(x1,x2) 1.591 9 0.863 0.0203 *
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
R-sq.(adj) = 0.29 Deviance explained = 35%
REML score = 41.587 Scale est. = 3.3706 n = 20
and the corresponding plot of the NMDS with the fitted surface overlaid.
In this case a linear vector fit seems reasonable for this variable. Read ?ordisurf for details on the arguments used, especially what select = TRUE does.

54,512 | Computing by hand the optimal threshold value for a biomarker using the Youden Index
It is a mistake to think that an optimum threshold can be computed without knowing the cost of a false positive and the cost of a false negative for a specific subject. And if those costs are not identical for all subjects, it is easy to see that no threshold should be used. ROC curves and Youden indexes are only useful for mass one-time group decision making where utilities are unknowable. You are making a series of very subtle assumptions. One of these is that the binary choice is forced, i.e., there is no gray zone that would lead to a "defer the decision, get more data" action.

54,513 | What test should be used to tell if two linear regression lines are significantly different?
In this particular case, one of your lines has a known slope and intercept (intercept 0, slope 1), so you don't fit some larger interaction model; you can just jointly test whether the other model is consistent with the population intercept and slope being 0 and 1 respectively.
This is a standard thing for a linear model.
It's slightly easier to regress y-x on x and, in that regression, test for both the intercept and the slope being 0.
The RSS for the reduced model is the sum of (y-x)^2. The RSS for the full model can be extracted from the anova of the linear regression and you can perform an F test, but if you're working in R you can do this kind of thing:
nullm <- lm((y-x)~0)   # null model: no free parameters, i.e. y = 0 + 1*x
fullm <- lm((y-x)~x)   # full model: free intercept and slope for y - x
anova(nullm,fullm)     # F test for the two added parameters
The model "nullm" is the LS model $y = 0 + 1 x + \varepsilon$
The model "fullm" is just the LS model with two parameters, but it has to have the same LHS as "nullm" to go into anova, so it looks unconventional. The function anova then calculates the F-test for the improvement of the full model over the null, which adds two parameters and reduces the residual sum of squares by the SS explained by the full model. This acts as a test of the null ($\text{H}_0: \alpha=\beta=0$) against the alternative that at least one of the two is not 0.
However, in this case, you can already see that the hypothesis is going to be rejected because the intercept is already very different from 0 (p=0.001), so there's probably no need to go through and do the whole thing; the result will be rejection at typical significance levels.

54,514 | What test should be used to tell if two linear regression lines are significantly different?
Just estimate both lines in a single model using an interaction effect and test whether the interaction effect and the main effect equal 0.
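A minimal R sketch of this approach (an added illustration; y, x, and the two-level factor group in the data frame dat are hypothetical): the group main effect shifts the intercept and the interaction shifts the slope, and both are tested jointly against a single common line.

## one model for both lines: intercept and slope shifts for the second group
full    <- lm(y ~ x * group, data = dat)
## restricted model: a single line common to both groups
reduced <- lm(y ~ x, data = dat)
## joint F test that the main effect of group and the interaction are both 0
anova(reduced, full)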

54,515 | High censoring rate in survival analysis
The Kaplan-Meier estimator is not biased when a large proportion of individuals are censored. One of the problems we often observe is that the majority of power for the log-rank test is derived from early failure times which are difficult to observe in KM curves. It does mean that the median survival time is an unreliable point estimate. However, the hazard ratio from a Cox model serves as a good estimate of the relative risk and is unbiased regardless of the amount of censoring that occurs. Both the log rank and the Cox model are adequate tests of survival that are unbiased in interval, right, and left censored data.
The KM curves are biased when there is informative censoring however.

54,516 | High censoring rate in survival analysis
K-M does not work well for censoring proportions >50%. If you can analyze the distribution of your data, it is better to use a parametric method such as MLE. Alternatively, you can also use imputation methods.
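A minimal sketch of the parametric route in R (an added illustration using the survival package; the data frame dat with time and status columns is hypothetical):

library(survival)
## Weibull model fitted by maximum likelihood
fit <- survreg(Surv(time, status) ~ 1, data = dat, dist = "weibull")
summary(fit)
## model-based median survival time (identical for every row in this intercept-only model)
predict(fit, type = "quantile", p = 0.5)[1]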

54,517 | vector fit interpretation NMDS
Vector fitting is a regression. Explicitly, the model fitted is
$$y = \beta_1 X_1 + \beta_2 X_2 + \varepsilon$$
where $y$ is the environmental variable requiring a vector, $X_i$ is the $i$th ordination "axis" score (here for the first two ordination "axes") and $\varepsilon$ the unexplained variance. Both $y$ and $X_i$ are centred prior to fitting the model, hence no intercept. The $\hat{\beta}_j$ are the coordinates of the vector for $y$ in the ordination space spanned by the $i$ ordination axes; these may be normalised to unit length.
As this is a regression, $R^2$ is easily computed, as could the significance of the coefficients or of the $R^2$ itself. However, we presume that the model assumptions are not fully met and hence we use a permutation test to assess the significance of the $R^2$ of the model.
The permutation test doesn't create the overall $R^2$, what is done is that we permute the values of the response $y$ into random order. Next we use the fitted regression model (equation above) to predict the randomised response data and compute the $R^2$ between the randomised response and the fitted values from the model. This $R^2$ value is recorded and then the procedure is done again with a different random permutation. We keep doing this a modest number of times (say 999). Under the null hypothesis of no relationship between the ordination "axis" scores and the environmental variable, the observed $R^2$ value should be a common value among the permuted $R^2$ values. If however the observed $R^2$ is extreme relative to the permutation distribution of $R^2$ then it is unlikely that the Null hypothesis is correct as we have substantial evidence against it. The proportion of times a randomised $R^2$ from the distribution is equal to or greater than the observed $R^2$ is a value known as the permutation $p$ value.
An example, fully worked may help with this. Using the vegan package for R and some in-built data
require(vegan)
data(varespec)
data(varechem)
## fit PCA
ord <- rda(varespec)
## fit vector for Al - gather data
dat <- cbind.data.frame(Al = varechem$Al,
scores(ord, display = "sites", scaling = 1))
## fit the model
mod <- lm(Al ~ PC1 + PC2, data = dat)
summary(mod)
This gives
> summary(mod)
Call:
lm(formula = Al ~ PC1 + PC2, data = dat)
Residuals:
Min 1Q Median 3Q Max
-172.30 -58.00 -12.54 58.44 239.46
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 142.475 19.807 7.193 4.34e-07 ***
PC1 31.143 9.238 3.371 0.00289 **
PC2 27.492 13.442 2.045 0.05356 .
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 97.04 on 21 degrees of freedom
Multiple R-squared: 0.4254, Adjusted R-squared: 0.3707
F-statistic: 7.774 on 2 and 21 DF, p-value: 0.002974
Note the value for the Multiple R-squared (0.4254).
vegan has a canned function for doing all of this, on multiple environmental variables at once; envfit(). Compare the $R^2$ from above with the vector-fitted value (to keep things simple I just do Al here, but you could pass all of varechem and envfit would fit vectors [centroids for factors] for all variables.)
set.seed(42) ## make this reproducible - pseudo-random permutations!
envfit(ord, varechem[, "Al", drop = FALSE])
> envfit(ord, varechem[, "Al", drop = FALSE])
***VECTORS
PC1 PC2 r2 Pr(>r)
Al 0.85495 0.51871 0.4254 0.004 **
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
P values based on 999 permutations.
The two $R^2$ values shown are exactly the same.
[Do note that envfit doesn't actually fit models via lm internally - it uses a QR decomposition. This is the same method employed deeper down in lm, but we call it directly to fit the model manually as we want it, without the extra things that something like lm.fit would give us.]

54,518 | Ordering in VAR models
I will try to answer the second part of your question. If you understand this, I hope you will be able to answer the first part.
Ordering means placing the variables (all of them) in decreasing order of exogeneity. For example, if y1, y2, and y3 are three variables in the system and if we have from economic theory (or previous empirical findings) that y2 is relatively more exogenous than y1 and y3, and y1 is relatively more exogenous than y3 but less exogenous than y2, then we have the order y2 y1 y3. To put it in simple words, y2 (say weather) is more likely to influence y1 (GDP) and y3 (traveling) but the reverse is not true, and y1 (GDP) is more likely to influence y3 (traveling) but the reverse is not true. Note that we are talking here of relative exogeneity, not absolute exogeneity.
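A minimal sketch of how this ordering enters an analysis in R's vars package (an added illustration with simulated stand-ins for the three variables): the Cholesky ordering used for orthogonalized impulse responses is simply the column order of the data supplied to VAR.

library(vars)
## simulated stand-ins for weather (y2), GDP (y1) and traveling (y3)
set.seed(1)
dat <- data.frame(y2 = rnorm(200), y1 = rnorm(200), y3 = rnorm(200))
## columns ordered from most to least exogenous: y2, y1, y3
fit <- VAR(dat[, c("y2", "y1", "y3")], p = 1, type = "const")
## orthogonalized impulse responses inherit this Cholesky ordering
plot(irf(fit, n.ahead = 10, ortho = TRUE))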

54,519 | Comparison between MDL and BIC
The Bayesian Information Criterion (BIC) is given as:
\begin{equation}\label{eq_BIC_FINAL}
BIC = \log f\left( {\bf{x}}|\hat{{\bf{\theta}}}_i ; H_i\right) - \frac{1}{2} \log \left| I\left(\hat{{\bf{\theta}}}_i \right)\right| + \frac{n_i}{2} \log 2 \pi e \overset{i}{\rightarrow} max,
\end{equation}
where $i=1,\cdots,M$ is the model order index, $\left| \cdot \right|$ is the determinant, $I\left(\hat{{\bf{\theta}}}_i \right)$ is the Fisher Information Matrix for parameter ${\bf{\theta}}_i$ and $n_i$ is the number of unknown deterministic parameters under each hypothesized model.
MDL is derived directly from the BIC when $N\to \infty$ assuming i.i.d samples. Assuming $N$ i.i.d. samples we can write $I\left(\hat{\theta}_i \right) = N i\left(\hat{\theta}_i \right)$, where $i\left(\hat{\theta}_i \right)$ is the Fisher information matrix based on only one sample evaluated at $\hat{\theta}_i$. Inserting this into the BIC we get
\begin{equation}\label{eq_MDL1}
\log f\left( {\bf{x}} ; H_i\right) = \log f\left( {\bf{x}} |\hat{\theta}_i ; H_i\right) - \frac{n_i}{2} \log N - \frac{1}{2} \log \left| i\left(\hat{\theta}_i \right)\right| + \frac{n_i}{2} \log 2 \pi e,
\end{equation}
where $H_i$ is the hypothesized model order.
Now taking $N$ to infinity will leave only the first two terms in the above equation so we get
\begin{equation}
\log f\left( {\bf{x}} ; H_i\right) = \log f\left( {\bf{x}} |\hat{\theta}_i ; H_i\right) - \frac{n_i}{2} \log N \overset{i}{\rightarrow} max.
\end{equation}
Usually in the literature the signs are in the opposite direction so we wish to minimize the MDL:
\begin{equation}
MDL = -\log f\left( {\bf{x}} |\hat{\theta}_i ; H_i\right) + \frac{n_i}{2} \log N \overset{i}{\rightarrow} min.
\end{equation}
So, obviously, if one of the assumptions made above - namely, a large number of samples and i.i.d. sampling - does not hold, MDL will not give the same results as BIC.
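A small numerical check in R (an added sketch, not part of the original derivation): the MDL above, $-\log f(\mathbf{x}|\hat{\theta}) + \frac{n_i}{2}\log N$, equals half of R's BIC() value, which is defined on the $-2\log$-likelihood scale, so under the large-sample approximation the two criteria order models identically.

set.seed(1)
x <- rnorm(100); y <- 1 + 2 * x + rnorm(100)
fit <- lm(y ~ x)
N <- nobs(fit)
k <- attr(logLik(fit), "df")              # number of estimated parameters
mdl <- -as.numeric(logLik(fit)) + (k / 2) * log(N)
c(MDL = mdl, half_BIC = BIC(fit) / 2)     # agree up to floating-point rounding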

54,520 | Comparison between MDL and BIC
No, if MDL is minimized by a model with two states while BIC is minimized by a model with four, that would not of itself imply that MDL is better.
But it's possible I missed something. What would make you think so?

54,521 | Comparison between MDL and BIC
In a mathematical sense, there is no such thing as "better." There is only larger or smaller according to some sort of norm or other function producing real numbers and/or intervals as output.
If you ever hear anyone say something is "optimal," I recommend asking, "In what sense?" This forces them to tell how they came to that conclusion, and if it is not with a comparison between real numbers/intervals, it is probably not a scientific method.

54,522 | Parametric vs. Nonparametric
Parametric does NOT mean "Bayesian based".
Here is one definition of "parametric statistics":
"Parametric statistics is a branch of statistics that assumes data come from a type of probability distribution and makes inferences about the parameters of the distribution." (From Wikipedia.)
As Wikipedia goes on to note, most of the common, elementary statistics are parametric. For example, ordinary least squares regression is parametric. Loess regression is nonparametric.
Parametric statistics are usually easier to interpret and may be more powerful (in a statistical sense) but they are based on more assumptions than nonparametric statistics. They vary in their degree of robustness, but are usually less robust than nonparametric statistics.
For example, the equation derived from ordinary least squares regression is (in most cases, anyway) quite easy to understand. That from a regression involving splines is often much less clear and may require graphical representation to be understood well.
Bayesian statistics is something altogether different, having to do with using prior information. | Parametric vs. Nonparametric | Parametric does NOT mean "Bayesian based".
Here is one definition of "parametric statistics"
Parametric statistics is a branch of statistics that assumes data come
from a type of probability distri | Parametric vs. Nonparametric
Parametric does NOT mean "Bayesian based".
Here is one definition of "parametric statistics"
Parametric statistics is a branch of statistics that assumes data come
from a type of probability distribution and makes inferences about the parameters of the distribution
(From Wikipedia).
As Wikipedia goes on to note, most of the common, elementary statistics are parametric. For example, ordinary least squares regression is parametric. Loess regression is nonparametric.
Parametric statistics are usually easier to interpret and may be more powerful (in a statistical sense) but they are based on more assumptions than nonparametric statistics. They vary in their degree of robustness, but are usually less robust than nonparametric statistics.
For example, the equation derived from ordinary least squares regression is (in most cases, anyway) quite easy to understand. That from a regression involving splines is often much less clear and may require graphical representation to be understood well.
Bayesian statistics is something altogether different, having to do with using prior information. | Parametric vs. Nonparametric
Parametric does NOT mean "Bayesian based".
Here is one definition of "parametric statistics"
Parametric statistics is a branch of statistics that assumes data come
from a type of probability distri |
54,523 | General definition of stochastic processes | Definitions
Recall that a random variable $X$ is a measurable function defined on a probability space $(\Omega,\mathcal{F},\mathbb{P})$ with values in a real vector space $V$. If you would like to focus on concepts and shed the mathematical details, you may think of it as a
consistent way to write numbers on tickets in a box,
as I claimed in an answer at https://stats.stackexchange.com/a/54894. I like this point of view because it beautifully handles complicated generalizations, such as stochastic processes.
Instead of writing a number on each ticket, pick (once and for all) an index space $T$, such as the real numbers (to represent all possible times relative to some starting time) or all natural numbers (to represent discrete time series), or all possible points in space (for a spatial stochastic process). On each ticket $\omega$ there is written an entire real-valued function
$$X(\omega): T\to \mathbb{R}.$$
That's a stochastic process. To sample from it, mix up the tickets thoroughly and pull one out at random with probabilities given by $\mathbb{P}$.
This schematic of a stochastic process $X$ shows parts of three tickets, $\omega_1$, $\omega_2$, and $\omega_3$. On each ticket $\omega$ is displayed a function $X(\omega)$. For any $t$ on the horizontal axis (representing $T$) and any ticket $\omega$ in the box, you can look up the value of $\omega$'s function at $t$ and write it on the ticket: that's the random variable $X(\omega)(t)$.
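A rough computational analogue of this picture (an added sketch, not part of the original answer; the random-walk process and the index set T = 1, ..., 50 are just convenient assumptions) treats each simulated path as one ticket $\omega$ carrying a whole function $X(\omega)$:
set.seed(2)
T.index <- 1:50                                          # the index space T (discrete here)
tickets <- replicate(3, cumsum(rnorm(length(T.index))))  # three tickets; each column is one X(omega)
matplot(T.index, tickets, type = "l", lty = 1, xlab = "t", ylab = "X(omega)(t)")
tickets[10, ]                                            # the random variable X_10 read off the three tickets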
Equivalent points of view
At the risk of seeming redundant, observe there are three mathematically equivalent ways to view $X$:
A random function $$X(\omega): T \to \mathbb{R}.$$
A random variable whose tickets are time-stamped outcomes in $\Omega$ $$X:T \times \Omega \to \mathbb{R};\quad X(t, \omega) = X(\omega)(t).$$
This is rather a formal equivalence, without a nice tickets-in-box interpretation.
An indexed set of random variables $$X_t: \Omega \to \mathbb{R};\quad X_t(\omega) = X(\omega)(t).$$
To sample from any $X_t$, pick a random ticket $\omega$ from the box and--ignoring the rest of the function $X$--just read its value at $t$.
Neither the sample space $\Omega$ nor the underlying probability measure $\mathbb{P}:\mathcal{F}\to\mathbb{R}$ need to change at all in any of these points of view.
Another approach
To work with a stochastic process, we can often reduce our considerations to finite subsets of $T$. If you fix one $t\in T$, and write the particular value $X(\omega)(t)$ on each ticket, you have--obviously--a random variable. Its name is $X_t$. Formally,
$$X_t(\omega) = X(\omega)(t).$$
If you fix two indexes $s, t\in T$, then you can write the ordered pair $(X(\omega)(s), X(\omega)(t))$ on each ticket $\omega$. This is a bivariate random variable, written $(X_s, X_t)$. It can be studied like any other bivariate random variable. It has an obvious relationship to the preceding univariate variables $X_t$ and $X_s$: they are its marginals.
You can go further and consider any finite sequence of indexes $\mathcal{T}=(t_1, t_2, \ldots, t_n)$, and similarly define an $n$-variate random variable
$$X_\mathcal{T}(\omega) = (X(\omega)(t_1), X(\omega)(t_2), \ldots, X(\omega)(t_n)).$$
All these multivariate random variables are obviously interconnected. For example, if we re-order the $t_i$ we get another, distinct, random variable--but it's really the "same" random variable with its values re-ordered. And if we simply ignore some of the indexes, we get a kind of generalized marginal distribution, in the same way that $X_s$ is related to $(X_s, X_t)$.
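Continuing the simulation sketch above (again an added illustration; a random walk stands in for a generic process), the finite-dimensional variables $X_\mathcal{T}$ are obtained simply by reading a few coordinates off many tickets:
set.seed(3)
paths <- replicate(1000, cumsum(rnorm(50)))   # 1000 tickets with T = 1:50
X.T <- t(paths[c(5, 20, 40), ])               # the trivariate variable (X_5, X_20, X_40)
colMeans(X.T)                                 # empirical means of the marginals
cov(X.T)                                      # empirical covariances; for this walk Cov(X_s, X_t) is about min(s, t)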
The Kolmogorov Extension Theorem asserts that such a "consistent" family of multivariate random variables, indexed by finite subsets of $T$, is the same thing as the original stochastic process. In other words,
A stochastic process is a consistent family of multivariate random variables $X_\mathcal{T}$.
Afterward
The Kolmogorov Extension Theorem explains why in much of the literature you will see a focus on analyzing such families, especially for small $n$ (usually $n=1$ and $n=2$). Many kinds of stochastic processes are characterized in terms of them. For instance, in a stationary process a group of transformations $G$ operates transitively on $T$ without changing the distributions of $X_\mathcal{T}$. Specifically, for any $g\in G$ and $\mathcal{T}\subset T$, let $$g(\mathcal{T}) = \{g(t)\,|\, t\in \mathcal{T}\}$$ be the image of $\mathcal{T}$. Then $X_\mathcal{T}$ and $X_{g(\mathcal{T})}$ must have the same (multivariate) distribution. The commonest example is $T=\mathbb{R}$ and $G$ is the group of translations $\{t\to t+g\,|\, g\in\mathbb{R}\}$.
A process is second-order stationary when this invariance relationship is necessarily true for subsets $\mathcal{T}$ having just one or two elements, but perhaps not true for larger subsets. Assuming second-order stationarity means we can focus analysis on the univariate and bivariate distributions determined by $X$ and we don't have to worry about the origin of the "times" $T$. | General definition of stochastic processes | Definitions
Recall that a random variable $X$ is a measurable function defined on a probability space $(\Omega,\mathcal{F},\mathbb{P})$ with values in a real vector space $V$. If you would like to fo | General definition of stochastic processes
Definitions
Recall that a random variable $X$ is a measurable function defined on a probability space $(\Omega,\mathcal{F},\mathbb{P})$ with values in a real vector space $V$. If you would like to focus on concepts and shed the mathematical details, you may think of it as a
consistent way to write numbers on tickets in a box,
as I claimed in an answer at https://stats.stackexchange.com/a/54894. I like this point of view because it beautifully handles complicated generalizations, such as stochastic processes.
Instead of writing a number on each ticket, pick (once and for all) an index space $T$, such as the real numbers (to represent all possible times relative to some starting time) or all natural numbers (to represent discrete time series), or all possible points in space (for a spatial stochastic process). On each ticket $\omega$ there is written an entire real-valued function
$$X(\omega): T\to \mathbb{R}.$$
That's a stochastic process. To sample from it, mix up the tickets thoroughly and pull one out at random with probabilities given by $\mathbb{P}$.
This schematic of a stochastic process $X$ shows parts of three tickets, $\omega_1$, $\omega_2$, and $\omega_3$. On each ticket $\omega$ is displayed a function $X(\omega)$. For any $t$ on the horizontal axis (representing $T$) and any ticket $\omega$ in the box, you can look up the value of $\omega$'s function at $t$ and write it on the ticket: that's the random variable $X(\omega)(t)$.
Equivalent points of view
At the risk of seeming redundant, observe there are three mathematically equivalent ways to view $X$:
A random function $$X(\omega): T \to \mathbb{R}.$$
A random variable whose tickets are time-stamped outcomes in $\Omega$ $$X:T \times \Omega \to \mathbb{R};\quad X(t, \omega) = X(\omega)(t).$$
This is rather a formal equivalence, without a nice tickets-in-box interpretation.
An indexed set of random variables $$X_t: \Omega \to \mathbb{R};\quad X_t(\omega) = X(\omega)(t).$$
To sample from any $X_t$, pick a random ticket $\omega$ from the box and--ignoring the rest of the function $X$--just read its value at $t$.
Neither the sample space $\Omega$ nor the underlying probability measure $\mathbb{P}:\mathcal{F}\to\mathbb{R}$ need to change at all in any of these points of view.
Another approach
To work with a stochastic process, we can often reduce our considerations to finite subsets of $T$. If you fix one $t\in T$, and write the particular value $X(\omega)(t)$ on each ticket, you have--obviously--a random variable. Its name is $X_t$. Formally,
$$X_t(\omega) = X(\omega)(t).$$
If you fix two indexes $s, t\in T$, then you can write the ordered pair $(X(\omega)(s), X(\omega)(t))$ on each ticket $\omega$. This is a bivariate random variable, written $(X_s, X_t)$. It can be studied like any other bivariate random variable. It has an obvious relationship to the preceding univariate variables $X_t$ and $X_s$: they are its marginals.
You can go further and consider any finite sequence of indexes $\mathcal{T}=(t_1, t_2, \ldots, t_n)$, and similarly define an $n$-variate random variable
$$X_\mathcal{T}(\omega) = (X(\omega)(t_1), X(\omega)(t_2), \ldots, X(\omega)(t_n)).$$
All these multivariate random variables are obviously interconnected. For example, if we re-order the $t_i$ we get another, distinct, random variable--but it's really the "same" random variable with its values re-ordered. And if we simply ignore some of the indexes, we get a kind of generalized marginal distribution, in the same way that $X_s$ is related to $(X_s, X_t)$.
The Kolmogorov Extension Theorem asserts that such a "consistent" family of multivariate random variables, indexed by finite subsets of $T$, is the same thing as the original stochastic process. In other words,
A stochastic process is a consistent family of multivariate random variables $X_\mathcal{T}$.
Afterward
The Kolmogorov Extension Theorem explains why in much of the literature you will see a focus on analyzing such families, especially for small $n$ (usually $n=1$ and $n=2$). Many kinds of stochastic processes are characterized in terms of them. For instance, in a stationary process a group of transformations $G$ operates transitively on $T$ without changing the distributions of $X_\mathcal{T}$. Specifically, for any $g\in G$ and $\mathcal{T}\subset T$, let $$g(\mathcal{T}) = \{g(t)\,|\, t\in \mathcal{T}\}$$ be the image of $\mathcal{T}$. Then $X_\mathcal{T}$ and $X_{g(\mathcal{T})}$ must have the same (multivariate) distribution. The commonest example is $T=\mathbb{R}$ and $G$ is the group of translations $\{t\to t+g\,|\, g\in\mathbb{R}\}$.
A process is second-order stationary when this invariance relationship is necessarily true for subsets $\mathcal{T}$ having just one or two elements, but perhaps not true for larger subsets. Assuming second-order stationarity means we can focus analysis on the univariate and bivariate distributions determined by $X$ and we don't have to worry about the origin of the "times" $T$. | General definition of stochastic processes
Definitions
Recall that a random variable $X$ is a measurable function defined on a probability space $(\Omega,\mathcal{F},\mathbb{P})$ with values in a real vector space $V$. If you would like to fo |
54,524 | Practical interpretation for $u_t = \log(x_t) - \log(x_{t-1})$ | If a continuous-time process $x_t$ is geometric brownian motion it would have this property, or the discrete-time equivalent (geometric random walk).
A difference in logs is (for $u_t$ small at least) effectively a percentage change.
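A quick numerical illustration in R (added here; the numbers are invented) of how closely the log difference tracks the proportional change when changes are small, and how the two diverge for a large change:
x <- c(100, 101, 103, 150)           # hypothetical level series
log.diff <- diff(log(x))             # u_t = log(x_t) - log(x_{t-1})
pct.change <- diff(x) / head(x, -1)  # ordinary proportional change
round(cbind(log.diff, pct.change), 4)
# the ~1% and ~2% changes agree closely; the ~46% jump does not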
See also the connection to the force of mortality (what actuaries used to call the hazard function, or rather they seem to be using it less these days) and the force of interest, which are 'instantaneous' equivalents of your annualized (or more generally, periodized) discrete measure. | Practical interpretation for $u_t = \log(x_t) - \log(x_{t-1})$ | If a continuous-time process $x_t$ is geometric brownian motion it would have this property, or the discrete-time equivalent (geometric random walk).
A difference in logs is (for $u_t$ small at lea | Practical interpretation for $u_t = \log(x_t) - \log(x_{t-1})$
If a continuous-time process $x_t$ is geometric brownian motion it would have this property, or the discrete-time equivalent (geometric random walk).
A difference in logs is (for $u_t$ small at least) effectively a percentage change.
See also the connection to the force of mortality (what actuaries used to call the hazard function, or rather they seem to be using it less these days) and the force of interest, which are 'instantaneous' equivalents of your annualized (or more generally, periodized) discrete measure. | Practical interpretation for $u_t = \log(x_t) - \log(x_{t-1})$
If a continuous-time process $x_t$ is geometric brownian motion it would have this property, or the discrete-time equivalent (geometric random walk).
A difference in logs is (for $u_t$ small at lea |
54,525 | Practical interpretation for $u_t = \log(x_t) - \log(x_{t-1})$ | If $u_{t}$ is near 0, then $100\,u_{t}$ can be interpreted as the percentage change of $x$ from period $t-1$ to $t$. That is because we can approximate $\log(x_{t}/x_{t-1})$ by $x_{t}/x_{t-1}-1$ "very near" the point $x=1$; when $x_{t}/x_{t-1}$ is far away from 1 this approximation doesn't hold. Put the functions $y=\log(x)$ and $y=x-1$ on one plot to see this. | Practical interpretation for $u_t = \log(x_t) - \log(x_{t-1})$ | If $u_{t}$ is near 0, then $100\,u_{t}$ can be interpreted as the percentage change of $x$ from period $t-1$ to $t$. That is because we can approximate $\log(x_{t}/x_{t-1 | Practical interpretation for $u_t = \log(x_t) - \log(x_{t-1})$
If $u_{t}$ is near 0, then $100\,u_{t}$ can be interpreted as the percentage change of $x$ from period $t-1$ to $t$. That is because we can approximate $\log(x_{t}/x_{t-1})$ by $x_{t}/x_{t-1}-1$ "very near" the point $x=1$; when $x_{t}/x_{t-1}$ is far away from 1 this approximation doesn't hold. Put the functions $y=\log(x)$ and $y=x-1$ on one plot to see this. | Practical interpretation for $u_t = \log(x_t) - \log(x_{t-1})$
If $u_{t}$ is near 0, then $100\,u_{t}$ can be interpreted as the percentage change of $x$ from period $t-1$ to $t$. That is because we can approximate $\log(x_{t}/x_{t-1 |
54,526 | Disagreement between normality tests and histogram graphs | It appears that your data can only take on positive values. In this case, the hypothesis of normality is often rejected. Normally distributed random variables range from positive to negative infinity, so only positive values would violate this. You could try taking the log of the observations and seeing whether these are normally distributed.
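As a concrete sketch of that suggestion (added here; the lognormal sample is only a stand-in for one of your positive-valued variables):
x <- rlnorm(200)                 # hypothetical positive-only data
qqnorm(log(x)); qqline(log(x))   # assess normality on the log scale
shapiro.test(log(x))             # a formal normality test applied to the logged values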
If your data follow a normal distribution, then the points in your QQ-plot should lie on a 45-degree line through the origin. Your plots do not look like that at all.
The KS test is giving an error because the distributions being tested are presumed to be continuous. In this case, the probability of witnessing two observations with the exact same value is 0. Your data set contains ties, invalidating this assumption. When there are ties, an asymptotic approximation is used (you can read about this in the help file). The error that you are receiving has nothing to do with data sets with different sizes.
In your post, you never specified the question that you are trying to answer--with sufficient precision, anyway. Do you really want to test that the distributions are the same? Would it be sufficient to test that the means are the same?
Unless you are willing to assume that the variables follow some distribution, there isn't much of an alternative to the KS test if you want to test for the distributions being the same. But there are several ways to test for differences in means. | Disagreement between normality tests and histogram graphs | It appears that your data can only take on positive values. In this case, the hypothesis of normality is often rejected. Normally distributed random variables range from positive to negative infinity, | Disagreement between normality tests and histogram graphs
It appears that your data can only take on positive values. In this case, the hypothesis of normality is often rejected. Normally distributed random variables range from positive to negative infinity, so only positive values would violate this. You could try taking the log of the observations and seeing whether these are normally distributed.
If your data follow a normal distribution, then the points in your QQ-plot should lie on a 45-degree line through the origin. Your plots do not look like that at all.
The KS test is giving an error because the distributions being tested are presumed to be continuous. In this case, the probability of witnessing two observations with the exact same value is 0. Your data set contains ties, invalidating this assumption. When there are ties, an asymptotic approximation is used (you can read about this in the help file). The error that you are receiving has nothing to do with data sets with different sizes.
In your post, you never specified the question that you are trying to answer--with sufficient precision, anyway. Do you really want to test that the distributions are the same? Would it be sufficient to test that the means are the same?
Unless you are willing to assume that the variables follow some distribution, there isn't much of an alternative to the KS test if you want to test for the distributions being the same. But there are several ways to test for differences in means. | Disagreement between normality tests and histogram graphs
It appears that your data can only take on positive values. In this case, the hypothesis of normality is often rejected. Normally distributed random variables range from positive to negative infinity, |
54,527 | How to apply Slutsky's Theorem? | Since user Max never turned his comment into an answer, to lay this one to officially rest:
By Lindeberg-Levy CLT indeed
$$\sqrt{n}( \bar X_n - \alpha) \xrightarrow{d} N(0,\sigma^2)$$
By the Law of Large Numbers
$$\bar Y_n \xrightarrow{p} \beta$$
Then we can apply Slutsky's theorem
$$Z_n=\frac{\sqrt{n}( \bar X_n - \alpha)}{\bar Y_n} \xrightarrow{d}\frac 1{\beta}N(0,\sigma^2) = N(0, \sigma^2/{\beta^2})$$ | How to apply Slutsky's Theorem? | Since user Max never turned his comment into an answer, to lay this one to officially rest:
By Lindeberg-Levy CLT indeed
$$\sqrt{n}( \bar X_n - \alpha) \xrightarrow{d} N(0,\sigma^2)$$
By the Law of L | How to apply Slutsky's Theorem?
Since user Max never turned his comment into an answer, to lay this one to officially rest:
By Lindeberg-Levy CLT indeed
$$\sqrt{n}( \bar X_n - \alpha) \xrightarrow{d} N(0,\sigma^2)$$
By the Law of Large Numbers
$$\bar Y_n \xrightarrow{p} \beta$$
Then we can apply Slutsky's theorem
$$Z_n=\frac{\sqrt{n}( \bar X_n - \alpha)}{\bar Y_n} \xrightarrow{d}\frac 1{\beta}N(0,\sigma^2) = N(0, \sigma^2/{\beta^2})$$ | How to apply Slutsky's Theorem?
Since user Max never turned his comment into an answer, to lay this one to officially rest:
By Lindeberg-Levy CLT indeed
$$\sqrt{n}( \bar X_n - \alpha) \xrightarrow{d} N(0,\sigma^2)$$
By the Law of L |
54,528 | Probability of an order statistic | Let's generalize a little: you have $n=8$ data in sorted order, $x_1 \lt x_2 \lt \cdots \lt x_n$, which you wish to divide randomly into groups of size $\alpha=4$ and $\beta=4$. Denote the division by the indicator of $\beta$: this is, in effect, an $n$-digit binary number having exactly $\beta$ ones. (Examples appear below.) In order for $x_i$ to be the $k=3$rd smallest in the second group, we need three things to happen. Binomial coefficients count the number of ways they can happen:
Digit $i$ of the indicator is $1$. This happens in $1 = \binom{1}{1}$ ways.
There are exactly $k-1$ $1$'s among digits $1$ through $i-1$. This happens in $\color{red}{\binom{i-1}{k-1}}$ ways.
There are exactly $\beta-k$ $1$'s among digits $i+1$ through $n$. This happens in $\color{blue}{\binom{n-i}{\beta-k}}$ ways.
These three events are independent because they describe non-overlapping positions in the indicator, so their product is the number of ways of performing the split.
The total number of ways in which the data can be split is given by the binomial coefficient $\binom{n}{\beta}$, each of which is equally likely, whence the chance that $x_i$ is $k$th smallest in the second group is
$$\frac{\color{red}{\binom{i-1}{k-1}} \color{blue}{\binom{n-i}{\beta-k}}}{\binom{n}{\beta}}.$$
(Here and later, red objects denote or count numbers ranked ahead of $x_i$ and blue objects denote or count numbers ranked after $x_i$.)
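As an added aside (not part of the original derivation), this formula is easy to evaluate directly in R; the little helper below reproduces the table that follows.
# P(x_i is the k-th smallest of a random subset of size beta taken from n ordered values)
p.rank <- function(i, k, n, beta) {
  choose(i - 1, k - 1) * choose(n - i, beta - k) / choose(n, beta)
}
p.rank(i = 1:8, k = 3, n = 8, beta = 4)   # 0, 0, 5/70, 12/70, 18/70, 20/70, 15/70, 0
sum(p.rank(1:8, 3, 8, 4))                 # equals 1, the law of total probability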
For example, let $n=8$, $\alpha=4$, $\beta=n-\alpha=4$, and $k=3$ (which is the specific instance in the question). Let's tabulate $i$, the corresponding binomial coefficients, and their product:
i Choose(i-1,2) Choose(8-i,4-3) Product
1 0 7 0
2 0 6 0
3 1 5 5
4 3 4 12
5 6 3 18
6 10 2 20
7 15 1 15
8 21 0 0
The total, $0+0+5+12+\cdots+15+0$, is $70$, which is precisely $\binom{8}{4}$, confirming the law of total probability. The interpretations are:
There is no chance that either $x_1$ or $x_2$ could be the third smallest elements in the second group.
There are $1\times 5=5$ ways out of $70$ that $x_3$ could be third smallest in the second group, whence the answer to the question about the $T_i$ is $5/70 = 1/14 \approx 7.1$%. In terms of the binary indicators, these five ways can be written
$$\color{red}{11}\ 1\ \color{blue}{10000},\quad \color{red}{11}\ 1\ \color{blue}{01000},\quad \color{red}{11}\ 1\ \color{blue}{00100},\quad \color{red}{11}\ 1\ \color{blue}{00010},\quad \color{red}{11}\ 1\ \color{blue}{00001}.$$
For instance, the fifth indicator $\color{red}{11}\ 1\ \color{blue}{00001}$ identifies $\{\color{red}{x_1}, \color{red}{x_2}, x_3, \color{blue}{x_8}\}$ as the second group.
There are $3\times 4=12$ ways out of $70$ that $x_4$ could be third smallest in the second group:
$$\color{red}{110}\ 1\ \color{blue}{1000},\quad \color{red}{110}\ 1\ \color{blue}{0100},\quad \color{red}{110}\ 1\ \color{blue}{0010},\quad \color{red}{110}\ 1\ \color{blue}{0001} \\
\color{red}{101}\ 1\ \color{blue}{1000},\quad \color{red}{101}\ 1\ \color{blue}{0100},\quad \color{red}{101}\ 1\ \color{blue}{0010},\quad \color{red}{101}\ 1\ \color{blue}{0001} \\
\color{red}{011}\ 1\ \color{blue}{1000},\quad \color{red}{011}\ 1\ \color{blue}{0100},\quad \color{red}{011}\ 1\ \color{blue}{0010},\quad \color{red}{011}\ 1\ \color{blue}{0001}.$$
Etc. | Probability of an order statistic | Let's generalize a little: you have $n=8$ data in sorted order, $x_1 \lt x_2 \lt \cdots \lt x_n$, which you wish to divide randomly into groups of size $\alpha=4$ and $\beta=4$. Denote the division b | Probability of an order statistic
Let's generalize a little: you have $n=8$ data in sorted order, $x_1 \lt x_2 \lt \cdots \lt x_n$, which you wish to divide randomly into groups of size $\alpha=4$ and $\beta=4$. Denote the division by the indicator of $\beta$: this is, in effect, an $n$-digit binary number having exactly $\beta$ ones. (Examples appear below.) In order for $x_i$ to be the $k=3$rd smallest in the second group, we need three things to happen. Binomial coefficients count the number of ways they can happen:
Digit $i$ of the indicator is $1$. This happens in $1 = \binom{1}{1}$ ways.
There are exactly $k-1$ $1$'s among digits $1$ through $i-1$. This happens in $\color{red}{\binom{i-1}{k-1}}$ ways.
There are exactly $\beta-k$ $1$'s among digits $i+1$ through $n$. This happens in $\color{blue}{\binom{n-i}{\beta-k}}$ ways.
These three events are independent because they describe non-overlapping positions in the indicator, so their product is the number of ways of performing the split.
The total number of ways in which the data can be split is given by the binomial coefficient $\binom{n}{\beta}$, each of which is equally likely, whence the chance that $x_i$ is $k$th smallest in the second group is
$$\frac{\color{red}{\binom{i-1}{k-1}} \color{blue}{\binom{n-i}{\beta-k}}}{\binom{n}{\beta}}.$$
(Here and later, red objects denote or count numbers ranked ahead of $x_i$ and blue objects denote or count numbers ranked after $x_i$.)
For example, let $n=8$, $\alpha=4$, $\beta=n-\alpha=4$, and $k=3$ (which is the specific instance in the question). Let's tabulate $i$, the corresponding binomial coefficients, and their product:
i Choose(i-1,2) Choose(8-i,4-3) Product
1 0 7 0
2 0 6 0
3 1 5 5
4 3 4 12
5 6 3 18
6 10 2 20
7 15 1 15
8 21 0 0
The total, $0+0+5+12+\cdots+15+0$, is $70$, which is precisely $\binom{8}{4}$, confirming the law of total probability. The interpretations are:
There is no chance that either $x_1$ or $x_2$ could be the third smallest elements in the second group.
There are $1\times 5=5$ ways out of $70$ that $x_3$ could be third smallest in the second group, whence the answer to the question about the $T_i$ is $5/70 = 1/14 \approx 7.1$%. In terms of the binary indicators, these five ways can be written
$$\color{red}{11}\ 1\ \color{blue}{10000},\quad \color{red}{11}\ 1\ \color{blue}{01000},\quad \color{red}{11}\ 1\ \color{blue}{00100},\quad \color{red}{11}\ 1\ \color{blue}{00010},\quad \color{red}{11}\ 1\ \color{blue}{00001}.$$
For instance, the fifth indicator $\color{red}{11}\ 1\ \color{blue}{00001}$ identifies $\{\color{red}{x_1}, \color{red}{x_2}, x_3, \color{blue}{x_8}\}$ as the second group.
There are $3\times 4=12$ ways out of $70$ that $x_4$ could be third smallest in the second group:
$$\color{red}{110}\ 1\ \color{blue}{1000},\quad \color{red}{110}\ 1\ \color{blue}{0100},\quad \color{red}{110}\ 1\ \color{blue}{0010},\quad \color{red}{110}\ 1\ \color{blue}{0001} \\
\color{red}{101}\ 1\ \color{blue}{1000},\quad \color{red}{101}\ 1\ \color{blue}{0100},\quad \color{red}{101}\ 1\ \color{blue}{0010},\quad \color{red}{101}\ 1\ \color{blue}{0001} \\
\color{red}{011}\ 1\ \color{blue}{1000},\quad \color{red}{011}\ 1\ \color{blue}{0100},\quad \color{red}{011}\ 1\ \color{blue}{0010},\quad \color{red}{011}\ 1\ \color{blue}{0001}.$$
Etc. | Probability of an order statistic
Let's generalize a little: you have $n=8$ data in sorted order, $x_1 \lt x_2 \lt \cdots \lt x_n$, which you wish to divide randomly into groups of size $\alpha=4$ and $\beta=4$. Denote the division b |
54,529 | What is the joint probability distribution of two same variables | $X$ is not jointly continuous with itself in the sense that there is no joint density
function (pdf) $f_{X,X}(s,t)$ that has positive value over a region of positive area
in the plane
with coordinate axes $s$ and $t$. All the probability mass lies on the straight line of slope $1$ through the origin (a region of zero area) and the joint cumulative
distribution function (CDF) is
$$F_{X,X}(s,t) = P\{X \leq s, X \leq t\} = P\{X \leq \min(s,t)\} = F_X(\min(s,t)).$$
As whuber points out in the comments on another answer,
$\frac{\partial^2F_{X,X}(s,t)}{\partial s\partial t}$ is not
defined for $s=t$. | What is the joint probability distribution of two same variables | $X$ is not jointly continuous with itself in the sense that there is no joint density
function (pdf) $f_{X,X}(s,t)$ that has positive value over a region of positive area
in the plane
with coordinat | What is the joint probability distribution of two same variables
$X$ is not jointly continuous with itself in the sense that there is no joint density
function (pdf) $f_{X,X}(s,t)$ that has positive value over a region of positive area
in the plane
with coordinate axes $s$ and $t$. All the probability mass lies on the straight line of slope $1$ through the origin (a region of zero area) and the joint cumulative
distribution function (CDF) is
$$F_{X,X}(s,t) = P\{X \leq s, X \leq t\} = P\{X \leq \min(s,t)\} = F_X(\min(s,t)).$$
As whuber points out in the comments on another answer,
$\frac{\partial^2F_{X,X}(s,t)}{\partial s\partial t}$ is not
defined for $s=t$. | What is the joint probability distribution of two same variables
$X$ is not jointly continuous with itself in the sense that there is no joint density
function (pdf) $f_{X,X}(s,t)$ that has positive value over a region of positive area
in the plane
with coordinat |
54,530 | What is the joint probability distribution of two same variables | $F_{(X,X)}(t,s) = P[X \le t,X \le s] = P[X \le \inf(s,t)]$
$f(s,t) = \frac{ \partial ^2F_{(X,X)}}{\partial s \partial t}(t,s) = \frac{\partial^2 (P[X \le t] \Large{1_{\{s=t\}}})}{\partial t^2 } $
Where
$\frac{\partial \Large{1_{\{s=t\}}}}{\partial t}$
$= \lim_{\sigma \to 0} \frac{1}{\sigma\sqrt{2 \pi}} e^{\frac{-s^2}{2\sigma^2}}$
More info about the derivative of the indicator function is here | What is the joint probability distribution of two same variables | $F_{(X,X)}(t,s) = P[X \le t,X \le s] = P[X \le \inf(s,t)]$
$f(s,t) = \frac{ \partial ^2F_{(X,X)}}{\partial s \partial t}(t,s) = \frac{\partial^2 (P[X \le t] \Large{1_{\{s=t\}}})}{\partial^2 t } $
Wh | What is the joint probability distribution of two same variables
$F_{(X,X)}(t,s) = P[X \le t,X \le s] = P[X \le \inf(s,t)]$
$f(s,t) = \frac{ \partial ^2F_{(X,X)}}{\partial s \partial t}(t,s) = \frac{\partial^2 (P[X \le t] \Large{1_{\{s=t\}}})}{\partial^2 t } $
Where
$\frac{\partial \Large{1_{\{s=t\}}}}{\partial t}$
$= lim_{\sigma \mapsto 0} \frac{1}{\sqrt{2\sigma \pi}} e^{\frac{-s^2}{\sigma^2}}$
More info about the derivative of the indicator function is here | What is the joint probability distribution of two same variables
$F_{(X,X)}(t,s) = P[X \le t,X \le s] = P[X \le \inf(s,t)]$
$f(s,t) = \frac{ \partial ^2F_{(X,X)}}{\partial s \partial t}(t,s) = \frac{\partial^2 (P[X \le t] \Large{1_{\{s=t\}}})}{\partial^2 t } $
Wh |
54,531 | What is the joint probability distribution of two same variables | Random Discrete Variables Case:
For Y=X,
then pij = 0 for i ≠ j, since the events X = xi and X = xj (with distinct values) are mutually exclusive,
and pij = pi for i = j, since xi occurring "at the same time" as itself simply has probability pi.
So E[X^2] = E[XX] = Sum (xi^2*pi) for both cases | What is the joint probability distribution of two same variables | Random Discrete Variables Case:
For Y=X,
then pij = 0 for i ≠ j, since the events X = xi and X = xj (with distinct values) are mutually exclusive,
and pij = pi for i = j, since xi occurring "at the same time" as itself simply has probability pi.
So E[X^2] = E[XX] = Sum (xi | What is the joint probability distribution of two same variables
Random Discrete Variables Case:
For Y=X,
then pij = 0 for i ≠ j, since the events X = xi and X = xj (with distinct values) are mutually exclusive,
and pij = pi for i = j, since xi occurring "at the same time" as itself simply has probability pi.
So E[X^2] = E[XX] = Sum (xi^2*pi) for both cases | What is the joint probability distribution of two same variables
Random Discrete Variables Case:
For Y=X,
then pij = 0 for i ≠ j, since the events X = xi and X = xj (with distinct values) are mutually exclusive,
and pij = pi for i = j, since xi occurring "at the same time" as itself simply has probability pi.
So E[X^2] = E[XX] = Sum (xi |
54,532 | Dependent Bernoulli trials | There are expressions you can write down, but I hope you realize how uninformative they are. Saying that the variables are not known to be independent, without saying anything else, gives no usable information. It's like saying that you have a friend whose name is not known to be Bob, then asking what you can say about your friend's height and age. So, here is a nearly meaningless restatement:
$$p(x_1,...,x_n) = \prod_i p(X_i=x_i|X_1=x_1,...,X_{i-1}=x_{i-1}).$$ | Dependent Bernoulli trials | There are expressions you can write down, but I hope you realize how uninformative they are. Saying that the variables are not known to be indpendent, without saying anything else, gives no usable inf | Dependent Bernoulli trials
There are expressions you can write down, but I hope you realize how uninformative they are. Saying that the variables are not known to be independent, without saying anything else, gives no usable information. It's like saying that you have a friend whose name is not known to be Bob, then asking what you can say about your friend's height and age. So, here is a nearly meaningless restatement:
$$p(x_1,...,x_n) = \prod_i p(X_i=x_i|X_1=x_1,...,X_{i-1}=x_{i-1}).$$ | Dependent Bernoulli trials
There are expressions you can write down, but I hope you realize how uninformative they are. Saying that the variables are not known to be indpendent, without saying anything else, gives no usable inf |
54,533 | Why is the ROC curve of a random classifier the line $\text{FPR}=\text{TPR}$? | If you classify a fraction $k$ of your cases as positive then, because of the randomness, the same fraction $k$ of cases which should be positive will be classified positive (true positives), and the same fraction $k$ of cases which should be negative will be classified positive (false positives).
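A small simulation in R (an added illustration, not part of the original answer; the 30% prevalence and uniform scores are arbitrary choices) makes the same point empirically: when the score is generated independently of the true label, the empirical TPR and FPR track each other at every threshold.
set.seed(4)
y <- rbinom(10000, 1, 0.3)               # true labels
score <- runif(10000)                    # random scores, independent of y
thresholds <- seq(0, 1, by = 0.1)
tpr <- sapply(thresholds, function(t) mean(score[y == 1] > t))
fpr <- sapply(thresholds, function(t) mean(score[y == 0] > t))
round(cbind(thresholds, tpr, fpr), 2)    # tpr and fpr agree up to simulation noise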
So the true positive rate and the false positive rate are the same. | Why is the ROC curve of a random classifier the line $\text{FPR}=\text{TPR}$? | If you classify a fraction $k$ of your cases as positive then, because of the randomness, the same fraction $k$ of cases which should be positive will be classified positive (true positives), and the | Why is the ROC curve of a random classifier the line $\text{FPR}=\text{TPR}$?
If you classify a fraction $k$ of your cases as positive then, because of the randomness, the same fraction $k$ of cases which should be positive will be classified positive (true positives), and the same fraction $k$ of cases which should be negative will be classified positive (false positives).
So the true positive rate and the false positive rate are the same. | Why is the ROC curve of a random classifier the line $\text{FPR}=\text{TPR}$?
If you classify a fraction $k$ of your cases as positive then, because of the randomness, the same fraction $k$ of cases which should be positive will be classified positive (true positives), and the |
54,534 | Why is the ROC curve of a random classifier the line $\text{FPR}=\text{TPR}$? | Identity
Let $T$ be the event that a case is positive, and $R$ the event a case is predicted to be positive by a classifier.
Since $T$ and $T^c$ are mutually exclusive and collectively exhaustive, we can decompose $\mathbb{P}(R)$ as follows:
$$\begin{split}
\mathbb{P}(R) & = \mathbb{P}(R|T)\mathbb{P}(T)+\mathbb{P}(R|T^c)\mathbb{P}(T^c)\\
& = \mathbb{P}(R|T)(1-\mathbb{P}(T^c))+\mathbb{P}(R|T^c)\mathbb{P}(T^c)\\
& = [\mathbb{P}(R|T^c)-\mathbb{P}(R|T)]\mathbb{P}(T^c)+\mathbb{P}(R|T).
\end{split}$$
The identity
$$\mathbb{P}(R)-\mathbb{P}(R|T) = [\mathbb{P}(R|T^c)-\mathbb{P}(R|T)]\mathbb{P}(T^c),$$
means that $\mathbb{P}(R)-\mathbb{P}(R|T) = 0$ if and only if $\mathbb{P}(R|T^c)-\mathbb{P}(R|T)=0$, for $\mathbb{P}(T^c)>0$.
Random guessing
To begin, inspect the left-hand side condition: $\mathbb{P}(R)=\mathbb{P}(R|T)$. This condition implies the independence between events $R$ and $T$. A classifier that is based on random guessing has to satisfy this condition.
Suppose the classifier is random guessing but it does not satisfy the left-hand side condition, i.e. $\mathbb{P}(R)\ne\mathbb{P}(R|T)$. Then, the guesses are biased for cases that are in $T$, i.e. the classifier guesses differently when encountering a positive case. This contradicts the notion of a random guess.
In other words, if the classifier is random guessing unconditionally, or conditionally on positive cases, it should perform equally well in both cases.
Line y = x
Next, inspect the right-hand side condition: $\mathbb{P}(R|T)=\mathbb{P}(R|T^c)$. This condition means that the true-positive rate equals the false-positive rate. Geometrically, the line $y=x$ on the ROC graph represents this right-hand side condition. This is because the ROC graph y-axis and x-axis represent the true-positive rate and false-positive rate respectively.
Equivalence
To conclude, the left-hand side condition represents a random-guessing classifier, and the right-hand side condition represents the line $y=x$. They are in fact equivalent. | Why is the ROC curve of a random classifier the line $\text{FPR}=\text{TPR}$? | Identity
Let $T$ be the event that a case is positive, and $R$ the event a case is predicted to be positive by a classifier.
Since $T$ and $T^c$ are mutually exclusive and collectively exhaustive, we | Why is the ROC curve of a random classifier the line $\text{FPR}=\text{TPR}$?
Identity
Let $T$ be the event that a case is positive, and $R$ the event a case is predicted to be positive by a classifier.
Since $T$ and $T^c$ are mutually exclusive and collectively exhaustive, we can decompose $\mathbb{P}(R)$ as follows:
$$\begin{split}
\mathbb{P}(R) & = \mathbb{P}(R|T)\mathbb{P}(T)+\mathbb{P}(R|T^c)\mathbb{P}(T^c)\\
& = \mathbb{P}(R|T)(1-\mathbb{P}(T^c))+\mathbb{P}(R|T^c)\mathbb{P}(T^c)\\
& = [\mathbb{P}(R|T^c)-\mathbb{P}(R|T)]\mathbb{P}(T^c)+\mathbb{P}(R|T).
\end{split}$$
The identity
$$\mathbb{P}(R)-\mathbb{P}(R|T) = [\mathbb{P}(R|T^c)-\mathbb{P}(R|T)]\mathbb{P}(T^c),$$
means that $\mathbb{P}(R)-\mathbb{P}(R|T) = 0$ if and only if $\mathbb{P}(R|T^c)-\mathbb{P}(R|T)=0$, for $\mathbb{P}(T^c)>0$.
Random guessing
To begin, inspect the left-hand side condition: $\mathbb{P}(R)=\mathbb{P}(R|T)$. This condition implies the independence between events $R$ and $T$. A classifier that is based on random guessing has to satisfy this condition.
Suppose the classifier is random guessing but it does not satisfy the left-hand side condition, i.e. $\mathbb{P}(R)\ne\mathbb{P}(R|T)$. Then, the guesses are biased for cases that are in $T$, i.e. the classifier guesses differently when encountering a positive case. This contradicts the notion of a random guess.
In other words, if the classifier is random guessing unconditionally, or conditionally on positive cases, it should perform equally well in both cases.
Line y = x
Next, inspect the right-hand side condition: $\mathbb{P}(R|T)=\mathbb{P}(R|T^c)$. This condition means that the true-positive rate equals the false-positive rate. Geometrically, the line $y=x$ on the ROC graph represents this right-hand side condition. This is because the ROC graph y-axis and x-axis represent the true-positive rate and false-positive rate respectively.
Equivalence
To conclude, the left-hand side condition represents a random-guessing classifier, and the right-hand side condition represents the line $y=x$. They are in fact equivalent. | Why is the ROC curve of a random classifier the line $\text{FPR}=\text{TPR}$?
Identity
Let $T$ be the event that a case is positive, and $R$ the event a case is predicted to be positive by a classifier.
Since $T$ and $T^c$ are mutually exclusive and collectively exhaustive, we |
54,535 | Why is the ROC curve of a random classifier the line $\text{FPR}=\text{TPR}$? | A general classifier produces a point in the ROC space rather than a curve.
In order to consider a curve you typically further assume a parameterized classifier class of the form $f_t(X) = \mathbb{1}[h(X)>t]$, where $h(X)$ is a continuous random variable.
Now $(P(h(X)>t|Y=0),P(h(X)>t|Y=1))$ is a curve in the ROC space (parametrized by t).
In this case and if in addition $h(X)$ is independent of $Y$ then
$$P(h(X)>t | Y=1) = P(h(X)>t) = P(h(X)>t|Y=0)$$
and the curve is the line $(P(h(X)>t),P(h(X)>t))$. | Why is the ROC curve of a random classifier the line $\text{FPR}=\text{TPR}$? | A general classifier produces a point in the ROC space rather than a curve.
In order to consider a curve you typically further assume a parameterized classifier class of the form $f_t(X) = \mathbb{1}[ | Why is the ROC curve of a random classifier the line $\text{FPR}=\text{TPR}$?
A general classifier produces a point in the ROC space rather than a curve.
In order to consider a curve you typically further assume a parameterized classifier class of the form $f_t(X) = \mathbb{1}[h(X)>t]$, where $h(X)$ is a continuous random variable.
Now $(P(h(X)>t|Y=0),P(h(X)>t|Y=1))$ is a curve in the ROC space (parametrized by t).
In this case and if in addition $h(X)$ is independent of $Y$ then
$$P(h(X)>t | Y=1) = P(h(X)>t) = P(h(X)>t|Y=0)$$
and the curve is the line $(P(h(X)>t),P(h(X)>t))$. | Why is the ROC curve of a random classifier the line $\text{FPR}=\text{TPR}$?
A general classifier produces a point in the ROC space rather than a curve.
In order to consider a curve you typically further assume a parameterized classifier class of the form $f_t(X) = \mathbb{1}[ |
54,536 | How to improve neural network sensitivity with a lopsided binary outcome? | Yes, this is common with an imbalance in training data and some types of relationships.
Suppose bad students pass a tough course with probability $0$, while good students pass the course with probability $1/3$. If the only information you get to observe is whether the student is good or bad, then your most accurate prediction is that the student will fail every time. You may learn from the training data that a good student is more likely to pass than a bad student, but you will never believe that a particular student is more likely to pass than to fail.
Is this really a problem? That depends on how you want to use the model. If you have to bet a dollar for each student on whether the student will pass or fail, it may be right to bet that each student will fail. If you feel it is more costly to predict A for something which is actually of class B than to predict B for something which is actually A, then you may want to incorporate that into the cost function during training. If you are trying to generate realistic-looking data, then you may want to use the model's outputs stochastically instead of generating the most likely outcome.
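One simple way to build such asymmetric costs into the fit (a hedged sketch, not the author's prescription; the data, the 5-to-1 cost ratio, and the use of a weighted logistic regression are all assumptions made for illustration) is to give the rare, costly class more weight:
set.seed(5)
x <- rnorm(1000)
y <- rbinom(1000, 1, plogis(-2 + x))                  # imbalanced outcome
w <- ifelse(y == 1, 5, 1)                             # pretend a missed positive costs 5x more
fit.plain <- glm(y ~ x, family = binomial)
fit.weighted <- glm(y ~ x, family = binomial, weights = w)
mean(predict(fit.plain, type = "response") > 0.5)     # few observations cross the 0.5 cutoff
mean(predict(fit.weighted, type = "response") > 0.5)  # more do once positives are up-weighted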
In some cases there is enough observable information, but the model is not learning this. For example, if you observe latitude and longitude, and try to classify the location as "Delaware" vs. "Not Delaware," then your classifier might first learn that Delaware is negligibly small. You can try things such as changing the cost function (such as from squared error to cross-entropy loss) which severely punishes assigning a low probability to the correct class. You can select a more balanced subset of the data. If you rebalance the data, you could include equal numbers of points in and out of Delaware, or you could focus on points which a simpler classifier believes are close to Delaware. This may trade accuracy in areas you don't believe are close to Delaware for accuracy near the known borders. If you concentrate on getting the border with Maryland right, you might miss the fact that Delaware isn't connected since it includes a bit of land across the Delaware River. | How to improve neural network sensitivity with a lopsided binary outcome? | Yes, this is common with an imbalance in training data and some types of relationships.
Suppose bad students pass a tough course with probability $0$, while good students pass the course with probabil | How to improve neural network sensitivity with a lopsided binary outcome?
Yes, this is common with an imbalance in training data and some types of relationships.
Suppose bad students pass a tough course with probability $0$, while good students pass the course with probability $1/3$. If the only information you get to observe is whether the student is good or bad, then your most accurate prediction is that the student will fail every time. You may learn from the training data that a good student is more likely to pass than a bad student, but you will never believe that a particular student is more likely to pass than to fail.
Is this really a problem? That depends on how you want to use the model. If you have to bet a dollar for each student on whether the student will pass or fail, it may be right to bet that each student will fail. If you feel it is more costly to predict A for something which is actually of class B than to predict B for something which is actually A, then you may want to incorporate that into the cost function during training. If you are trying to generate realistic-looking data, then you may want to use the model's outputs stochastically instead of generating the most likely outcome.
In some cases there is enough observable information, but the model is not learning this. For example, if you observe latitude and longitude, and try to classify the location as "Delaware" vs. "Not Delaware," then your classifier might first learn that Delaware is negligibly small. You can try things such as changing the cost function (such as from squared error to cross-entropy loss) which severely punishes assigning a low probability to the correct class. You can select a more balanced subset of the data. If you rebalance the data, you could include equal numbers of points in and out of Delaware, or you could focus on points which a simpler classifier believes are close to Delaware. This may trade accuracy in areas you don't believe are close to Delaware for accuracy near the known borders. If you concentrate on getting the border with Maryland right, you might miss the fact that Delaware isn't connected since it includes a bit of land across the Delaware River. | How to improve neural network sensitivity with a lopsided binary outcome?
Yes, this is common with an imbalance in training data and some types of relationships.
Suppose bad students pass a tough course with probability $0$, while good students pass the course with probabil |
54,537 | Does a zig-zagging residual plot mean that normality has been violated? | +1 to @StatsStudent; your basic issue here is that you have few data. However, it might help to talk a little about what those plots are there for. Of course, you can get many things from looking at a plot, but those are the standard lm() diagnostic plots in R, so I will mention a conventional use for each.
Residuals vs Fitted:
This plot can be used to assess model misspecification. For example, if you have only one covariate, you can use this to detect if the wrong functional form has been used. Imagine if the residuals formed a curve, with the residuals below the dotted gray line on the sides, and above the line in the center, that would suggest you need to add a squared term to capture a curvilinear relationship.
R has helpfully laid a loess line over the residuals to make it easier to see whatever structure there may be in the residuals. When the smoothing bandwidth parameter, $\alpha$, is small, the line will bounce around much more, whereas when it's large, the loess fit will tend to be fairly straight no matter how curvy the data are. In your case, you have few data, and $\alpha$ is too small, so the line zig-zags from one point to the next, but this only means that the default setting for the loess line is miscalibrated for very small datasets. You don't see the kind of systematic deviations I described above.
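If the zig-zag of the default smoother bothers you, you can redraw the first plot yourself with a wider smoothing span (a sketch with simulated stand-in data; substitute your own fitted lm object for fit):
set.seed(7)
x <- rnorm(30); y <- 2 + 0.5 * x + rnorm(30)                  # stand-in data only
fit <- lm(y ~ x)
plot(fitted(fit), resid(fit), xlab = "Fitted values", ylab = "Residuals")
abline(h = 0, lty = 3, col = "gray")
lines(lowess(fitted(fit), resid(fit), f = 0.9), col = "red")  # f sets a wider span than the default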
Normal Q-Q:
Your next plot is a qq-plot. This is the plot you should primarily focus on to determine if your residuals are roughly normally distributed. (Note that only the residuals need to be normally distributed.) Here we see that your data track the dotted gray line very well, so there is no indication that your residuals deviate from normality. (There is one potentially interesting point, #29, that deviates from the line, we'll come back to that in a moment.)
Scale-Location:
The scale-location plot can help you determine if there is substantial heteroskedasticity. What you are looking for here is typically if the plot is fan-shaped, with one side more spread out than the other. You don't have that. (Once again, the loess fit goes up in the center, but you have more data there, so they ought to spread further, and $\alpha$ is again too small for your $N$.)
Residuals vs Leverage:
This graph helps you determine if some of your data are driving your results. That is, you don't want to draw the conclusion that is based on just a couple of data points, where the rest of your data don't suggest that conclusion. This is a question of leverage. Just because a datum has a large residual value doesn't mean it exerts much influence on the estimated slope, it depends on where that datum lies along the x-axis. Data near the mean of x exert much less leverage even if their residual values are large, whereas smaller residuals can exert substantial leverage if they are far enough away from the mean of x. What we see here is that all your data have Cook's distance values less than .5 (including point #29), so you don't seem to have a problem with that, either.
In sum, these plots give you reason to have confidence in your model. | Does a zig-zagging residual plot mean that normality has been violated? | +1 to @StatsStudent; your basic issue here is that you have few data. However, it might help to talk a little about what those plots are there for. Of course, you can get many things from looking at | Does a zig-zagging residual plot mean that normality has been violated?
+1 to @StatsStudent; your basic issue here is that you have few data. However, it might help to talk a little about what those plots are there for. Of course, you can get many things from looking at a plot, but those are the standard lm() diagnostic plots in R, so I will mention a conventional use for each.
Residuals vs Fitted:
This plot can be used to assess model misspecification. For example, if you have only one covariate, you can use this to detect if the wrong functional form has been used. Imagine if the residuals formed a curve, with the residuals below the dotted gray line on the sides, and above the line in the center, that would suggest you need to add a squared term to capture a curvilinear relationship.
R has helpfully laid a loess line over the residuals to make it easier to see whatever structure there may be in the residuals. When the smoothing bandwidth parameter, $\alpha$, is small, the line will bounce around much more, whereas when it's large, the loess fit will tend to be fairly straight no matter how curvy the data are. In your case, you have few data, and $\alpha$ is too small, so the line zig-zags from one point to the next, but this only means that the default setting for the loess line is miscalibrated for very small datasets. You don't see the kind of systematic deviations I described above.
Normal Q-Q:
Your next plot is a qq-plot. This is the plot you should primarily focus on to determine if your residuals are roughly normally distributed. (Note that only the residuals need to be normally distributed.) Here we see that your data track the dotted gray line very well, so there is no indication that your residuals deviate from normality. (There is one potentially interesting point, #29, that deviates from the line, we'll come back to that in a moment.)
Scale-Location:
The scale-location plot can help you determine if there is substantial heteroskedasticity. What you are looking for here is typically if the plot is fan-shaped, with one side more spread out than the other. You don't have that. (Once again, the loess fit goes up in the center, but you have more data there, so they ought to spread further, and $\alpha$ is again too small for your $N$.)
Residuals vs Leverage:
This graph helps you determine if some of your data are driving your results. That is, you don't want to draw the conclusion that is based on just a couple of data points, where the rest of your data don't suggest that conclusion. This is a question of leverage. Just because a datum has a large residual value doesn't mean it exerts much influence on the estimated slope, it depends on where that datum lies along the x-axis. Data near the mean of x exert much less leverage even if their residual values are large, whereas smaller residuals can exert substantial leverage if they are far enough away from the mean of x. What we see here is that all your data have Cook's distance values less than .5 (including point #29), so you don't seem to have a problem with that, either.
In sum, these plots give you reason to have confidence in your model. | Does a zig-zagging residual plot mean that normality has been violated?
+1 to @StatsStudent; your basic issue here is that you have few data. However, it might help to talk a little about what those plots are there for. Of course, you can get many things from looking at |
54,538 | Does a zig-zagging residual plot mean that normality has been violated? | I don't see any cause for concern here -- No assumptions are obviously violated. But this is often difficult to confirm with so few data points. I think you are ok. | Does a zig-zagging residual plot mean that normality has been violated? | I don't see any cause for concern here -- No assumptions are obviously violated. But this is often difficult to confirm with so few data points. I think you are ok. | Does a zig-zagging residual plot mean that normality has been violated?
I don't see any cause for concern here -- No assumptions are obviously violated. But this is often difficult to confirm with so few data points. I think you are ok. | Does a zig-zagging residual plot mean that normality has been violated?
I don't see any cause for concern here -- No assumptions are obviously violated. But this is often difficult to confirm with so few data points. I think you are ok. |
54,539 | Interpreting multiple regression coefficients with 2 continuous variables interacting and 2 categorical variables interacting | As @gung said, it would help if you gave your full equation and DV, but, here, if the interaction between sex-female and mobility is -10.1, it means that the effect of high mobility on the dependent variable is 10.1 units less for women than men. Similarly, the effect of being female on the DV is 10.1 units less for high mobility people than for low.
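For concreteness, such a model can be specified in R as sketched below (the data frame and its values are invented just to make the call run; the fitted numbers will not match the -10.1 and 1.3 discussed here):
set.seed(6)
dat <- data.frame(
  sex = factor(sample(c("male", "female"), 200, TRUE)),
  mobility = factor(sample(c("low", "high"), 200, TRUE)),
  weight = rnorm(200, 70, 10),
  IQ = rnorm(200, 100, 15)
)
dat$y <- rnorm(200)                                   # placeholder response
fit <- lm(y ~ sex * mobility + weight * IQ, data = dat)
coef(summary(fit))                                    # the sexmale:mobilitylow and weight:IQ rows are the interactions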
For continuous variables, it is much the same, except that it is per unit of the other IV. So, 1.3 for the interaction between weight and IQ means that the effect of IQ on the DV is 1.3 units higher for each increase of one unit (pound? kilogram?) of weight, and the effect of weight is 1.3 units higher for each increase of 1 point in IQ. In other words, the effect is more positive for people who are both smart and heavy than for people who are one or the other. | Interpreting multiple regression coefficients with 2 continuous variables interacting and 2 categori | As @gung said, it would help if you gave your full equation and DV, but, here, if the interaction between sex-female and mobility is -10.1, it means that the effect of high mobility on the dependent v | Interpreting multiple regression coefficients with 2 continuous variables interacting and 2 categorical variables interacting
As @gung said, it would help if you gave your full equation and DV, but, here, if the interaction between sex-female and mobility is -10.1, it means that the effect of high mobility on the dependent variable is 10.1 units less for women than men. Similarly, the effect of being female on the DV is 10.1 units less for high mobility people than for low.
For continuous variables, it is much the same, except that it is per unit of the other IV. So, 1.3 for the interaction between weight and IQ means that the effect of IQ on the DV is 1.3 units higher for each increase of one unit (pound? kilogram?) of weight, and the effect of weight is 1.3 units higher for each increase of 1 point in IQ. In other words, the effect is more positive for people who are both smart and heavy than for people who are one or the other. | Interpreting multiple regression coefficients with 2 continuous variables interacting and 2 categori
As @gung said, it would help if you gave your full equation and DV, but, here, if the interaction between sex-female and mobility is -10.1, it means that the effect of high mobility on the dependent v |
54,540 | Modifying the time granularity of a state sequence | You can simply select the corresponding columns. In your case, this should be columns 1, 6, 11, ... You can get the column indices using the "seq" function:
column.5min <- seq(from = 1, to = 1440, by=5)
Now you can select the column, for instance using:
myseq5min <- myseq[, column.5min]
Here is an example using the "mvad" data set and selecting the first state of each year.
## Loading the library
library(TraMineR)
data(mvad)
## Defining sequence properties
mvad.alphabet <- c("employment", "FE", "HE", "joblessness", "school", "training")
mvad.lab <- c("employment", "further education", "higher education", "joblessness", "school", "training")
mvad.shortlab <- c("EM", "FE", "HE", "JL", "SC", "TR")
## The state sequence object.
mvad.seq <- seqdef(mvad, 17:86, alphabet = mvad.alphabet, states = mvad.shortlab, labels = mvad.lab, xtstep = 6)
## Now select the column for each year (every twelve months)
mvad.seq.year <- mvad.seq[, seq(from=1, to=70, by=12)]
seqdplot(mvad.seq.year) | Modifying the time granularity of a state sequence | You can simply select the corresponding columns. In your case, this should be columns 1, 6, 11, ... You can get the column indices using the "seq" function:
column.5min <- seq(from = 1, to = 1440, by= | Modifying the time granularity of a state sequence
You can simply select the corresponding columns. In your case, this should be columns 1, 6, 11, ... You can get the column indices using the "seq" function:
column.5min <- seq(from = 1, to = 1440, by=5)
Now you can select the column, for instance using:
myseq5min <- myseq[, column.5min]
Here is an example using the "mvad" data set and selecting the first state of each year.
## Loading the library
library(TraMineR)
data(mvad)
## Defining sequence properties
mvad.alphabet <- c("employment", "FE", "HE", "joblessness", "school", "training")
mvad.lab <- c("employment", "further education", "higher education", "joblessness", "school", "training")
mvad.shortlab <- c("EM", "FE", "HE", "JL", "SC", "TR")
## The state sequence object.
mvad.seq <- seqdef(mvad, 17:86, alphabet = mvad.alphabet, states = mvad.shortlab, labels = mvad.lab, xtstep = 6)
## Now select the column for each year (every twelve months)
mvad.seq.year <- mvad.seq[, seq(from=1, to=70, by=12)]
seqdplot(mvad.seq.year) | Modifying the time granularity of a state sequence
You can simply select the corresponding columns. In your case, this should be columns 1, 6, 11, ... You can get the column indices using the "seq" function:
column.5min <- seq(from = 1, to = 1440, by= |
54,541 | Modifying the time granularity of a state sequence | An alternative solution is to use the seqgranularity function provided by the TraMineRextras package.
This function changes the time granularity using different methods, currently either "first" state or "last" state, but other methods such as choosing the most frequent state in the aggregated spell should be implemented in the future.
For the example above, you would just use
mvadg.seq.year <- seqgranularity(mvad.seq, tspan=12, method = "first")
TraMineRextras should be made available on the CRAN in the near future. In the meantime, if you have the latest version of R, you can just install it from R-Forge with
install.packages("TraMineRextras", repos="http://R-Forge.R-project.org") | Modifying the time granularity of a state sequence | An alternative solution is to use the seqgranularity function provided by the TraMineRextras package.
This function changes the time granularity using different methods, currently either "first" state | Modifying the time granularity of a state sequence
An alternative solution is to use the seqgranularity function provided by the TraMineRextras package.
This function changes the time granularity using different methods, currently either "first" state or "last" state, but other methods such as choosing the most frequent state in the aggregated spell should be implemented in the future.
For the example above, you would just use
mvadg.seq.year <- seqgranularity(mvad.seq, tspan=12, method = "first")
TraMineRextras should be made available on the CRAN in the near future. In the meantime, if you have the latest version of R, you can just install it from R-Forge with
install.packages("TraMineRextras", repos="http://R-Forge.R-project.org") | Modifying the time granularity of a state sequence
An alternative solution is to use the seqgranularity function provided by the TraMineRextras package.
This function changes the time granularity using different methods, currently either "first" state |
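A hedged follow-up sketch, assuming the mvad.seq object built in the previous answer: the other currently available method keeps the last state of each aggregated year instead of the first.
mvadg.seq.year.last <- seqgranularity(mvad.seq, tspan = 12, method = "last")
seqdplot(mvadg.seq.year.last)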
54,542 | What makes a GLM estimate the means differently from the actual sample means? | Let us have some data (shown below), predictand Y and two factors X1 and X2. X1 has 2 groups, and X2 has 3 groups. (In this particular example, the design is incomplete though, because combination X1=2 & X2=3 is absent.)
Let us run GLM command (shown). The settings are default: full factorial model, SS III type of squares, intercept present. The command requests to print out observed means for all groups of factors as well as for their combinations and also to print out the corresponding estimated means. It also saves predicted values for Y (shown below as "pre").
UNIANOVA y BY x1 x2
/METHOD=SSTYPE(3)
/INTERCEPT=INCLUDE
/SAVE=PRED
/EMMEANS=TABLES(OVERALL)
/EMMEANS=TABLES(x1)
/EMMEANS=TABLES(x2)
/EMMEANS=TABLES(x1*x2)
/PRINT=DESCRIPTIVE
/CRITERIA=ALPHA(.05)
/DESIGN=x1 x2 x1*x2.
y x1 x2 pre
.725581 1 1 .725581
-.147728 1 2 .046662
.496867 1 2 .046662
-.985803 1 2 .046662
-.139656 1 2 .046662
-.381405 1 2 .046662
1.437696 1 2 .046662
.039809 1 3 -.748909
-1.537626 1 3 -.748909
-.402714 2 1 .159152
1.900394 2 1 .159152
.883087 2 1 .159152
-1.744157 2 1 .159152
1.009084 2 2 .288968
1.169746 2 2 .288968
.579917 2 2 .288968
-1.022533 2 2 .288968
-.587685 2 2 .288968
.814123 2 2 .288968
.003084 2 2 .288968
-1.068938 2 2 .288968
-.175502 2 2 .288968
1.290405 2 2 .288968
1.166946 2 2 .288968
-.645831 2 3 -.645831
1.061533 3 1 1.061533
1.143789 3 2 .676997
.210205 3 2 .676997
-.643339 3 3 -.360148
-.076957 3 3 -.360148
Let us compare the observed and estimated means printed out (I don't show these tables here). First, we can notice that on the lowest (cell) level of the design, i.e. on the level of combinations of groups X1 * X2, the estimated means equal the observed means. This is because we used a saturated, full factorial model including all possible interactions between the factors. Second, we can see that when it comes to means on the higher, marginal level, estimated means do not (generally) equal observed means. For example, the observed marginal mean for X1=1 is -0.05470 and the corresponding estimated mean is 0.00778.
Can we show where this difference stems from? Yes. The observed marginal mean corresponds to the simple mean of the predicted values. For X1=1, this is mean(.725581,.046662,.046662,.046662,.046662,.046662,.046662,-.748909,-.748909) = -0.05470 which is the same as the simple mean of the observed values mean(.725581,-.147728,.496867,-.985803,-.139656,-.381405,1.437696,.039809,-1.537626) = -0.05470. On the other hand, the estimated marginal mean is given by averaging the predicted values with the collapsed groups weighted equally. That is, X2=1, X2=2, X2=3 are given equal weight despite their unequal frequencies, and so 0.00778 = mean(.725581,.046662,-.748909). You may conclude yourself that if the design had been balanced - cells contained equal frequencies - estimated and observed means would have been equal to each other.
That was a simple explanation for the simple case (by "simple case" I mean defaults such as Type III SS, intercept, no covariates). You may consult "SPSS Algorithms" help document to read about how estimated, expected means are actually computed in the general case. | What makes a GLM estimate the means differently from the actual sample means? | Let us have some data (shown below), predictand Y and two factors X1 and X2. X1 has 2 groups, and X2 has 3 groups. (In this particular example, the design is incomplete though, because combination X1= | What makes a GLM estimate the means differently from the actual sample means?
Let us have some data (shown below), predictand Y and two factors X1 and X2. X1 has 2 groups, and X2 has 3 groups. (In this particular example, the design is incomplete though, because combination X1=2 & X2=3 is absent.)
Let us run GLM command (shown). The settings are default: full factorial model, SS III type of squares, intercept present. The command requests to print out observed means for all groups of factors as well as for their combinations and also to print out the corresponding estimated means. It also saves predicted values for Y (shown below as "pre").
UNIANOVA y BY x1 x2
/METHOD=SSTYPE(3)
/INTERCEPT=INCLUDE
/SAVE=PRED
/EMMEANS=TABLES(OVERALL)
/EMMEANS=TABLES(x1)
/EMMEANS=TABLES(x2)
/EMMEANS=TABLES(x1*x2)
/PRINT=DESCRIPTIVE
/CRITERIA=ALPHA(.05)
/DESIGN=x1 x2 x1*x2.
y x1 x2 pre
.725581 1 1 .725581
-.147728 1 2 .046662
.496867 1 2 .046662
-.985803 1 2 .046662
-.139656 1 2 .046662
-.381405 1 2 .046662
1.437696 1 2 .046662
.039809 1 3 -.748909
-1.537626 1 3 -.748909
-.402714 2 1 .159152
1.900394 2 1 .159152
.883087 2 1 .159152
-1.744157 2 1 .159152
1.009084 2 2 .288968
1.169746 2 2 .288968
.579917 2 2 .288968
-1.022533 2 2 .288968
-.587685 2 2 .288968
.814123 2 2 .288968
.003084 2 2 .288968
-1.068938 2 2 .288968
-.175502 2 2 .288968
1.290405 2 2 .288968
1.166946 2 2 .288968
-.645831 2 3 -.645831
1.061533 3 1 1.061533
1.143789 3 2 .676997
.210205 3 2 .676997
-.643339 3 3 -.360148
-.076957 3 3 -.360148
Let us compare the observed and estimated means printed out (I don't show these tables here). First, we can notice that on the lowest (cell) level of the design, i.e. on the level of combinations of groups X1 * X2, the estimated means equal the observed means. This is because we used a saturated, full factorial model including all possible interactions between the factors. Second, we can see that when it comes to means on the higher, marginal level, estimated means do not (generally) equal observed means. For example, the observed marginal mean for X1=1 is -0.05470 and the corresponding estimated mean is 0.00778.
Can we show where this difference stems from? Yes. The observed marginal mean corresponds to the simple mean of the predicted values. For X1=1, this is mean(.725581,.046662,.046662,.046662,.046662,.046662,.046662,-.748909,-.748909) = -0.05470 which is the same as the simple mean of the observed values mean(.725581,-.147728,.496867,-.985803,-.139656,-.381405,1.437696,.039809,-1.537626) = -0.05470. On the other hand, the estimated marginal mean is given by averaging the predicted values with the collapsed groups weighted equally. That is, X2=1, X2=2, X2=3 are given equal weight despite their unequal frequencies, and so 0.00778 = mean(.725581,.046662,-.748909). You may conclude yourself that if the design had been balanced - cells contained equal frequencies - estimated and observed means would have been equal to each other.
That was a simple explanation for the simple case (by "simple case" I mean defaults such as Type III SS, intercept, no covariates). You may consult "SPSS Algorithms" help document to read about how estimated, expected means are actually computed in the general case. | What makes a GLM estimate the means differently from the actual sample means?
Let us have some data (shown below), predictand Y and two factors X1 and X2. X1 has 2 groups, and X2 has 3 groups. (In this particular example, the design is incomplete though, because combination X1= |
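The same bookkeeping can be reproduced outside SPSS. A hedged R sketch, where df stands for any unbalanced data frame with a response y and factors x1 and x2 (not necessarily the exact data above):
# observed marginal means: each case counts once, so cells are weighted by their frequencies
with(df, tapply(y, x1, mean))

# estimated (equal-weight) marginal means: average the cell means so every x2 level
# contributes equally, regardless of how many cases it contains
cell_means <- with(df, tapply(y, list(x1, x2), mean))
rowMeans(cell_means, na.rm = TRUE)
With a balanced design the two sets of numbers coincide; with unequal (or empty) cells they diverge, which is exactly the discrepancy discussed above.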
54,543 | What makes a GLM estimate the means differently from the actual sample means? | From my perspective this is a completely legitimate question, which is in fact asked by many of my customers.
The mismatch can be attributed to the following:
Missing Data. SPSS by default excludes missing data case-wise. So if you happen to have missing observations in one factor, the cases on which the means are computed for the model (and, say, 2nd factor) are different (smaller in number) compared to those on which descriptive statistics are based.
Marginal means are computed under the assumption that each cell (i.e. combination of factor levels) has equal weight. This is in accordance with the more basic assumptions of ANOVA, and it is basically how ANOVA sees the data. So, if you have e.g. sex on two levels and education on two levels, the marginal mean for males is computed as $\frac{(\text{Mean of males with high education}) + (\text{Mean of males with college education})}{2}$ rather than
$\frac{(\text{Mean of males with high edu}) \cdot (\text{# males with high edu}) + (\text{Mean of males with college edu}) \cdot (\text{# males with college edu})}{\text{Number of males with any edu}}$, which is equivalent to the simple mean of males with any education.
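A quick numeric illustration with made-up counts: if 10 males have high education with mean 4 and 90 males have college education with mean 8, the equal-weight marginal mean is $(4+8)/2 = 6$, while the observed (frequency-weighted) mean is $(10 \cdot 4 + 90 \cdot 8)/100 = 7.6$. The two only coincide when the cell counts are equal.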
This second point is equivalent to the answers given by
ttnphns and Kavin Kane, but in (my opinion) easier language. | What makes a GLM estimate the means differently from the actual sample means? | From my perspective this is completely legitimate question, which is in fact asked by many of my customers.
The mismatch can be attributed to the following:
Missing Data. SPSS by default excludes mis | What makes a GLM estimate the means differently from the actual sample means?
From my perspective this is a completely legitimate question, which is in fact asked by many of my customers.
The mismatch can be attributed to the following:
Missing Data. SPSS by default excludes missing data case-wise. So if you happen to have missing observations in one factor, the cases on which the means are computed for the model (and, say, 2nd factor) are different (smaller in number) compared to those on which descriptive statistics are based.
Marginal means are computed under the assumption that each cell (i.e. combination of factor levels) has equal weight. This is in accordance with the more basic assumptions of ANOVA, and it is basically how ANOVA sees the data. So, if you have e.g. sex on two levels and education on two levels, the marginal mean for males is computed as $\frac{(\text{Mean of males with high education}) + (\text{Mean of males with college education})}{2}$ rather than
$\frac{(\text{Mean of males with high edu}) \cdot (\text{# males with high edu}) + (\text{Mean of males with college edu}) \cdot (\text{# males with college edu})}{\text{Number of males with any edu}}$, which is equivalent to the simple mean of males with any education.
This second point is equivalent to the answers given by
ttnphns and Kavin Kane, but in (my opinion) easier language. | What makes a GLM estimate the means differently from the actual sample means?
From my perspective this is a completely legitimate question, which is in fact asked by many of my customers.
The mismatch can be attributed to the following:
Missing Data. SPSS by default excludes mis |
54,544 | What makes a GLM estimate the means differently from the actual sample means? | If you fitted a model with only one factor with the same number of levels as means you want to estimate (well, maybe one less as you have the intercept term), then the estimated means should be exactly the observed means. When you add other covariates (variables in the model), then when you estimate a least squares mean, balance is usually assumed. This means that for categorical covariates, the estimate assumes that in the population the number of individuals would be evenly spread across the levels of a covariate. In SAS, there is an option - OBSERVEDMARGINS or OM - that will give estimates based on the proportions observed in your sample, and these will usually be a lot closer to the sample means.
You have not mentioned what you are doing with the mixed or random effects. I wouldn't expect these to have a huge effect on the estimates of the means though.
I would say one thing though-if you don't understand what the methods you are using- get the help of a statistician. These things are easy to get wrong. | What makes a GLM estimate the means differently from the actual sample means? | If you fitted a model with only one factor with the same number of levels as means you want to estimate (well, maybe one less as you have the intercept term), then the estimated means should be exactl | What makes a GLM estimate the means differently from the actual sample means?
If you fitted a model with only one factor with the same number of levels as means you want to estimate (well, maybe one less as you have the intercept term), then the estimated means should be exactly the observed means. When you add other covariates (variables in the model), then when you estimate a least squares mean, balance is usually assumed. This means that for categorical covariates, the estimate assumes that in the population the number of individuals would be evenly spread across the levels of a covariate. In SAS, there is an option - OBSERVEDMARGINS or OM - that will give estimates based on the proportions observed in your sample, and these will usually be a lot closer to the sample means.
You have not mentioned what you are doing with the mixed or random effects. I wouldn't expect these to have a huge effect on the estimates of the means though.
I would say one thing though-if you don't understand what the methods you are using- get the help of a statistician. These things are easy to get wrong. | What makes a GLM estimate the means differently from the actual sample means?
If you fitted a model with only one factor with the same number of levels as means you want to estimate (well, maybe one less as you have the intercept term), then the estimated means should be exactl |
54,545 | How many random permutations to cover all possible permutations? | Permutation usually refers to something else, so it's probably better to call your problem "random binary words" or something similar.
The question of how long it takes to get at least one representative of each type is called the Coupon Collector Problem. If you assume that all binary words of length $N$ are equally likely, then there are $2^N$ types of coupons. You can write the time to collect all coupons as a sum of the times to collect the $i$th new coupon, a sum of independent geometric random variables. So, the expected number of coupons it takes to collect them all is $2^N \sum_{i=1}^{2^N} 1/i \sim 2^N \log 2^N$, or more precisely $2^N \log 2^N + 2^N \gamma + 1/2 + o(1)$. For $N=10$ this is about $7689$. The variance is $2^{2N} \sum_{i=1}^{2^N} 1/i^2 \approx 2^{2N} \frac {\pi^2}{6}$, so the standard deviation is about $2^N \frac{\pi}{\sqrt{6}}$. For $N=10$ this is about $1313$. Note that a normal approximation is NOT appropriate here.
One crude bound is Chebyshev's inequality, which says that the chance that a random variable is more than $k$ standard deviations away from the mean is at most $1/k^2$, and the similar Cantelli's inequality is that the chance that a random variable is at least $k$ standard deviations above the mean is at most $1/(k^2+1)$. This gives you an upper bound of about $7689 + 1313 \approx 9002$ for the median, and $7689 + 1313\sqrt{19} \sim 13412$ for the $95$th percentile.
If these bounds are not good enough, there are more precise but more complicated asymptotics known. Another approach, suitable perhaps up to $N = 25$, is to compute the exact distribution numerically using the representation as a sum of independent geometric distributions. | How many random permutations to cover all possible permutations? | Permutation usually refers to something else, so it's probably better to call your problem "random binary words" or something similar.
The question of how long it takes to get at least one representa | How many random permutations to cover all possible permutations?
Permutation usually refers to something else, so it's probably better to call your problem "random binary words" or something similar.
The question of how long it takes to get at least one representative of each type is called the Coupon Collector Problem. If you assume that all binary words of length $N$ are equally likely, then there are $2^N$ types of coupons. You can write the time to collect all coupons as a sum of the times to collect the $i$th new coupon, a sum of independent geometric random variables. So, the expected number of coupons it takes to collect them all is $2^N \sum_{i=1}^{2^N} 1/i \sim 2^N \log 2^N$, or more precisely $2^N \log 2^N + 2^N \gamma + 1/2 + o(1)$. For $N=10$ this is about $7689$. The variance is $2^{2N} \sum_{i=1}^{2^N} 1/i^2 \approx 2^{2N} \frac {\pi^2}{6}$, so the standard deviation is about $2^N \frac{\pi}{\sqrt{6}}$. For $N=10$ this is about $1313$. Note that a normal approximation is NOT appropriate here.
One crude bound is Chebyshev's inequality, which says that the chance that a random variable is more than $k$ standard deviations away from the mean is at most $1/k^2$, and the similar Cantelli's inequality is that the chance that a random variable is at least $k$ standard deviations above the mean is at most $1/(k^2+1)$. This gives you an upper bound of about $7689 + 1313 \approx 9002$ for the median, and $7689 + 1313\sqrt{19} \sim 13412$ for the $95$th percentile.
If these bounds are not good enough, there are more precise but more complicated asymptotics known. Another approach, suitable perhaps up to $N = 25$, is to compute the exact distribution numerically using the representation as a sum of independent geometric distributions. | How many random permutations to cover all possible permutations?
Permutation usually refers to something else, so it's probably better to call your problem "random binary words" or something similar.
The question of how long it takes to get at least one representa |
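A hedged R sketch of both the exact expectation and a small simulation (N = 10, so $2^N = 1024$ coupon types):
n_types <- 2^10
n_types * sum(1 / (1:n_types))          # exact mean, about 7689

set.seed(1)
collect_all <- function(k) {
  seen <- logical(k); draws <- 0
  while (!all(seen)) {
    draws <- draws + 1
    seen[sample.int(k, 1)] <- TRUE
  }
  draws
}
sims <- replicate(200, collect_all(n_types))
c(mean = mean(sims), sd = sd(sims))     # should land near 7689 and 1313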
54,546 | Naive Bayes fails with a perfect predictor | Note that dat$X in your code is a numeric variable. The NaiveBayes implementation in klaR for numeric predictor variables calculates the mean and standard deviations of the predictor variable at each outcome level. Rather than dealing with standard deviations of 0, the klaR authors decided to throw an error.
If you change dat$X to a factor, it will create classification tables you are likely expecting. Alternatively, the naiveBayes function in the e1071 package will return distributions with a standard deviation of 0 if you prefer that over throwing errors, or you can delete the stop(...) code towards the end of klaR:::NaiveBayes.default (though that might cause problems with prediction and plotting functions in klaR). | Naive Bayes fails with a perfect predictor | Note that dat$X in your code is a numeric variable. The NaiveBayes implementation in klaR for numeric predictor variables calculates the mean and standard deviations of the predictor variable at each | Naive Bayes fails with a perfect predictor
Note that dat$X in your code is a numeric variable. The NaiveBayes implementation in klaR for numeric predictor variables calculates the mean and standard deviations of the predictor variable at each outcome level. Rather than dealing with standard deviations of 0, the klaR authors decided to throw an error.
If you change dat$X to a factor, it will create classification tables you are likely expecting. Alternatively, the naiveBayes function in the e1071 package will return distributions with a standard deviation of 0 if you prefer that over throwing errors, or you can delete the stop(...) code towards the end of klaR:::NaiveBayes.default (though that might cause problems with prediction and plotting functions in klaR). | Naive Bayes fails with a perfect predictor
Note that dat$X in your code is a numeric variable. The NaiveBayes implementation in klaR for numeric predictor variables calculates the mean and standard deviations of the predictor variable at each |
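A hedged sketch of the two workarounds, assuming a data frame dat with a factor outcome Y and the perfectly separating predictor X from the question:
library(klaR)
dat$Xf <- factor(dat$X)                 # treat the perfect predictor as categorical
m_klar <- NaiveBayes(Y ~ Xf, data = dat)
m_klar$tables                           # conditional probability tables, no error thrown

library(e1071)
m_e1071 <- naiveBayes(Y ~ X, data = dat)  # keeps X numeric; zero SDs are simply reported
m_e1071$tables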
54,547 | Comparing P value from t test vs. Mann-Whitney test | It depends.
If you assume that the data are sampled from Gaussian distributions, then the t test has a bit more power (depending on sample size) so will -- on average -- have a lower P value. But only on average. For any particular set of data, the t test may give a higher or a lower P value.
If you don't assume the data are sampled from Gaussian distributions, then the Mann-Whitney test may have more power (depending on how far the distribution is from Gaussian). If so, you'd expect the Mann-Whitney test to have the lower P value on average, but the results are not predictable for any particular set of data.
What does "on average" mean? Perform both tests on many sets of (simulated) data. Compute the average P value from the t test, and also the average P value from the Mann-Whitney test. Now compare the two averages. | Comparing P value from t test vs. Mann-Whitney test | It depends.
If you assume that the data are sampled from Gaussian distributions, then the t test has a bit more power (depending on sample size) so will -- on average -- have a lower P value. But only | Comparing P value from t test vs. Mann-Whitney test
It depends.
If you assume that the data are sampled from Gaussian distributions, then the t test has a bit more power (depending on sample size) so will -- on average -- have a lower P value. But only on average. For any particular set of data, the t test may give a higher or a lower P value.
If you don't assume the data are sampled from Gaussian distributions, then the Mann-Whitney test may have more power (depending on how far the distribution is from Gaussian). If so, you'd expect the Mann-Whitney test to have the lower P value on average, but the results are not predictable for any particular set of data.
What does "on average" mean? Perform both tests on many sets of (simulated) data. Compute the average P value from the t test, and also the average P value from the Mann-Whitney test. Now compare the two averages. | Comparing P value from t test vs. Mann-Whitney test
It depends.
If you assume that the data are sampled from Gaussian distributions, then the t test has a bit more power (depending on sample size) so will -- on average -- have a lower P value. But only |
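A hedged R sketch of that "on average" comparison for Gaussian samples (sample sizes and effect size are arbitrary):
set.seed(42)
pvals <- replicate(2000, {
  x <- rnorm(20)
  y <- rnorm(20, mean = 0.8)
  c(t = t.test(x, y)$p.value, wilcox = wilcox.test(x, y)$p.value)
})
rowMeans(pvals)   # under normality the t test's average p-value tends to be a bit smaller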
54,548 | Comparing P value from t test vs. Mann-Whitney test | Here's an example showing exactly the behavior @Harvey describes above. We simulate varying degrees of normality by appending an outlier of varying degrees to a random normal draw of sample size 100 and calculate t-test p-values and U-test p-values for each, then plot as a function of how much of an outlier we appended.
For simulations where the outlier is still pretty normal, we get more power from the t-test. For simulations where the outlier is incredibly non-normal, we get more power from the U-test. This is because the t-test's power drops enormously (and appears to asymptote around 0.33) while the U-test's power is fixed without regard to normality (as it should be).
R code for the figure shown below:
library(tidyverse)
map_dfr(2^(1:10), function(non_norm){
bind_rows(replicate(20, {
a <- rnorm(100)
b <- c(rnorm(99, mean = 2), non_norm)
c(stud_t_test=t.test(a, b)$p.value, u_test=wilcox.test(a, b)$p.value)
}, simplify = FALSE)) %>%
mutate(non_norm)
}) %>%
pivot_longer(ends_with("test"), names_to = "test_type", values_to = "p_value") %>%
ggplot(aes(x=non_norm, y=p_value, color=test_type, group=interaction(non_norm, test_type))) +
geom_boxplot() +
scale_x_continuous(trans = "log2", breaks = 2^(1:10)) +
scale_y_log10(breaks = 10^(-(0:3)*10)) +
theme_bw() | Comparing P value from t test vs. Mann-Whitney test | Here's an example showing exactly the behavior @Harvey describes above. We simulate varying degrees of normality by appending an outlier of varying degrees to a random normal draw of sample size 100 a | Comparing P value from t test vs. Mann-Whitney test
Here's an example showing exactly the behavior @Harvey describes above. We simulate varying degrees of normality by appending an outlier of varying degrees to a random normal draw of sample size 100 and calculate t-test p-values and U-test p-values for each, then plot as a function of how much of an outlier we appended.
For simulations where the outlier is still pretty normal, we get more power from the t-test. For simulations where the outlier is incredibly non-normal, we get more power from the U-test. This is because the t-test's power drops enormously (and appears to asymptote around 0.33) while the U-test's power is fixed without regard to normality (as it should be).
R code for the figure shown below:
library(tidyverse)
map_dfr(2^(1:10), function(non_norm){
bind_rows(replicate(20, {
a <- rnorm(100)
b <- c(rnorm(99, mean = 2), non_norm)
c(stud_t_test=t.test(a, b)$p.value, u_test=wilcox.test(a, b)$p.value)
}, simplify = FALSE)) %>%
mutate(non_norm)
}) %>%
pivot_longer(ends_with("test"), names_to = "test_type", values_to = "p_value") %>%
ggplot(aes(x=non_norm, y=p_value, color=test_type, group=interaction(non_norm, test_type))) +
geom_boxplot() +
scale_x_continuous(trans = "log2", breaks = 2^(1:10)) +
scale_y_log10(breaks = 10^(-(0:3)*10)) +
theme_bw() | Comparing P value from t test vs. Mann-Whitney test
Here's an example showing exactly the behavior @Harvey describes above. We simulate varying degrees of normality by appending an outlier of varying degrees to a random normal draw of sample size 100 a |
54,549 | More interpretable measure of association than odds ratios for contingency tables with 0 counts | I am not exactly sure what you want to get finally, but have a look at these mosaic plots, testing independence:
And for the second dataset:
In both cases the data is dependent, but it is dependent in a different manner here: for the first plot we can only tell that 11 is too small (compared to the whole table), while for the second plot we can tell that 0 is too small compared to 17.
P.S. I have cheated here a little to create the second plot, I changed the data to get rid of zero:
3010, 12320
1, 170
UPD 1. R code for the mosaic plots:
x <- matrix(0, ncol=2, nrow=2)
x[1,] <- c(139, 467)
x[2,] <- c(11, 104)
mosaicplot(x, ylab="Exposure", xlab="Disease", shade=TRUE)
UPD 2. Few words about what this plot shows. The area of each cell is proportional to the number of samples with the combination of the properties. That is just an easy way to visualize such tables as provided in the question.
The more interesting thing is the colors ("shade" parameter). It performs something like Chi Squared test: compares the theoretical distribution when the independence hypothesis is true with the given distribution. Then the large deviations are colored: the more significant the deviation, the more saturated the color.
Unfortunately this tool only checks the dependencies in data, but that is usually all you need in case of two variables and four possible observation. | More interpretable measure of association than odds ratios for contingency tables with 0 counts | I am not exactly sure what you want to get finally, but have a look at this mosaic plots, testing independence:
And for the second dataset:
In both cases the data is dependent, but it is dependent i | More interpretable measure of association than odds ratios for contingency tables with 0 counts
I am not exactly sure what you want to get finally, but have a look at this mosaic plots, testing independence:
And for the second dataset:
In both cases the data is dependent, but it is dependent in a different manner here: for the first plot we can only tell that 11 is too small (compared to the whole table), while for the second plot we can tell that 0 is too small compared to 17.
P.S. I have cheated here a little to create the second plot, I changed the data to get rid of zero:
3010, 12320
1, 170
UPD 1. R code for the mosaic plots:
x <- matrix(0, ncol=2, nrow=2)
x[1,] <- c(139, 467)
x[2,] <- c(11, 104)
mosaicplot(x, ylab="Exposure", xlab="Disease", shade=TRUE)
UPD 2. Few words about what this plot shows. The area of each cell is proportional to the number of samples with the combination of the properties. That is just an easy way to visualize such tables as provided in the question.
The more interesting thing is the colors ("shade" parameter). It performs something like Chi Squared test: compares the theoretical distribution when the independence hypothesis is true with the given distribution. Then the large deviations are colored: the more significant the deviation, the more saturated the color.
Unfortunately this tool only checks the dependencies in data, but that is usually all you need in case of two variables and four possible observation. | More interpretable measure of association than odds ratios for contingency tables with 0 counts
I am not exactly sure what you want to get finally, but have a look at this mosaic plots, testing independence:
And for the second dataset:
In both cases the data is dependent, but it is dependent i |
54,550 | More interpretable measure of association than odds ratios for contingency tables with 0 counts | In my mind, as an Epidemiologist, it depends on why there were zero counts.
If there is a particular combination of exposure and disease that is known to be possible but rare, then the usual way to proceed is to add some small number to each cell, usually 0.5 or so, and proceed from there, usually using Exact statistics to produce an OR that is interpretable in the usual fashion.
But its possible that there's a reason that there's a 0 cell count there. It might be etiologic, at which point that's its own major finding, or it might be revealing a flaw in the study design - for some reason you aren't sampling that particular combination, they're being excluded from the study, etc.
Both warrant further investigation, and in both cases you've moved beyond a single number emerging from a contingency table. | More interpretable measure of association than odds ratios for contingency tables with 0 counts | In my mind, as an Epidemiologist, it depends on why there were zero counts.
If there is a particular combination of exposure and disease that is known to be possible but rare, then the usual way to pr | More interpretable measure of association than odds ratios for contingency tables with 0 counts
In my mind, as an Epidemiologist, it depends on why there were zero counts.
If there is a particular combination of exposure and disease that is known to be possible but rare, then the usual way to proceed is to add some small number to each cell, usually 0.5 or so, and proceed from there, usually using Exact statistics to produce an OR that is interpretable in the usual fashion.
But its possible that there's a reason that there's a 0 cell count there. It might be etiologic, at which point that's its own major finding, or it might be revealing a flaw in the study design - for some reason you aren't sampling that particular combination, they're being excluded from the study, etc.
Both warrant further investigation, and in both cases you've moved beyond a single number emerging from a contingency table. | More interpretable measure of association than odds ratios for contingency tables with 0 counts
In my mind, as an Epidemiologist, it depends on why there were zero counts.
If there is a particular combination of exposure and disease that is known to be possible but rare, then the usual way to pr |
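A hedged R sketch of the "add a small number" route, with purely hypothetical counts containing a zero cell:
tab <- matrix(c(25, 150,
                 0, 170), nrow = 2, byrow = TRUE)   # hypothetical exposure-by-disease table
tab_adj <- tab + 0.5                                 # continuity correction
(tab_adj[1, 1] * tab_adj[2, 2]) / (tab_adj[1, 2] * tab_adj[2, 1])  # corrected odds ratio

fisher.test(tab)   # an exact test handles the zero cell without any correction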
54,551 | More interpretable measure of association than odds ratios for contingency tables with 0 counts | A convenient parameterisation of this problem is through the marginal and conditional probabilities. So we have a parameter for exposure $\pi_{E}$ and two parameters for disease given the exposure: $\pi_{D|E}$ and $\pi_{D|\overline{E}}$. Then we do a hypothesis test
$$H_0:\; \pi_{D|E}=\pi_{D|\overline{E}}$$
I would have thought that a good measure of association is $1-P(H_0|nI)$ where $n$ is the sampled counts and $I$ is the prior information/assumptions.
Alternatively, you could calculate the posterior distribution of the difference $\delta=\pi_{D|E}-\pi_{D|\overline{E}}$ or the odds ratio $\gamma=\frac{\pi_{D|E}}{\pi_{D|\overline{E}}}$ if you prefer. One thing to note is that because the disease count is small, your inference will be sensitive to the prior distribution - so a sensitivity check for a few similar priors would be desirable.
There is also a fair amount of literature on Bayesian approaches. here is a useful blog entry which discusses an example of what I'm talking about. | More interpretable measure of association than odds ratios for contingency tables with 0 counts | A convenient parameterisation of this problem is through the marginal and conditional probabilities. So we have a parameter for exposure $\pi_{E}$ and two parameters for disease given the exposure: $ | More interpretable measure of association than odds ratios for contingency tables with 0 counts
A convenient parameterisation of this problem is through the marginal and conditional probabilities. So we have a parameter for exposure $\pi_{E}$ and two parameters for disease given the exposure: $\pi_{D|E}$ and $\pi_{D|\overline{E}}$. Then we do a hypothesis test
$$H_0:\; \pi_{D|E}=\pi_{D|\overline{E}}$$
I would have thought that a good measure of association is $1-P(H_0|nI)$ where $n$ is the sampled counts and $I$ is the prior information/assumptions.
Alternatively, you could calculate the posterior distribution of the difference $\delta=\pi_{D|E}-\pi_{D|\overline{E}}$ or the odds ratio $\gamma=\frac{\pi_{D|E}}{\pi_{D|\overline{E}}}$ if you prefer. One thing to note is that because the disease count is small, your inference will be sensitive to the prior distribution - so a sensitivity check for a few similar priors would be desirable.
There is also a fair amount of literature on Bayesian approaches. here is a useful blog entry which discusses an example of what I'm talking about. | More interpretable measure of association than odds ratios for contingency tables with 0 counts
A convenient parameterisation of this problem is through the marginal and conditional probabilities. So we have a parameter for exposure $\pi_{E}$ and two parameters for disease given the exposure: $ |
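A hedged R sketch of this Bayesian calculation with uniform Beta(1,1) priors, using the counts from the first table in the question and assuming rows index exposure and the first column is disease:
d_exp <- 139; n_exp <- 139 + 467    # exposed group
d_non <- 11;  n_non <- 11 + 104     # unexposed group
p_exp <- rbeta(1e5, d_exp + 1, n_exp - d_exp + 1)
p_non <- rbeta(1e5, d_non + 1, n_non - d_non + 1)
delta <- p_exp - p_non
mean(delta > 0)                     # posterior probability the exposed risk is higher
quantile(delta, c(0.025, 0.975))    # credible interval for the risk difference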
54,552 | How do you calculate the expected value of mixed lognormal distribution? | I'll try to give an answer for the mixture case. Let's formalize the set-up. We consider a random variable $X$ and an indicator random variable $I$, with $P[I=1] = 1-P[I=2] = p$, independent of $X$. Furthermore, for the mixture we have that the law of $X$ given that $I=1$ is the law of $X_1$, which is Gaussian with mean $U_1$ and variance $\sigma^2_1$; and, if $I=2$, the law is that of $X_2$ with law $N(U_2,\sigma^2_2)$.
Then, for $Y=\exp(X)$ we can calculate the expectation as
$$
\begin{align}
E[Y] &= E[\exp(X)] = p E[\exp(X)|I=1] + (1-p) E[\exp(X)|I=2] \\
&= p E[\exp(X_1)] + (1-p) E[\exp(X_2)] \\
&= p \exp(U_1+ \sigma^2_1/2) + (1-p) \exp(U_2+ \sigma^2_2/2)
\end{align}
$$
by using the expectation of a log-normal. | How do you calculate the expected value of mixed lognormal distribution? | I'll try to give an answer for the mixture case. Let's formalize the set-up. We consider a random variable $X$ and an indicator random variable $I$, with $P[I=1] = 1-P[I=2] = p$, independent of $X$. F | How do you calculate the expected value of mixed lognormal distribution?
I'll try to give an answer for the mixture case. Let's formalize the set-up. We consider a random variable $X$ and an indicator random variable $I$, with $P[I=1] = 1-P[I=2] = p$, independent of $X$. Furthermore, for the mixture we have that the law of $X$ given that $I=1$ is the law of $X_1$, which is Gaussian with mean $U_1$ and variance $\sigma^2_1$; and, if $I=2$, the law is that of $X_2$ with law $N(U_2,\sigma^2_2)$.
Then, for $Y=\exp(X)$ we can calculate the expectation as
$$
\begin{align}
E[Y] &= E[\exp(X)] = p E[\exp(X)|I=1] + (1-p) E[\exp(X)|I=2] \\
&= p E[\exp(X_1)] + (1-p) E[\exp(X_2)] \\
&= p \exp(U_1+ \sigma^2_1/2) + (1-p) \exp(U_2+ \sigma^2_2/2)
\end{align}
$$
by using the expectation of a log-normal. | How do you calculate the expected value of mixed lognormal distribution?
I'll try to give an answer for the mixture case. Let's formalize the set-up. We consider a random variable $X$ and an indicator random variable $I$, with $P[I=1] = 1-P[I=2] = p$, independent of $X$. F |
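A hedged numerical check of the formula, with arbitrary illustrative parameter values:
set.seed(1)
p <- 0.3; U1 <- 0; s1 <- 0.5; U2 <- 1; s2 <- 0.25
I <- rbinom(1e6, 1, p)
X <- ifelse(I == 1, rnorm(1e6, U1, s1), rnorm(1e6, U2, s2))
mean(exp(X))                                          # simulated E[Y]
p * exp(U1 + s1^2/2) + (1 - p) * exp(U2 + s2^2/2)     # closed form; the two should agree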
54,553 | How do you calculate the expected value of mixed lognormal distribution? | For instance, if you are dealing with a two component mixture, the first moment is calculated as follows:
$\mu_{\rm mixture}=\pi_{1} \cdot \mu_{1} + \pi_{2} \cdot \mu_{2}$,
where $\pi_{i}\ (i=1,2)$ stands for the components' weights and $\mu_{i}\ (i=1,2)$ for the means. | How do you calculate the expected value of mixed lognormal distribution? | For instance, if you are dealing with a two component mixture, the first moment is calculated as follows:
$\mu_{\rm mixture}=\pi_{1} \cdot \mu_{1} + \pi_{2} \cdot \mu_{2}$,
where $\pi_{i}\ (i=1,2)$ s | How do you calculate the expected value of mixed lognormal distribution?
For instance, if you are dealing with a two component mixture, the first moment is calculated as follows:
$\mu_{\rm mixture}=\pi_{1} \cdot \mu_{1} + \pi_{2} \cdot \mu_{2}$,
where $\pi_{i}\ (i=1,2)$ stands for the components' weights and $\mu_{i}\ (i=1,2)$ for the means. | How do you calculate the expected value of mixed lognormal distribution?
For instance, if you are dealing with a two component mixture, the first moment is calculated as follows:
$\mu_{\rm mixture}=\pi_{1} \cdot \mu_{1} + \pi_{2} \cdot \mu_{2}$,
where $\pi_{i}\ (i=1,2)$ s |
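A quick check with made-up numbers: a mixture with weights $\pi_1 = 0.3$, $\pi_2 = 0.7$ and component means $\mu_1 = 2$, $\mu_2 = 10$ has mean $0.3 \cdot 2 + 0.7 \cdot 10 = 7.6$.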
54,554 | Using AIC, for model selection when both models are equally weighted, and one model has fewer parameters | There was a fairly good commentary in the Journal of Wildlife Management concerning uninformative parameters within the AIC framework.
Arnold, T. W. 2010. Uninformative parameters and model selection using Akaike's Information Criterion. Journal of Wildlife Management 74:1175-1178. [Link].
We usually consider models within 2 delta AIC as competitive. However, if a model has an addition of only one parameter to its competitor and that parameter is not significant, that parameter is likely spurious. AIC = -2LL + 2K, so the penalty for adding one parameter is +2 AIC. If only one parameter is added but the AIC is within 2 delta AIC, the model fit was not improved enough to overcome the penalty. Therefore, that parameter is uninformative and should not be included in the model or interpreted as having an effect. | Using AIC, for model selection when both models are equally weighted, and one model has fewer parame | There was a fairly good commentary in the Journal of Wildlife Management concerning uninformative parameters within the AIC framework. | Using AIC, for model selection when both models are equally weighted, and one model has fewer parameters
Arnold, T. W. 2010. Uninformative parameters and model selectio | Using AIC, for model selection when both models are equally weighted, and one model has fewer parameters
There was a fairly good commentary in the Journal of Wildlife Management concerning uninformative parameters within the AIC framework.
Arnold, T. W. 2010. Uninformative parameters and model selection using Akaike's Information Criterion. Journal of Wildlife Management 74:1175-1178. [Link].
We usually consider models within 2 delta AIC as competitive. However, if a model has an addition of only one parameter to its competitor and that parameter is not significant, that parameter is likely spurious. AIC = -2LL + 2K, so the penalty for adding one parameter is +2 AIC. If only one parameter is added but the AIC is within 2 delta AIC, the model fit was not improved enough to overcome the penalty. Therefore, that parameter is uninformative and should not be included in the model or interpreted as having an effect. | Using AIC, for model selection when both models are equally weighted, and one model has fewer parameters
There was a fairly good commentary in the Journal of Wildlife Management concerning uninformative parameters within the AIC framework.
Arnold, T. W. 2010. Uninformative parameters and model selectio |
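A hedged R illustration of the arithmetic, with a deliberately uninformative extra predictor:
set.seed(1)
d <- data.frame(x = rnorm(100), junk = rnorm(100))
d$y <- 2 + 0.5 * d$x + rnorm(100)
a <- lm(y ~ x, data = d)
b <- lm(y ~ x + junk, data = d)
AIC(b) - AIC(a)   # equals 2 minus twice the (tiny) log-likelihood gain from junk
Whatever the seed, the point is the same: an extra parameter that buys essentially no likelihood can still leave the larger model within 2 delta AIC, so "competitive" does not mean "informative".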
54,555 | Using AIC, for model selection when both models are equally weighted, and one model has fewer parameters | I couldn't find AICmodelSelect in any R package, searching in both R with ?? and Google. What package did you use? Or is it R?
In any case, if the log likelihoods are equal and the models have different numbers of parameters, then the AIC are not equal, which is what you have entered. The formula for AIC is
$AIC = 2k - 2\ln(L)$
where $k$ is the number of parameters and $\ln(L)$ is the maximized log-likelihood.
In your case the two AICs would be 6 + 10182.0284 and 4 + 10182.0284, the second is smaller and that is the model you should choose, based on AIC. | Using AIC, for model selection when both models are equally weighted, and one model has fewer parame | I couldn't find AICmodelSelect in any R package, searching in both R with ?? and Google. What package did you use? Or is it R?
In any case, if the log likelihoods are equal and the models have differe | Using AIC, for model selection when both models are equally weighted, and one model has fewer parameters
I couldn't find AICmodelSelect in any R package, searching in both R with ?? and Google. What package did you use? Or is it R?
In any case, if the log likelihoods are equal and the models have different numbers of parameters, then the AIC are not equal, which is what you have entered. The formula for AIC is
$AIC = 2k - 2\ln(L)$
where $k$ is the number of parameters and $\ln(L)$ is the maximized log-likelihood.
In your case the two AICs would be 6 + 10182.0284 and 4 + 10182.0284, the second is smaller and that is the model you should choose, based on AIC. | Using AIC, for model selection when both models are equally weighted, and one model has fewer parame
I couldn't find AICmodelSelect in any R package, searching in both R with ?? and Google. What package did you use? Or is it R?
In any case, if the log likelihoods are equal and the models have differe |
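A hedged R sketch of this bookkeeping, using a built-in data set as a stand-in since the original AICmodelSelect call is unknown:
fit1 <- lm(mpg ~ wt,      data = mtcars)
fit2 <- lm(mpg ~ wt + hp, data = mtcars)
AIC(fit1, fit2)                         # R's own AIC
2 * 3 - 2 * as.numeric(logLik(fit1))    # by hand: k = 3 (two coefficients plus the error variance)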
54,556 | Using AIC, for model selection when both models are equally weighted, and one model has fewer parameters | Why do people strictly rely upon a criterion (i.e. AIC) to determine the "best" model? Why not use the principle of N.I.I.D. and parsimony as the guide instead of a fit statistic? Sure, we can compare variances afterwards to see who had the better model, but this whole rule-based way of modeling is contrary to what I believe in.
As you may know, N.I.I.D. is what we are first taught in time series analysis that the errors should be gaussian. By using AIC or BIC criterion to build a model you are losing the goal of building a model and more of fitting. I have found that instead of using AIC that focusing on the N.I.I.D. of the errors, significance of parameters and parsimony you will have a better model using an Identification scheme focused on robustified ACF and PACF. | Using AIC, for model selection when both models are equally weighted, and one model has fewer parame | Want to improve this post? Add citations from reputable sources by editing the post. Posts with unsourced content may be edited or deleted.
| Using AIC, for model selection when both models are equally weighted, and one model has fewer parameters
Why do people strictly rely upon a criterion (i.e. AIC) to determine the "best" model? Why not use the principle of N.I.I.D. and parsimony as the guide instead of a fit statistic? Sure, we can compare variances afterwards to see who had the better model, but this whole rule-based way of modeling is contrary to what I believe in.
As you may know, N.I.I.D. is what we are first taught in time series analysis that the errors should be gaussian. By using AIC or BIC criterion to build a model you are losing the goal of building a model and more of fitting. I have found that instead of using AIC that focusing on the N.I.I.D. of the errors, significance of parameters and parsimony you will have a better model using an Identification scheme focused on robustified ACF and PACF. | Using AIC, for model selection when both models are equally weighted, and one model has fewer parame
|
54,557 | Why don't the results of testing $H_0 : \beta = 0$ and $H_0 : {\rm cor}(X,Y)=0$ agree? | The reason is that you're testing two different hypotheses:
the Pearson correlation test is testing whether there is a non-zero correlation between the given predictor and the response variable, not taking into account the context supplied by the other predictors.
The $t$-test for the regression coefficient is testing whether that predictor has a non-zero effect when the other predictors are in the model.
The two need not agree when some of the predictive power of a given predictor is subsumed by another predictor (or predictors). This often happens when there is collinearity. For example, suppose that you have two predictors $X_1, X_2$ that are highly correlated with each other and are also highly correlated with the response, $Y$. Then it is quite likely that both will produce a significant result from the Pearson correlation test but, most likely, only one (or neither) of the two predictors will be significant when you enter them into the model simultaneously. Here is an example in R (unnecessary output lines were deleted):
x1 = rnorm(200)
x2 = .9*x1 + sqrt(1-.9^2)*rnorm(200)
y = 1 + 2*x1 + rnorm(200,sd=5)
# Pearson correlation test.
cor.test(x1,y)$p.value
[1] 6.002424e-07
cor.test(x2,y)$p.value
[1] 3.473047e-07
# linear regression
summary(lm(y~x1+x2))
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 1.3835 0.3445 4.016 8.4e-05 ***
x1 0.8621 0.8069 1.068 0.287
x2 1.1716 0.7893 1.484 0.139
What you may be thinking of is that when you're fitting a simple linear regression model, i.e. a regression with only one predictor, the Pearson correlation test will agree with the $t$-test of the regression coefficient:
summary( lm(y~x1) )
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 1.3369 0.3441 3.886 0.000139 ***
x1 1.9249 0.3731 5.159 6e-07 ***
In that case, they actually are testing the same hypothesis - i.e. "is $X_1$ linearly related to $Y$?" - and it turns out that the hypothesis tests are actually exactly the same, so the $p$-values will be identical. | Why don't the results of testing $H_0 : \beta = 0$ and $H_0 : {\rm cor}(X,Y)=0$ agree? | The reason is that you're testing two different hypotheses:
the Pearson correlation test is testing whether there is a non-zero correlation between the given predictor and the response variable, not | Why don't the results of testing $H_0 : \beta = 0$ and $H_0 : {\rm cor}(X,Y)=0$ agree?
The reason is that you're testing two different hypotheses:
the Pearson correlation test is testing whether there is a non-zero correlation between the given predictor and the response variable, not taking into account the context supplied by the other predictors.
The $t$-test for the regression coefficient is testing whether that predictor has a non-zero effect when the other predictors are in the model.
The two need not agree when some of the predictive power of a given predictor is subsumed by another predictor (or predictors). This often happens when there is collinearity. For example, suppose that you have two predictors $X_1, X_2$ that are highly correlated with each other and are also highly correlated with the response, $Y$. Then it is quite likely that both will produce a significant result from the Pearson correlation test but, most likely, only one (or neither) of the two predictors will be significant when you enter them into the model simultaneously. Here is an example in R (unnecessary output lines were deleted):
x1 = rnorm(200)
x2 = .9*x1 + sqrt(1-.9^2)*rnorm(200)
y = 1 + 2*x1 + rnorm(200,sd=5)
# Pearson correlation test.
cor.test(x1,y)$p.value
[1] 6.002424e-07
cor.test(x2,y)$p.value
[1] 3.473047e-07
# linear regression
summary(lm(y~x1+x2))
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 1.3835 0.3445 4.016 8.4e-05 ***
x1 0.8621 0.8069 1.068 0.287
x2 1.1716 0.7893 1.484 0.139
What you may be thinking of is that when you're fitting a simple linear regression model, i.e. a regression with only one predictor, the Pearson correlation test will agree with the $t$-test of the regression coefficient:
summary( lm(y~x1) )
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 1.3369 0.3441 3.886 0.000139 ***
x1 1.9249 0.3731 5.159 6e-07 ***
In that case, they actually are testing the same hypothesis - i.e. "is $X_1$ linearly related to $Y$?" - and it turns out that the hypothesis tests are actually exactly the same, so the $p$-values will be identical. | Why don't the results of testing $H_0 : \beta = 0$ and $H_0 : {\rm cor}(X,Y)=0$ agree?
The reason is that you're testing two different hypotheses:
the Pearson correlation test is testing whether there is a non-zero correlation between the given predictor and the response variable, not |
54,558 | Bias in classifier model selection | The key question is "have the test examples in the final cross-validation been involved in selecting any aspect of the model"; if the answer is "yes" then the performance estimate is likely to be biased. If the answer is "no" then it is probably unbiased.
For example nested cross-validation is fine (as all model choices are determined using only the examples in the "training" partition of the outer cross-validation. If you use repeated cross-validation to set the hyper-parameters or select features and then use repeated cross-validation (using a different set of random partitionings) then that will give a biased performance estimate, as all of the data have influenced a choice about the model evaluated by the second cross-validation. | Bias in classifier model selection | The key question is "have the test examples in the final cross-validation been involved in selecting any aspect of the model"; if the answer is "yes" then the performance estimate is likely to be bias | Bias in classifier model selection
The key question is "have the test examples in the final cross-validation been involved in selecting any aspect of the model"; if the answer is "yes" then the performance estimate is likely to be biased. If the answer is "no" then it is probably unbiased.
For example nested cross-validation is fine (as all model choices are determined using only the examples in the "training" partition of the outer cross-validation. If you use repeated cross-validation to set the hyper-parameters or select features and then use repeated cross-validation (using a different set of random partitionings) then that will give a biased performance estimate, as all of the data have influenced a choice about the model evaluated by the second cross-validation. | Bias in classifier model selection
The key question is "have the test examples in the final cross-validation been involved in selecting any aspect of the model"; if the answer is "yes" then the performance estimate is likely to be bias |
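A minimal hedged skeleton of nested cross-validation; dat, the outcome column y, and the helpers tune_inner() and fit_model() are hypothetical stand-ins for whatever hyper-parameter search and learner are being evaluated:
set.seed(1)
outer_fold <- sample(rep(1:5, length.out = nrow(dat)))
outer_acc <- sapply(1:5, function(k) {
  train <- dat[outer_fold != k, ]
  test  <- dat[outer_fold == k, ]
  best  <- tune_inner(train)             # inner CV sees only the outer training partition
  model <- fit_model(train, best)
  mean(predict(model, test) == test$y)   # the test partition never influenced any choice
})
mean(outer_acc)                          # (nearly) unbiased performance estimate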
54,559 | Bias in classifier model selection | There is a difference between repeated cross-validation and nested-cross validation. The latter is useful for determining hyper-parameters and selecting features.
I've seen a couple of recent papers about the bias-variance implications of repeated cross-validation. Rodriguez and Lozano (IEEE T.PAMI 2010) test on artificial datasets (based on parameterisations of a single mixture model) and conclude that repeated cross-validation is useful and reduces variance, while the inner k-fold cross-validation gives a tradeoff between bias and variance (with k = 5 or 10 recommended for comparing algorithms as a reasonable tradeoff) - they used 10 repetitions, but anything from 2 or 3 to 20 or 30 is reasonable in my experience. The exact tradeoff and relation to the "true" accuracy depends on the dataset.
Vanwinckelen and Blockeel (2012) explore with 9 of the larger UCI datasets, with subsets of 200 and 1000 used for cross-validation, and the full dataset used to approximate the full population. For 10-CV typically all but a couple of "true" accuracies are within the confidence interval determined by the 10-CV, but for 10x10-CV and 30x10-CV all but a couple are outside the confidence interval. Also for all but a couple, the difference between the estimated and true accuracy is better for the 1000 than the 200 samples. These datasets vary in size (and we may already use 30% of the data) so their representativeness of the population is an unwarranted assumption, and this usage is thus in fact also artificial. But the contradictory results of the two papers nonetheless do seem valid for their data, however, I think the truth, and the ideal approach, lies somewhere between.
For CxK-CV, increasing C by a factor of four halves the size of the confidence interval. But you are still using the same data in different ways, and this apparent reduction of variance will in the end become increasingly spurious (because the independence assumption is violated). For the artificial data with the simple mixture model of the first study, 10x10-CV seems to remain within the useful range, but for most of the real datasets, the 10 repetitions already seem to be too much.
I tend to use 2x5-CV (not 5x2-CV as recommended by Dietterich) if I am not too tight on data. Where we are really scrabbling for enough data (in very large very hard signal processing problems) but can't afford to do LOO, we use Cx20-CV with C up to 10, but using an early-stopping significance-estimating technique to stop when no significant improvement can be expected, typically avoiding half the runs, suggesting that C of 5 suffices.
See:
David M W Powers and Adham Atyabi, "The Problem of Cross-Validation: Averaging and Bias, Repetition and Significance", Spring World Congress on Engineering and Technology, Xian China, May 2012, IEEE USA, V2:93-97
What is lacking at the moment is a good way of seeing how many repetitions are useful, and when the reduction in variance that increasing C in CxK-CV purports to achieve ceases to be real. The repetition count C of 5 is a compromise between the for and against recommendations of the two papers I cited earlier; using the original variance for the confidence interval, with the repetition only to improve the point estimate, is a suggestion of the 'against' paper. But really we need a method of assessing when this happens - an early-stopping technique like the one in our paper that avoids being led astray by underestimation. Although we saw no sign of this in our studies on real data, we really have no way of knowing, as we used all the data we had available for the CxK-CV. | Bias in classifier model selection | There is a difference between repeated cross-validation and nested-cross validation. The latter is useful for determining hyper-parameters and selecting features.
I've seen a couple of recent paper | Bias in classifier model selection
There is a difference between repeated cross-validation and nested-cross validation. The latter is useful for determining hyper-parameters and selecting features.
I've seen a couple of recent papers about the bias-variance implications of repeated cross-validation. Rodriguez and Lozano (IEEE T.PAMI 2010) test on artificial datasets (based on parameterisations of a single mixture model) and conclude that repeated cross-validation is useful and reduces variance, while the inner k-fold cross-validation gives a tradeoff between bias and variance (with k = 5 or 10 recommended for comparing algorithms as a reasonable tradeoff) - they used 10 repetitions, but anything from 2 or 3 to 20 or 30 is reasonable in my experience. The exact tradeoff and relation to the "true" accuracy depends on the dataset.
Vanwinckelen and Blockeel (2012) explore with 9 of the larger UCI datasets, with subsets of 200 and 1000 used for cross-validation, and the full dataset used to approximate the full population. For 10-CV typically all but a couple of "true" accuracies are within the confidence interval determined by the 10-CV, but for 10x10-CV and 30x10-CV all but a couple are outside the confidence interval. Also for all but a couple, the difference between the estimated and true accuracy is better for the 1000 than the 200 samples. These datasets vary in size (and we may already use 30% of the data) so their representativeness of the population is an unwarranted assumption, and this usage is thus in fact also artificial. But the contradictory results of the two papers nonetheless do seem valid for their data, however, I think the truth, and the ideal approach, lies somewhere between.
For CxK-CV increase C by a factor of four halves the size of the confidence interval. But you are still using the same data different ways, and this apparent reduction of variance will in the end become increasingly spurious (because the independence assumption is violated). For the artificial data with the simple mixture model of the first study, 10x10-CV seems to remain within the useful range, but for most of the real datasets, the 10 repetitions seems already to be too much.
I tend to use 2x5-CV (not 5x2-CV as recommended by Dietterich) if I am not too tight on data. Where we are really scrabbling for enough data (in very large very hard signal processing problems) but can't afford to do LOO, we use Cx20-CV with C up to 10, but using an early-stopping significance-estimating technique to stop when no significant improvement can be expected, typically avoiding half the runs, suggesting that C of 5 suffices.
See:
David M W Powers and Adham Atyabi, βThe Problem of Cross-Validation: Averaging and Bias, Reptition and Significanceβ, Spring World Congress on Engineering and Technology, Xian China, May 2012, IEEE USA, V2:93-97
What is lacking at the moment is good ways of seeing how many repetitions are useful, and when the reduction in variance increasing CxK-CV purports to achieve actually ceases to be real. The repetition count C of 5 is a compromise between the for and against recommendations of the two papers I cited earlier, and using the original variance for the confidence interval, and the repetition just to improve the estimate, is a suggestion of the agin paper. But really we need a method of assessing when this is, an early stopping technique like in our paper that avoids being led astray by underestimation - though we saw no sign of this in our studies on real data we really have no way of knowing as we used all the data we available for the CxK-CV. | Bias in classifier model selection
There is a difference between repeated cross-validation and nested-cross validation. The latter is useful for determining hyper-parameters and selecting features.
I've seen a couple of recent paper |
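To illustrate the CxK-CV point numerically (simulated data and names, nothing from the cited papers): the spread of the C repeated estimates shrinks as C grows, but every repetition recycles the same n observations, so the shrinkage should not be read as genuine extra precision.
set.seed(1)
dat <- data.frame(y = rbinom(200, 1, 0.5), x1 = rnorm(200), x2 = rnorm(200))
cv_err <- function(K = 10) {                   # one K-fold cross-validation estimate
  fold <- sample(rep(1:K, length.out = nrow(dat)))
  mean(sapply(1:K, function(k) {
    fit <- glm(y ~ x1 + x2, binomial, data = dat[fold != k, ])
    p <- predict(fit, dat[fold == k, ], type = "response")
    mean((p > 0.5) != dat$y[fold == k])
  }))
}
reps <- replicate(10, cv_err())                # 10 x 10-CV
c(mean = mean(reps), sd = sd(reps))            # the sd across repetitions looks small, but
                                               # the repetitions are far from independent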
54,560 | Does a regression tree strictly dominate OLS in prediction? | No. Regression trees do not dominate OLS regression. OLS regression is intended for models where you want to estimate $E[Y|X]$ where $X$ is a set of predictors and the residuals from the model are continuous and Gaussian with mean $0$. Under that setting OLS should be superior to the regression tree. Remember that the regression model takes account of the value of the covariate whereas the tree splits or partitions it into discrete segments and thus does not fully use all the information in the data other than to use it to find the best places to split. On the other hand, when there are outliers or the residual component is very non-normal, the OLS regression puts too much weight on outliers and leverage points and the regression tree or a robust linear regression may do much better. Also, even though you may be more comfortable with $R^2$, $p$-values and regression coefficients, one of the important points about CART that Richard Olshen pointed out in the CART book is that when they applied classification and regression trees to medical problems the physicians found the tree structure very intuitive and more believable than a linear regression or linear discriminant analysis. | Does a regression tree strictly dominate OLS in prediction? | No Regression trees do not dominate OLS regression. OLS regression is intended for models where you want to estimate $E[Y|X]$ where $X$ is a set of predictors and the residuals from the model are con | Does a regression tree strictly dominate OLS in prediction?
No Regression trees do not dominate OLS regression. OLS regression is intended for models where you want to estimate $E[Y|X]$ where $X$ is a set of predictors and the residuals from the model are continuous and Gaussian with mean $0$. Under that setting OLS should be superior to the regression tree. Remember that the regression model takes acoount of the value of the covariate whereas the tree splits or partitions it into discrete segments and thus does not fully use all the information in the data other than to use it to find the best places to split. On the other hand when there are outliers or the residual component is very non normal the OLS regression puts too much weight on outliers and leverage points and the regression tree or a robust linear regression may do much better. Also even though you may be more comfortable with $R^2$, $p$-values and regression coefficients one of the importnat points about CART that Richard Olshen pointed out in the CART book is that when they applied classification and regression trees to medical problems the physicians found the tree structure very intuitive and more believable than a linear regression or linear discriminant analysis. | Does a regression tree strictly dominate OLS in prediction?
No Regression trees do not dominate OLS regression. OLS regression is intended for models where you want to estimate $E[Y|X]$ where $X$ is a set of predictors and the residuals from the model are con |
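A quick simulated check of the first point, with rpart standing in for a regression tree (all names and numbers are illustrative): when the truth really is linear with Gaussian errors, OLS wins, because the tree can only approximate the line by a step function.
library(rpart)
set.seed(1)
n <- 500
train <- data.frame(x = runif(n, -2, 2)); train$y <- 1 + 2 * train$x + rnorm(n)
test  <- data.frame(x = runif(n, -2, 2)); test$y  <- 1 + 2 * test$x + rnorm(n)
mse <- function(pred) mean((test$y - pred)^2)
mse(predict(lm(y ~ x, data = train), test))      # close to the irreducible noise variance of 1
mse(predict(rpart(y ~ x, data = train), test))   # noticeably larger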
54,561 | How to do binary logistic regression on people (couples) clustered within homes? | For me, this sounds like a (more or less typical) dyadic data set and I would definitely control for dyadic dependencies (i.e. at the household level) via multilevel/structural equation modeling.
David Kenny maintains a great website on Dyadic Analysis. He is also a co-author of a book on Dyadic Data Analysis that is highly recommended.
Since you seem to use Stata, I would use the xtmelogit command (see here for more information). | How to do binary logistic regression on people (couples) clustered within homes? | For me, this sounds like a (more or less typical) dyadic data set and I would definitely control for dyadic dependencies (i.e. at the houshold level) via multilevel/structural equation modeling.
Davi | How to do binary logistic regression on people (couples) clustered within homes?
For me, this sounds like a (more or less typical) dyadic data set and I would definitely control for dyadic dependencies (i.e. at the houshold level) via multilevel/structural equation modeling.
David Kenny owns a great website on Dyadic Analysis. He also is co-author of a book on Dyadic Data Analysis that is highly recommanded.
Since you seem to use Stata, I would use the xtmelogit command (see here for more information). | How to do binary logistic regression on people (couples) clustered within homes?
For me, this sounds like a (more or less typical) dyadic data set and I would definitely control for dyadic dependencies (i.e. at the houshold level) via multilevel/structural equation modeling.
Davi |
54,562 | How to do binary logistic regression on people (couples) clustered within homes? | One assumption of fixed-effects general linear models (e.g. "ordinary" logistic regression) is that observations are independent of each other. However, there is likely some dependency in the observations in your study. For example, two people living in the same household are more likely to have similar diets and similar levels of physical activity than two people living in separate households.
I would consider modelling the data using either a logistic or Poisson mixed-effects model. The fixed effects would be your measured exposure covariates. The random effect would be the household. I am not particularly familiar with Stata's mixed effects syntax. In R, for a logistic mixed-effects model, I would call glmer(outcome ~ exposure1 + exposure2 + (1|household), data = study.data, family = binomial). A quick Google search suggests that the equivalent in Stata would be xtmelogit outcome exposure1 exposure2 || household. | How to do binary logistic regression on people (couples) clustered within homes? | One assumption of fixed-effects general linear models (e.g. "ordinary" logistic regression) is that observations are independent of each other. However, there is likely some dependency in the observat | How to do binary logistic regression on people (couples) clustered within homes?
One assumption of fixed-effects general linear models (e.g. "ordinary" logistic regression) is that observations are independent of each other. However, there is likely some dependency in the observations in your study. For example, two people living in the same household are more likely to have similar diets and similar levels of physical activity than two people living in separate households.
I would consider modelling the data using either a logistic or Poisson mixed-effects model. The fixed effects would be your measured exposure covariates. The random effect would be the household. I am not particularly famililar with Stata's mixed effects syntax. In R, for a logistic mixed-effects model, I would call glmer(outcome ~ exposure1 + exposure2 + (1|household), data = study.data, family = binomial). A quick Google search suggests that the equivalent in Stata would be xtmelogit outcome exposure1 exposure2 || household. | How to do binary logistic regression on people (couples) clustered within homes?
One assumption of fixed-effects general linear models (e.g. "ordinary" logistic regression) is that observations are independent of each other. However, there is likely some dependency in the observat |
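A self-contained sketch of that glmer call (the data are simulated; the variable names echo those in the answer but the coefficients are invented):
library(lme4)
set.seed(1)
n_house <- 150
d <- data.frame(house = rep(1:n_house, each = 2),            # two partners per household
                exposure1 = rnorm(2 * n_house),
                exposure2 = rbinom(2 * n_house, 1, 0.5))
u <- rep(rnorm(n_house), each = 2)                           # shared household effect
d$outcome <- rbinom(2 * n_house, 1,
                    plogis(-1 + 0.8 * d$exposure1 + 0.5 * d$exposure2 + u))
fit <- glmer(outcome ~ exposure1 + exposure2 + (1 | house),
             data = d, family = binomial)
summary(fit)   # the household variance component captures the within-couple dependence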
54,563 | How to do binary logistic regression on people (couples) clustered within homes? | I see two possibilities. One would be to apply separate models (one for the female and one for the male member of the couple). The second possibility would be to have one model with an indicator variable to distinguish the male member from the female member of the couple. | How to do binary logistic regression on people (couples) clustered within homes? | I see two possibilities. One would be to apply separate models (one for the female and one for the male member of the couple). The second possibility would be to have one model with an indicator vari | How to do binary logistic regression on people (couples) clustered within homes?
I see two possibilities. One would be to apply separate models (one for the female and one for the male member of the couple). The second possibility would be to have one model with an indicator variable to distinguish the male member from the female member of the couple. | How to do binary logistic regression on people (couples) clustered within homes?
I see two possibilities. One would be to apply separate models (one for the female and one for the male member of the couple). The second possibility would be to have one model with an indicator vari |
54,564 | Does autocorrelation cause bias in the regression parameters in piecewise regression? | A regression parameter that is often forgotten is the variance of the residuals. This one will be biased if residuals are correlated. This means that p-values of whatever test you are performing have to be handled with great care.
Otherwise, if you fit a single line through something that is not linear (your case), you should observe auto-correlation of the residuals, but through the X variable, not through time. In that case the parameters are not biased, they are just wrong.
However you specifically mention that your residuals are auto-correlated in time, so you could perhaps add time as a variable in your model and check whether this decorrelates the residuals. | Does autocorrelation cause bias in the regression parameters in piecewise regression? | A regression parameter that is often forgotten is the variance of the residuals. This one will be biased if residuals are correlated. This means that p-values of whatever test you are performing have | Does autocorrelation cause bias in the regression parameters in piecewise regression?
A regression parameter that is often forgotten is the variance of the residuals. This one will be biased if residuals are correlated. This means that p-values of whatever test you are performing have to be handled with great care.
Otherwise, if you fit a single line through something that is not linear (your case), you should observe auto-correlation of the residuals, but through the X variable, not through time. In that case the parameters are not biased, they are just wrong.
However you specifically mention that your residuals are auto-correlated in time, so you could perhaps add time as a variable in your model and check whether this decorrelates the residuals. | Does autocorrelation cause bias in the regression parameters in piecewise regression?
A regression parameter that is often forgotten is the variance of the residuals. This one will be biased if residuals are correlated. This means that p-values of whatever test you are performing have |
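One way to act on that last suggestion in R (simulated series, illustrative names): look at the residual ACF and, if autocorrelation remains, refit with an explicit AR(1) error structure so the standard errors are not overstated in precision.
library(nlme)
set.seed(1)
d <- data.frame(tm = 1:100)
d$y <- 2 + 0.05 * d$tm + as.numeric(arima.sim(list(ar = 0.7), n = 100))
ols <- lm(y ~ tm, data = d)
acf(resid(ols))                                           # clear lag-1 autocorrelation
fit <- gls(y ~ tm, data = d, correlation = corAR1(form = ~ tm))
summary(fit)$tTable                                       # larger, more honest standard errors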
54,565 | Does autocorrelation cause bias in the regression parameters in piecewise regression? | Thanks for sharing your data. It raises some interesting answers. To begin with a potentially useful model between y and x is which suggests a strong relationship between y and two previous y's and both a contemporaneous and lag 1 effect of X. The plot of actual/fit and forecast is and the cleansed ( outlier adjusted series) is does not suggest level shifts and/or local time trends but rather a few one-time anomalies. The ACF of the original series is while the ACF of the model residuals is . In conclusion there is no need for "local splines", local trends , level shifts in the presence of the x variable which carries the load for the visually suggestive non-conditional plot of y .Now if we ignore the x variable and any memory in y (the ARIMA structure ) and only focus on detecting and incorporating any needed pulses, level shifts, seasonal pulses and.or local time trends we get a totally different answer. Here is the equation showing two time trends and two level shifts and a few pulses reflecting unknown exceptional activity. The actual/fit/forecast is . The acf of the residuals suggests some omitted ( by design/specification of no ARIMA ) memory structure ! and a plot of the residuals . The fit/fore graphic tells the story of the equation in a visual way . As I commented to my friend Michael, the data suggests appropriate remedies. In summary in the absence of x and the memory of y there are a few "local splines" whose length and time range can be found analytically. If x is included and the past of y is considered then there is no additional need for these "local time trends' I hope this has been of help to the original poster and to others who have commented here. I notice that I misnamed the data ringwald rather than ringold. For transparency I am one of the developers of AUTOBOX , the software that I used for this analysis. The methods used are based upon the pioneering work of G.C.Tiao and others. | Does autocorrelation cause bias in the regression parameters in piecewise regression? | Thanks for sharing your data. It raises some interesting answers. To begin with a potentially useful model between y and x is which suggests a strong relationship between y and two previous y's and bo | Does autocorrelation cause bias in the regression parameters in piecewise regression?
Thanks for sharing your data. It raises some interesting answers. To begin with a potentially useful model between y and x is which suggests a strong relationship between y and two previous y's and both a contemporaneous and lag 1 effect of X. The plot of actual/fit and forecast is and the cleansed ( outlier adjusted series) is does not suggest level shifts and/or local time trends but rather a few one-time anomalies. The ACF of the original series is while the ACF of the model residuals is . In conclusion there is no need for "local splines", local trends , level shifts in the presence of the x variable which carries the load for the visually suggestive non-conditional plot of y .Now if we ignore the x variable and any memory in y (the ARIMA structure ) and only focus on detecting and incorporating any needed pulses, level shifts, seasonal pulses and.or local time trends we get a totally different answer. Here is the equation showing two time trends and two level shifts and a few pulses reflecting unknown exceptional activity. The actual/fit/forecast is . The acf of the residuals suggests some omitted ( by design/specification of no ARIMA ) memory structure ! and a plot of the residuals . The fit/fore graphic tells the story of the equation in a visual way . As I commented to my friend Michael, the data suggests appropriate remedies. In summary in the absence of x and the memory of y there are a few "local splines" whose length and time range can be found analytically. If x is included and the past of y is considered then there is no additional need for these "local time trends' I hope this has been of help to the original poster and to others who have commented here. I notice that I misnamed the data ringwald rather than ringold. For transparency I am one of the developers of AUTOBOX , the software that I used for this analysis. The methods used are based upon the pioneering work of G.C.Tiao and others. | Does autocorrelation cause bias in the regression parameters in piecewise regression?
Thanks for sharing your data. It raises some interesting answers. To begin with a potentially useful model between y and x is which suggests a strong relationship between y and two previous y's and bo |
54,566 | Does autocorrelation cause bias in the regression parameters in piecewise regression? | I think piecewise regression means fitting several different lines at various cut points. It is not clear whether the number of cutoffs is prespecified and whether their locations are prespecified. Even if they are all prespecified it seems that each piece would be fit by ordinary regression and the problem of correlationed residuals and under or overestimated residual variance would be there in each line. There is also another assumption not stated. Are the residuals for each line assumed to have the same variance as for all the others? So the problem exists and may be worse in this more complex type of regression. Regarding IrishStat's comment. I think he is right for the piecewise model because the breakpoints could be time interventions that affect the stationary component and possibly also the nonstationary component of the model. But in ordinary harmonic regression the nonstationary seasonal component is modelled by sine and cosine functions of t and the residuals could be white noises or they could be model as a stationary process in time such as an AR(p) model. | Does autocorrelation cause bias in the regression parameters in piecewise regression? | I think piecewise regression means fitting several different lines at various cut points. It is not clear whether the number of cutoffs is prespecified and whether their locations are prespecified. | Does autocorrelation cause bias in the regression parameters in piecewise regression?
I think piecewise regression means fitting several different lines at various cut points. It is not clear whether the number of cutoffs is prespecified and whether their locations are prespecified. Even if they are all prespecified it seems that each piece would be fit by ordinary regression and the problem of correlationed residuals and under or overestimated residual variance would be there in each line. There is also another assumption not stated. Are the residuals for each line assumed to have the same variance as for all the others? So the problem exists and may be worse in this more complex type of regression. Regarding IrishStat's comment. I think he is right for the piecewise model because the breakpoints could be time interventions that affect the stationary component and possibly also the nonstationary component of the model. But in ordinary harmonic regression the nonstationary seasonal component is modelled by sine and cosine functions of t and the residuals could be white noises or they could be model as a stationary process in time such as an AR(p) model. | Does autocorrelation cause bias in the regression parameters in piecewise regression?
I think piecewise regression means fitting several different lines at various cut points. It is not clear whether the number of cutoffs is prespecified and whether their locations are prespecified. |
54,567 | Probability distribution of Fourier coefficients | The complex Fourier coefficients of a random series form a 2-D normal distribution in the complex plane (a Gaussian rotated around zero). When taking the magnitude of the complex spectrum, at each magnitude $r$ (the distance from the origin) the probability density will be $\int 2 \pi r dr$, multiplied by the Gaussian, which gives (up to some constants) $r\times e^{-r^2}$. Incidentally, this does not depend on the shape of the original random distribution (from the central limit theorem, I guess). I tried in Matlab randn(...) (normally distributed), rand(...) (uniformly distributed) or rand(...)>.5 (zeros and ones only). Magic! | Probability distribution of Fourier coefficients | The complex Fourier coefficients of a random series form a 2-D normal distribution in the complex plane (a Gaussian rotated around zero). When taking the magnitude of the complex spectrum, at each mag | Probability distribution of Fourier coefficients
The complex Fourier coefficients of a random series form a 2-D normal distribution in the complex plane (a Gaussian rotated around zero). When taking the magnitude of the complex spectrum, at each magnitude $r$ (the distance from the origin) the probability density will be $\int 2 \pi r dr$, multiplied by the Gaussian, which gives (up to some constants) $r\times e^{-r^2}$. Incidentally, this does not depend on the shape of the original random distribution (from the central limit theorem, I guess). I tried in Matlab randn(...) (normally distributed), rand(...) (uniformly distributed) or rand(...)>.5 (zeros and ones only). Magic! | Probability distribution of Fourier coefficients
The complex Fourier coefficients of a random series form a 2-D normal distribution in the complex plane (a Gaussian rotated around zero). When taking the magnitude of the complex spectrum, at each mag |
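A quick R check of this claim (sample size and seed arbitrary): for white noise the squared DFT magnitudes, suitably scaled, look exponential, which is the same thing as the magnitudes being Rayleigh.
set.seed(1)
n <- 4096
x <- rnorm(n)                                    # also try runif(n) - 0.5 or rbinom(n, 1, 0.5) - 0.5
m2 <- Mod(fft(x)[2:(n / 2)])^2 / (n * var(x))    # drop the DC term; scaled to Exp(1)
ks.test(m2, "pexp")                              # large p-value: consistent with an exponential
hist(sqrt(m2), breaks = 50, freq = FALSE)        # Rayleigh-shaped, roughly r * exp(-r^2)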
54,568 | Probability distribution of Fourier coefficients | You may find this topic dealt with in Brillinger, D.R. Time Series Analysis and Theory, in Chapter 4, particularly Theorem 4.4.2. I think in your case the answer is that the Fourier coefficients will have asymptotically a complex normal distribution, as pointed in the response by @micork. This will be the case rather generally, the crucial assumption being that the dependence in the original time series is mild enough (for precise conditions, see op. cit. Assumption 2.6.1). | Probability distribution of Fourier coefficients | You may find this topic dealt with in Brillinger, D.R. Time Series Analysis and Theory, in Chapter 4, particularly Theorem 4.4.2. I think in your case the answer is that the Fourier coefficients will | Probability distribution of Fourier coefficients
You may find this topic dealt with in Brillinger, D.R. Time Series Analysis and Theory, in Chapter 4, particularly Theorem 4.4.2. I think in your case the answer is that the Fourier coefficients will have asymptotically a complex normal distribution, as pointed in the response by @micork. This will be the case rather generally, the crucial assumption being that the dependence in the original time series is mild enough (for precise conditions, see op. cit. Assumption 2.6.1). | Probability distribution of Fourier coefficients
You may find this topic dealt with in Brillinger, D.R. Time Series Analysis and Theory, in Chapter 4, particularly Theorem 4.4.2. I think in your case the answer is that the Fourier coefficients will |
54,569 | Significance of difference in means | They're probably not confidence intervals for the mean, but rather standard deviations from the data, just reported weirdly.
This interpretation is supported by the very confused presentation of results in the second quote. Fisher's exact test is a) not the same as a chi-squared test, b) almost certainly inappropriate given most sampling schemes (both margins are seldom fixed). And if they used both we'd want to know why. Neither test compares percentages, although probably results are ultimately reported in percentages. Finally, applying Student's t with such low counts seems risky at best.
From the other side of the paywall it's hard to say more, but I think you're seeing a weak analysis ambiguously presented. | Significance of difference in means | They're probably not confidence intervals for the mean, but rather standard deviations from the data, just reported weirdly.
This interpretation is supported by the very confused presentation of res | Significance of difference in means
They're probably not confidence intervals for the mean, but rather standard deviations from the data, just reported weirdly.
This interpretation is supported by the very confused presentation of results in the second quote. Fisher's exact test is a) not the same as a chi-squared test, b) almost certainly inappropriate given most sampling schemes (both margins are seldom fixed). And if they used both we'd want to know why. Neither test compares percentages, although probably results are ultimately reported in percentages. Finally, applying Students t with such low counts seems risky at best.
From the other side of the paywall its hard to say more, but I think you're seeing a weak analysis ambiguously presented. | Significance of difference in means
They're probably not confidence intervals for the mean, but rather standard deviations from the data, just reported weirdly.
This interpretation is supported by the very confused presentation of res |
54,570 | Significance of difference in means | The data aren't normal. I presume that the number of acute visits is an integer, i.e. you can't visit a patient 1.5 times. You either visit them once or twice.
As an example, here are some data:
Mean: 3.2 sd: 2.142
8 8 4 1 2 2 0 2 5 2 3 3 3 1 5 4 4 1 4 2
and
Mean:1.25 sd: 1.164
4 0 4 1 1 2 0 1 2 2 1 2 1 0 0 0 1 1 1 1
If you performed a t-test, you would get p=0.001 | Significance of difference in means | The data aren't normal. I presume that the number of acute visits is an integer, i.e. you can't visit a patient 1.5 times. You either visit them once or twice.
As an example, here are some data:
Mean | Significance of difference in means
The data aren't normal. I presume that the number of acute visits is an integer, i.e. you can't visit a patient 1.5 times. You either visit them once or twice.
As an example, here are some data:
Mean: 3.2 sd: 2.142
8 8 4 1 2 2 0 2 5 2 3 3 3 1 5 4 4 1 4 2
and
Mean:1.25 sd: 1.164
4 0 4 1 1 2 0 1 2 2 1 2 1 0 0 0 1 1 1 1
If you performed a t-test, you would get p=0.001 | Significance of difference in means
The data aren't normal. I presume that the number of acute visits is an integer, i.e. you can't visit a patient 1.5 times. You either visit them once or twice.
As an example, here are some data:
Mean |
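For completeness, the example above is easy to reproduce in R, along with alternatives that respect the count nature of the data:
a <- c(8, 8, 4, 1, 2, 2, 0, 2, 5, 2, 3, 3, 3, 1, 5, 4, 4, 1, 4, 2)
b <- c(4, 0, 4, 1, 1, 2, 0, 1, 2, 2, 1, 2, 1, 0, 0, 0, 1, 1, 1, 1)
t.test(a, b)                                # p is around 0.001, as stated
wilcox.test(a, b)                           # rank-based alternative (it will warn about ties)
summary(glm(c(a, b) ~ rep(c("grp1", "grp2"), each = 20), family = poisson))   # count model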
54,571 | Inverse hyperbolic sine transformation: estimation of theta | The basic idea is as follows,
You have the IHS transformation
$$z_j = g_j(y_j;\theta)= \operatorname{sinh}^{-1}(\theta y_j)/\theta,\,\,j=1,...,n.$$
Then you have to find the value of $\theta$ that maximises the concentrated log-likelihood
$$L(\theta) = -\dfrac{n}{2}\log[g(\theta)^TMg(\theta)] - \dfrac{1}{2}\sum_j\log(1+\theta^2 y_j^2),$$
where $g(\theta)=(g_1(y_1;\theta),...,g_n(y_n;\theta))$, $M = I - X(X^TX)^{-1}X^T,$ and $X$ is the matrix of explanatory variables.
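A minimal numerical sketch of this estimation in R (the simulated y and X, and the true value theta = 2, are purely illustrative): build the concentrated log-likelihood above and maximise it with optimize().
set.seed(1)
n <- 200
X <- cbind(1, rnorm(n))                          # design matrix including an intercept
y <- sinh(2 * (0.5 + X[, 2] + rnorm(n))) / 2     # generated so that the true theta is 2
M <- diag(n) - X %*% solve(crossprod(X)) %*% t(X)
loglik <- function(theta) {
  z <- asinh(theta * y) / theta                  # the IHS transform g(y; theta)
  -n / 2 * log(drop(t(z) %*% M %*% z)) - 0.5 * sum(log(1 + theta^2 * y^2))
}
optimize(loglik, c(1e-4, 10), maximum = TRUE)$maximum    # typically close to 2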
I hope this helps.
Ref: Alternative Transformations to Handle Extreme Values of the Dependent Variable
Author(s): John B. Burbidge, Lonnie Magee, A. Leslie Robb
Source: Journal of the American Statistical Association, Vol. 83, No. 401 (Mar., 1988), pp. 123-127 | Inverse hyperbolic sine transformation: estimation of theta | The basic idea is as follows,
You have the IHS transformation
$$z_j = g_j(y_j;\theta)= \operatorname{sinh}^{-1}(\theta y_j)/\theta,\,\,j=1,...,n.$$
Then you have to find the value of $\theta$ that max | Inverse hyperbolic sine transformation: estimation of theta
The basic idea is as follows,
You have the IHS transformation
$$z_j = g_j(y_j;\theta)= \operatorname{sinh}^{-1}(\theta y_j)/\theta,\,\,j=1,...,n.$$
Then you have to find the value of $\theta$ that maximises the concentrated log-likelihood
$$L(\theta) = -\dfrac{n}{2}\log[g(\theta)^TMg(\theta)] - \dfrac{1}{2}\sum_j\log(1+\theta^2 y_j^2),$$
where $g(\theta)=(g_1(y_1;\theta),...,g_n(y_n;\theta))$, $M = I - X(X^TX)^{-1}X^T,$ and $X$ is the matrix of explanatory variables.
I hope this helps.
Ref: Alternative Transformations to Handle Extreme Values of the Dependent Variable
Author(s): John B. Burbidge, Lonnie Magee, A. Leslie Robb
Source: Journal of the American Statistical Association, Vol. 83, No. 401 (Mar., 1988), pp. 123-127x | Inverse hyperbolic sine transformation: estimation of theta
The basic idea is as follows,
You have the IHS transformation
$$z_j = g_j(y_j;\theta)= \operatorname{sinh}^{-1}(\theta y_j)/\theta,\,\,j=1,...,n.$$
Then you have to find the value of $\theta$ that max |
54,572 | 2 period difference-in-differences fixed effects versus OLS | @Charlie is right. You only have two time periods, so there will inevitably be variation in the $i$-specific sample variances of $x_{it}$. In addition, even if you have programmed the simulation for there to be homogenous effects, due to small number of periods there will inevitably be some sample correlation between $x_{it}$ and, e.g., your error term, and so there will inevitably be some "effect heterogeneity" in the $i$-specific partial relationships between $x_{it}$ and $y_{it}$. The interaction of conditional variance and effect heterogeneity tilts your FE estimates of coefficients on $x_{it}$. The coefficient on $x_{it}$ is a precision-weighted average of the $i$-specific coefficients on $x_{it}$. A different tilting occurs when you fit OLS to the model that you have specified above: now, the coefficient on $x_{it}$ is a precision weighted average of the coefficients on $x_{it}$ for the those with $treatment_i=1$ and those with $treatment_i=0$. These differences propagate to your estimates of $\beta_3$. Think Frisch-Waugh-Lovell. To demonstrate the validity of Charlie's claim, simply generate $x_{it}$'s where the variance is exactly constant for each $i$, but you still have different patterns. E.g, randomly assign $i$'s to have either $(x_{i1}, x_{i2})=(0,1)$ or $(1,0)$. If you do this, you will see that the differences between the FE and OLS estimates disappears. | 2 period difference-in-differences fixed effects versus OLS | @Charlie is right. You only have two time periods, so there will inevitably be variation in the $i$-specific sample variances of $x_{it}$. In addition, even if you have programmed the simulation for | 2 period difference-in-differences fixed effects versus OLS
@Charlie is right. You only have two time periods, so there will inevitably be variation in the $i$-specific sample variances of $x_{it}$. In addition, even if you have programmed the simulation for there to be homogenous effects, due to small number of periods there will inevitably be some sample correlation between $x_{it}$ and, e.g., your error term, and so there will inevitably be some "effect heterogeneity" in the $i$-specific partial relationships between $x_{it}$ and $y_{it}$. The interaction of conditional variance and effect heterogeneity tilts your FE estimates of coefficients on $x_{it}$. The coefficient on $x_{it}$ is a precision-weighted average of the $i$-specific coefficients on $x_{it}$. A different tilting occurs when you fit OLS to the model that you have specified above: now, the coefficient on $x_{it}$ is a precision weighted average of the coefficients on $x_{it}$ for the those with $treatment_i=1$ and those with $treatment_i=0$. These differences propagate to your estimates of $\beta_3$. Think Frisch-Waugh-Lovell. To demonstrate the validity of Charlie's claim, simply generate $x_{it}$'s where the variance is exactly constant for each $i$, but you still have different patterns. E.g, randomly assign $i$'s to have either $(x_{i1}, x_{i2})=(0,1)$ or $(1,0)$. If you do this, you will see that the differences between the FE and OLS estimates disappears. | 2 period difference-in-differences fixed effects versus OLS
@Charlie is right. You only have two time periods, so there will inevitably be variation in the $i$-specific sample variances of $x_{it}$. In addition, even if you have programmed the simulation for |
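A sketch of that check in R (everything simulated, names illustrative): give every unit the same within-unit variation in x by assigning the pattern (0,1) or (1,0), and the pooled OLS and dummy-variable (within) estimates line up.
set.seed(1)
n <- 500
id    <- rep(1:n, each = 2)
post  <- rep(0:1, times = n)
treat <- rep(rbinom(n, 1, 0.5), each = 2)
x     <- ifelse(rep(rbinom(n, 1, 0.5), each = 2) == 1, post, 1 - post)   # (0,1) or (1,0)
y     <- rep(rnorm(n), each = 2) + post + 2 * treat * post + 0.5 * x + rnorm(2 * n)
ols <- lm(y ~ treat * post + x)                      # pooled OLS on the DiD specification
fe  <- lm(y ~ factor(id) + post + treat:post + x)    # unit dummies = within estimator
rbind(OLS = coef(ols)[c("x", "treat:post")],
      FE  = coef(fe)[c("x", "treat:post")])          # agree up to sampling noise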
54,573 | 2 period difference-in-differences fixed effects versus OLS | First, I'm not sure what you mean by "fixed effects" regression as compared to OLS. In econometrics, at least, the standard fixed effects model is estimated via OLS. I'm assuming that you run a regression using group means instead of individual data, but I'm not sure.
In your model without $x$, it is fully flexible: all combinations of year and treatment are given different expected values. You are making no linearity assumptions here. Once you add in $x$, you are assuming that the response of $y$ to $x$ is linear and that this response does not depend upon the values of the fixed effects; that is, there is no heterogeneity in the impact of $x$. If there is heterogeneity, you can get different results using different estimation procedures.
My coauthors and I discuss these issues in our paper, "Broken or Fixed Effects?". | 2 period difference-in-differences fixed effects versus OLS | First, I'm not sure what you mean by "fixed effects" regression as compared to OLS. In econometrics, at least, the standard fixed effects model is estimated via OLS. I'm assuming that you run a regres | 2 period difference-in-differences fixed effects versus OLS
First, I'm not sure what you mean by "fixed effects" regression as compared to OLS. In econometrics, at least, the standard fixed effects model is estimated via OLS. I'm assuming that you run a regression using group means instead of individual data, but I'm not sure.
In your model without $x$, it is fully flexible: all combinations of year and treatment are given different expected values. You are making no linearity assumptions here. Once you add in $x$, you are assuming that the response of $y$ to $x$ is linear and that this response does not depend upon the values of the fixed effects; that is, there is no heterogeneity in the impact of $x$. If there is heterogeneity, you can get different results using different estimation procedures.
My coauthors and I discuss these issues in our paper, "Broken or Fixed Effects?". | 2 period difference-in-differences fixed effects versus OLS
First, I'm not sure what you mean by "fixed effects" regression as compared to OLS. In econometrics, at least, the standard fixed effects model is estimated via OLS. I'm assuming that you run a regres |
54,574 | Is it appropriate to run correlations first and then a regression? | In my opinion, it is OK to inspect correlations first. In fact such exploratory data analysis is important, for one thing so that you know about any possible problems in advance with multicollinearity.
The best way to choose covariates in the first instance is by recourse to a priori understanding of the causal relations between the covariates and the outcome(s). A good way to do this is by drawing a causal path diagram or directed acyclic graph. This has the advantage of allowing the identification of potential confounding variables that should also be controlled for, but also the identification of a minimally sufficient set of covariates, to avoid over-adjustment (which can result in the reversal paradox). An excellent description of these pitfalls can be found here.
If you really have no a priori knowledge to help choose candidate covariates, then you are running the risk of choosing covariates on the basis of spurious correlations. In this case you can employ stepwise procedures to choose covariates, but this can result in inflated Type 1 errors (ie familywise error), and you also need to be very careful about multicollinearity. | Is it appropriate to run correlations first and then a regression? | In my opinion, it is OK to inspect correlations first. In fact such exploratory data analysis is important, for one thing so that you know about any possible problems in advance with multicollinearity | Is it appropriate to run correlations first and then a regression?
In my opinion, it is OK to inspect correlations first. In fact such exploratory data analysis is important, for one thing so that you know about any possible problems in advance with multicollinearity.
The best way to choose covariates in the first instance is by recourse to a priori understanding of the causal relations between the covariates and the outcome(s). A good way to do this is by drawing a causal path diagram or directed acyclic graph. This has the advantage of allowing the identification of potential confounding variables that should also be controlled for, but also the identification of a minimally sufficient set of covariates, to avoid over-adjustment (which can result in the reversal paradox). An excellent description of these pitfalls can be found here.
If you really have no a priori knowledge to help choose candidate covariates, then you are running the risk of choosing covariates on the basis of spurious correlations. In this case you can employ stepwise procedures to choose covariates, but this can result in inflated Type 1 errors (ie familywise error), and you also need to be very careful about multicollinearity. | Is it appropriate to run correlations first and then a regression?
In my opinion, it is OK to inspect correlations first. In fact such exploratory data analysis is important, for one thing so that you know about any possible problems in advance with multicollinearity |
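To make the causal-diagram step concrete, the dagitty package will report a minimally sufficient adjustment set for a DAG you write down (the little DAG below is invented purely as an illustration):
library(dagitty)
g <- dagitty("dag {
  exposure -> outcome
  confounder -> exposure
  confounder -> outcome
  exposure -> mediator
  mediator -> outcome
}")
adjustmentSets(g, exposure = "exposure", outcome = "outcome")
# returns { confounder }: adjust for the confounder, but not for the mediator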
54,575 | How to use statistics to help buy a house? | I'm pretty sure this is the marriage problem. The idea is: You need to find a spouse. Researching information about a spouse is hard, and you can only look at one at a time. After some time spent looking (which we assume is constant), you can estimate a SpouseValue, which is how happy you would be married to this person. Then you must either marry the candidate, or move on and look for someone new.
The one difference between this and the problem you specify is that in the marriage problem, you have a predefined maximum number of candidates $n$ (after all, you'll have to settle eventually!). Probably this applies to your housing problem too (after all, you need to live somewhere!).
The optimal policy for making a decision under this condition is to first assess $\frac{n}{e}$ of the applicants (where $e$ is the mathematical constant), accepting none of them. Then keep interviewing. For each new candidate, determine if they are the best one seen so far. If they are, stop. This is your spouse (or house). Otherwise, keep going until you have to settle.
That's it. Keep in mind that the policy is optimal, but not guaranteed to pick the best candidate. You get the best one about a third of the time though, even for large $n$, so it's not half bad.
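A ten-line simulation backs this up (candidate values are just random draws; n = 50 is arbitrary):
secretary <- function(n) {
  v <- rnorm(n)                            # quality of each house/candidate, seen in order
  k <- floor(n / exp(1))                   # size of the look-only phase
  pick <- which(v[(k + 1):n] > max(v[1:k]))[1] + k
  if (is.na(pick)) pick <- n               # nobody beat the benchmark, so settle for the last
  v[pick] == max(v)                        # TRUE if we ended up with the overall best
}
mean(replicate(10000, secretary(50)))      # roughly 0.37, i.e. about 1/e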
EDIT: Also, this assumes you care more about finding the best house than about the cost of looking... | How to use statistics to help buy a house? | I'm pretty sure this is the marriage problem. The idea is: You need to find a spouse. Researching information about a spouse is hard, and you can only look at one at a time. After some time spent look | How to use statistics to help buy a house?
I'm pretty sure this is the marriage problem. The idea is: You need to find a spouse. Researching information about a spouse is hard, and you can only look at one at a time. After some time spent looking (which we assume is constant), you can estimate a SpouseValue, which is how happy you would be married to this person. Then you must either marry the candidate, or move on and look for someone new.
The one difference between this and the problem you specify is that in the marriage problem, you have a predefined maximum number of candidates $n$ (after all, you'll have to settle eventually!). Probably this applies to your housing problem too (after all, you need to live somewhere!).
The optimal policy for making a decision under this condition is to first assess $\frac{n}{e}$ (that is $e$ the numeric constant) applicants at random, and accept none of them. Then keep interviewing. For each new candidate, determine if they are the best one seen so far. If they are, stop. This is your spouse (or house). Otherwise, keep going until you have to settle.
That's it. Keep in mind that the policy is optimal, but not guaranteed to pick the best candidate. You get the best one about a third of the time though, even for large $n$, so it's not half bad.
EDIT: Also, this assumes you care more about finding the best house than about the cost of looking... | How to use statistics to help buy a house?
I'm pretty sure this is the marriage problem. The idea is: You need to find a spouse. Researching information about a spouse is hard, and you can only look at one at a time. After some time spent look |
54,576 | Estimating the mutual information for two signal samples | I cannot immediately see a bug in your program. However, I see a few things that might alter the outcome of your results.
First of all, I would use $\sum_{ij} p_{ij} \left(\log p_{ij} - \log p_i - \log p_j\right)$ instead of $\sum_{ij} p_{ij} \log \frac{p_{ij}}{p_i\cdot p_j}$ for numerical stability. As a general rule, if you have to multiply probabilities, it is better to work in the log-domain.
Second, since you use different kernel density estimators for the marginals and the joint distribution, they might not be consistent. Let $p(x,y)$ be your joint estimate and $q(x)$ the estimate of one of your marginals. Since $q$ is the marginal of the joint distribution it must hold that $\int p(x,y) dy = p(x) = q(x)$. This is not necessarily the case since you used two different estimators.
Another thing that might get you in trouble is that you interpolate. If you interpolate a density, it will most certainly not integrate to one anymore (that, of course, depends on the type of interpolation you use). This means that you don't even use proper densities anymore.
The easiest solution you could just try out is to use a 2d histogram $H_{ij}$. Let $h_{ij}=H_{ij}/\sum_{ij}H_{ij}$ be the normalized histogram. Then you get the marginals via $h_i = \sum_j h_{ij}$ and $h_j = \sum_i h_{ij}$. The mutual information can then be computed via
lh1 = log(sum(h,1));
lh2 = log(sum(h,2));
I = sum(sum(h .* bsxfun(@minus,bsxfun(@minus,log(h),lh1),lh2) ));
For an increasing number of datapoints and smaller bins, this should converge to the correct value. In your case, I would try different bin sizes, compute the MI, and take a bin size from the region where the value of the MI seems stable. Alternatively, you could use the heuristic by
Scott, D. W. (1979). On optimal and data-based histograms. Biometrika, 66(3), 605-610. doi:10.1093/biomet/66.3.605
to choose the bin size.
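The same histogram recipe in R, checked against a case where the answer is known (a bivariate Gaussian with correlation rho has mutual information -0.5 * log(1 - rho^2)); the sample size and bin count below are arbitrary:
set.seed(1)
rho <- 0.8; n <- 1e5; nbins <- 50
x <- rnorm(n); y <- rho * x + sqrt(1 - rho^2) * rnorm(n)
h  <- table(cut(x, nbins), cut(y, nbins)) / n     # normalised 2-d histogram
hx <- rowSums(h); hy <- colSums(h)                # marginals taken from the same histogram
nz <- h > 0                                       # skip empty cells (0 * log 0)
mi <- sum(h[nz] * (log(h[nz]) - log(hx[row(h)[nz]]) - log(hy[col(h)[nz]])))
c(estimate = mi, true = -0.5 * log(1 - rho^2))    # agree reasonably well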
If you really want to use kernel density estimators, I would adapt the approach from above and
compute the marginals from the joint estimation
not use interpolation. As I understand it, the KDE gives you the values of the density at arbitrary base points anyway. Better to use those.
One final note, while the MI converges to the true MI with the histogram approach, the entropy does not. So if you want to estimate the differential entropy with a histogram approach, you have to correct for the bin size. | Estimating the mutual information for two signal samples | I cannot immediately see a bug in your program. However, I see I few things that might alter the outcome of our results.
First of all, I would use $\sum_{ij} p_{ij} \left(\log p_{ij} - \log p_i - \lo | Estimating the mutual information for two signal samples
I cannot immediately see a bug in your program. However, I see I few things that might alter the outcome of our results.
First of all, I would use $\sum_{ij} p_{ij} \left(\log p_{ij} - \log p_i - \log p_j\right)$ instead of $\sum_{ij} p_{ij} \log \frac{p_{ij}}{p_i\cdot p_j}$ for numerical stability. As a general rule, if you have to multiply probabilities, it is better to work in the log-domain.
Second, since you use different kernel density estimators for the marginals and the joint distribution, they might not be consistent. Let $p(x,y)$ be your joint estimate and $q(x)$ the estimate of one of your marginals. Since $q$ is the marginal of the joint distribution it must hold that $\int p(x,y) dy = p(x) = q(x)$. This is not necessarily the case since you used two different estimators.
Another thing that might get you in trouble is that you interpolate. If you interpolate a density, it will most certainly not integrate to one anymore (that, of course, depends on the type of interpolation you use). This means that you don't even use proper densities anymore.
The easiest solution you could just try out is to use a 2d histogram $H_{ij}$. Let $h_{ij}=H_{ij}/\sum_{ij}H_{ij}$ be the normalized histogram. Then you get the marginals via $h_i = \sum_j h_{ij}$ and $h_j = \sum_i h_{ij}$. The mutual information can then be computed via
lh1 = log(sum(h,1));
lh2 = log(sum(h,2));
I = sum(sum(h .* bsxfun(@minus,bsxfun(@minus,log(h),lh1),lh2) ));
For increasing number of datapoints and smaller bins, this should converge to the correct value. In our case, I would try to use different bin sizes, compute the MI and take a bin size from the region where the value of the MI seems stable. Alternatively, you could use the heuristic by
Scott, D. W. (1979). On optimal and data-based histograms. Biometrika, 66(3), 605-610. doi:10.1093/biomet/66.3.605
to choose the bin size.
If you really want to use kernel density estimators, I would adapt the approach from above and
compute the marginals from the joint estimation
not use interpolation. As I understand, the KDE give you the values of the density at arbitrary base points anyway. Better use those.
One final note, while the MI converges to the true MI with the histogram approach, the entropy does not. So if you want to estimate the differential entropy with a histogram approach, you have to correct for the bin size. | Estimating the mutual information for two signal samples
I cannot immediately see a bug in your program. However, I see I few things that might alter the outcome of our results.
First of all, I would use $\sum_{ij} p_{ij} \left(\log p_{ij} - \log p_i - \lo |
54,577 | Logistic regression: Why we don't plot the residuals against the fitted values? | With this diagnostic plot, we're just looking at the residuals to see if anything leaps out at us - a clump of outliers, or, as happens with this data, a clear separation of the residuals into groups. It's merely one of several diagnostic plots you can, and should, do. We might suspect the two groups correspond to sex, then plot the residuals vs sex:
plot(residuals(result,type="pearson") ~ donner$sex,
main="pearson residual vs sex plot")
which would tell us everything we wanted to know; plotting residuals vs. the explanatory variables (well, in this case a left-out variable) can inform us about possible nonlinearities and other problems with the model.
I suspect you were given this as part of an exercise in using diagnostic plots as tools to help indicate potential model improvements, rather than as a sort of sine qua non of diagnostic plots - although it's definitely a useful plot in its own right. | Logistic regression: Why we don't plot the residuals against the fitted values? | With this diagnostic plot, we're just looking at the residuals to see if anything leaps out at us - a clump of outliers, or, as happens with this data, a clear separation of the residuals into groups. | Logistic regression: Why we don't plot the residuals against the fitted values?
With this diagnostic plot, we're just looking at the residuals to see if anything leaps out at us - a clump of outliers, or, as happens with this data, a clear separation of the residuals into groups. It's merely one of several diagnostic plots you can, and should, do. We might suspect the two groups correspond to sex, then plot the residuals vs sex:
plot(residuals(result,type="pearson") ~ donner$sex,
main="pearson residual vs sex plot")
which would tell us everything we wanted to know; plotting residuals vs. the explanatory variables (well, in this case a left-out variable) can inform us about possible nonlinearities and other problems with the model.
I suspect you were given this as part of an exercise in using diagnostic plots as tools to help indicate potential model improvements, rather than as a sort of sine qua non of diagnostic plots - although it's definitely a useful plot in its own right. | Logistic regression: Why we don't plot the residuals against the fitted values?
With this diagnostic plot, we're just looking at the residuals to see if anything leaps out at us - a clump of outliers, or, as happens with this data, a clear separation of the residuals into groups. |
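The same pattern is easy to reproduce with simulated data if the Donner data are not to hand (all names and coefficients below are made up): leave a binary covariate out of the model and its two groups separate in the Pearson residuals.
set.seed(1)
n <- 300
age <- runif(n, 15, 70)
sex <- rbinom(n, 1, 0.5)
surv <- rbinom(n, 1, plogis(2 - 0.06 * age + 1.5 * sex))
fit <- glm(surv ~ age, family = binomial)              # sex deliberately left out
plot(fitted(fit), residuals(fit, type = "pearson"), col = sex + 1,
     xlab = "fitted value", ylab = "Pearson residual")
boxplot(residuals(fit, type = "pearson") ~ sex)        # the two groups tend to separate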
54,578 | How to generate random variates from random variables with known density? | If you know the pdf's for both, and the distribution from which you can sample, $f(x)$, encloses the distribution from which you want to sample, $g(x)$ (or can be made to do so by multiplying the likelihoods by some constant $c$), you can use an accept-reject algorithm. The gist of this approach is as follows:
Draw a value from $f(x)$
At that x-value, form a ratio, $r=g(x)/f(x)$
Draw a value, $u$, from a uniform distribution on the interval (0,1)
If $u\le r$, then accept that $x$ and store it
If $u>r$, then reject that $x$ and start over
Continue until you have $N$ realized values
Note that accept-reject algorithms are notoriously sluggish, even if you end up accepting all x-values, there are several extra steps for each draw. To optimize the performance of this approach, try to pick an $f(x)$ that is as close to (i.e., as little above) $g(x)$ as possible, so that you accept as high a percentage as you can. | How to generate random variates from random variables with known density? | If you know the pdf's for both, and the distribution from which you can sample, $f(x)$, encloses the distribution from which you want to sample, $g(x)$ (or can be made to do so by multiplying the like | How to generate random variates from random variables with known density?
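Putting those steps into R (the Beta(2, 5) target and uniform proposal are arbitrary choices for illustration):
set.seed(1)
g <- function(x) dbeta(x, 2, 5)                   # density we want to sample from
f <- function(x) dunif(x)                         # density we can sample from
cc <- optimize(function(x) g(x) / f(x), c(0, 1), maximum = TRUE)$objective   # envelope constant
draws <- replicate(10000, {
  repeat {
    x <- runif(1)                                 # step 1: draw from f
    if (runif(1) <= g(x) / (cc * f(x))) break     # steps 2-5: accept, otherwise start over
  }
  x
})
hist(draws, breaks = 50, freq = FALSE); curve(g(x), add = TRUE)   # matches the target density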
If you know the pdf's for both, and the distribution from which you can sample, $f(x)$, encloses the distribution from which you want to sample, $g(x)$ (or can be made to do so by multiplying the likelihoods by some constant $c$), you can use an accept-reject algorithm. The gist of this approach is as follows:
Draw a value from $f(x)$
At that x-value, form a ratio, $r=g(x)/f(x)$
Draw a value, $u$, from a uniform distribution on the interval (0,1)
If $u\le r$, then accept that $x$ and store it
If $u>r$, then reject that $x$ and start over
Continue until you have $N$ realized values
Note that accept-reject algorithms are notoriously sluggish, even if you end up accepting all x-values, there are several extra steps for each draw. To optimize the performance of this approach, try to pick an $f(x)$ that is as close to (i.e., as little above) $g(x)$ as possible, so that you accept as high a percentage as you can. | How to generate random variates from random variables with known density?
If you know the pdf's for both, and the distribution from which you can sample, $f(x)$, encloses the distribution from which you want to sample, $g(x)$ (or can be made to do so by multiplying the like |
54,579 | Multiple regression and OLS. How to choose the best "non-linear" specification? | I've become rather enamoured of late with generalized additive modelling to handle non-linearity. The gam() function from the mgcv package for R makes things very easy as it incorporates automated generalized cross-validation to avoid overfitting. | Multiple regression and OLS. How to choose the best "non-linear" specification? | I've become rather enamoured of late with generalized additive modelling to handle non-linearity. The gam() function from the mgcv package for R makes things very easy as it incorporates automated gen | Multiple regression and OLS. How to choose the best "non-linear" specification?
I've become rather enamoured of late with generalized additive modelling to handle non-linearity. The gam() function from the mgcv package for R makes things very easy as it incorporates automated generalized cross-validation to avoid overfitting. | Multiple regression and OLS. How to choose the best "non-linear" specification?
I've become rather enamoured of late with generalized additive modelling to handle non-linearity. The gam() function from the mgcv package for R makes things very easy as it incorporates automated gen |
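For instance (simulated data, arbitrary smooth shapes):
library(mgcv)
set.seed(1)
d <- data.frame(x1 = runif(300), x2 = runif(300))
d$y <- sin(2 * pi * d$x1) + d$x2^2 + rnorm(300, sd = 0.3)
fit <- gam(y ~ s(x1) + s(x2), data = d)    # amount of wiggliness chosen automatically
summary(fit)                               # effective degrees of freedom of each smooth
plot(fit, pages = 1)                       # the estimated smooth functions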
54,580 | Multiple regression and OLS. How to choose the best "non-linear" specification? | I've never heard of gretl but a parametric version of the excellent gam suggestion by Mike is to use additive regressions such as restricted cubic splines (natural splines). R and Stata make this easy to do. With regression splines (piecewise polynomials) you can model almost any relationship that is fairly smooth, and you still get all the advantages of ordinary models (confidence limits, predictions, formulas, etc.). A good default strategy is to figure out what complexity the sample size and signal:noise ratio will support, translate that to the number of knots (join points) in the spline functions, and to fit those functions without later trying to simplify the model. In R that would look like
require(rms)
f <- ols(y ~ rcs(x1,4) + rcs(x2,4)) # 4 knots for x1,x2 at default locations
Very few relationships in nature are linear, so it is good to learn about flexible nonlinear modeling. | Multiple regression and OLS. How to choose the best "non-linear" specification? | I've never heard of gretl but a parametric version of the excellent gam suggestion by Mike is to use additive regressions such as restricted cubic splines (natural splines). R and Stata make this eas | Multiple regression and OLS. How to choose the best "non-linear" specification?
I've never heard of gretl but a parametric version of the excellent gam suggestion by Mike is to use additive regressions such as restricted cubic splines (natural splines). R and Stata make this easy to do. With regression splines (piecewise polynomials) you can model almost any relationship that is fairly smooth, and you still get all the advantages of ordinary models (confidence limits, predictions, formulas, etc.). A good default strategy is to figure out what complexity the sample size and signal:noise ratio will support, translate that to the number of knots (join points) in the spline functions, and to fit those functions without later trying to simplify the model. In R that would look like
require(rms)
f <- ols(y ~ rcs(x1,4) + rcs(x2,4)) # 4 knots for x1,x2 at default locations
Very few relationships in nature are linear, so it is good to learn about flexible nonlinear modeling. | Multiple regression and OLS. How to choose the best "non-linear" specification?
I've never heard of gretl but a parametric version of the excellent gam suggestion by Mike is to use additive regressions such as restricted cubic splines (natural splines). R and Stata make this eas |
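To make the rcs() snippet above self-contained, here is a hedged expansion on simulated data; the datadist() setup and all variable names are additions for illustration, not part of the original answer:
library(rms)
set.seed(2)
n  <- 300
x1 <- runif(n, 0, 10)
x2 <- rnorm(n)
y  <- log1p(x1) + 0.5 * x2 + rnorm(n)
dd <- datadist(x1, x2); options(datadist = "dd")  # needed for Predict() and plotting
f  <- ols(y ~ rcs(x1, 4) + rcs(x2, 4))            # 4 knots each at default locations
anova(f)          # includes tests of nonlinearity for each spline term
plot(Predict(f))  # partial-effect curves with confidence bands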
54,581 | How to perform model selection in GEE in R | If you want to select amongst pre-specified models, this should work the same with GEE as elsewhere. For example, if you were comparing a nested model to a full model, you could test that. If the models weren't nested, you could use an informational criterion (such as the QIC) to help adjudicate between them. Another approach is to use the Parametric Bootstrap Cross-fitting Method; this is a very solid approach, but computationally very expensive. | How to perform model selection in GEE in R | If you want to select amongst pre-specified models, this should work the same with GEE as elsewhere. For example, if you were comparing a nested model to a full model, you could test that. If the mo | How to perform model selection in GEE in R
If you want to select amongst pre-specified models, this should work the same with GEE as elsewhere. For example, if you were comparing a nested model to a full model, you could test that. If the models weren't nested, you could use an informational criterion (such as the QIC) to help adjudicate between them. Another approach is to use the Parametric Bootstrap Cross-fitting Method; this is a very solid approach, but computationally very expensive. | How to perform model selection in GEE in R
If you want to select amongst pre-specified models, this should work the same with GEE as elsewhere. For example, if you were comparing a nested model to a full model, you could test that. If the mo |
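For the nested-model comparison mentioned above, a hedged geepack sketch; the data frame dat with outcome y, covariates x1 and x2, and cluster identifier id is hypothetical:
library(geepack)
fit_full <- geeglm(y ~ x1 + x2, id = id, data = dat,
                   family = gaussian, corstr = "exchangeable")
fit_red  <- geeglm(y ~ x1, id = id, data = dat,
                   family = gaussian, corstr = "exchangeable")
anova(fit_red, fit_full)  # Wald test of the terms dropped from the full model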
54,582 | How to perform model selection in GEE in R | [UPDATE: I improved on the code below and made a small R package hosted on GitHub: https://github.com/djhocking/qicpack]
I figured out a solution for calculating QIC from geepack package output. My code is below. This is one of the first functions I've ever written, so I apologize if it's messy but hopefully others find it useful. I definitely recommend reading gung's thoughts on model selection (linked above) before using this or any other information criterion model selection techniques (e.g. AIC, BIC, DIC). Also much of this code was pieced together from other sources, which I tried to reference at the start. I also received valuable input from Jun Yan, the geepack author.
######################################################################################
# QIC for GEE models
# Daniel J. Hocking
# 07 February 2012
# Refs:
# Pan (2001)
# Liang and Zeger (1986)
# Zeger and Liang (1986)
# Hardin and Hilbe (2003)
# Dormann et al. 2007
# http://www.unc.edu/courses/2010spring/ecol/562/001/docs/lectures/lecture14.htm
######################################################################################
# Poisson QIC for geeglm{geepack} output
# Ref: Pan (2001)
QIC.pois.geese <- function(model.R, model.indep) {
library(MASS)
# Fitted and observed values for quasi likelihood
mu.R <- model.R$fitted.values
# alt: X <- model.matrix(model.R)
# names(model.R$coefficients) <- NULL
# beta.R <- model.R$coefficients
# mu.R <- exp(X %*% beta.R)
y <- model.R$y
# Quasi Likelihood for Poisson
quasi.R <- sum((y*log(mu.R)) - mu.R) # poisson()$dev.resids - scale and weights = 1
# Trace Term (penalty for model complexity)
AIinverse <- ginv(model.indep$geese$vbeta.naiv) # Omega-hat(I) via Moore-Penrose generalized inverse of a matrix in MASS package
# Alt: AIinverse <- solve(model.indep$geese$vbeta.naiv) # solve via identity
Vr <- model.R$geese$vbeta
trace.R <- sum(diag(AIinverse %*% Vr))
px <- length(model.R$coefficients) # number of non-redundant columns in the design matrix (regression parameters)
# QIC
QIC <- (-2)*quasi.R + 2*trace.R
QICu <- (-2)*quasi.R + 2*px # Approximation assuming model structured correctly
output <- c(QIC, QICu, quasi.R, trace.R, px)
names(output) <- c('QIC', 'QICu', 'Quasi Lik', 'Trace', 'px')
output
} | How to perform model selection in GEE in R | [UPDATE: I improved on the code below and made a small R package hosted on GitHub: https://github.com/djhocking/qicpack]
I figured out a solution for calculating QIC from geepack package output. My c | How to perform model selection in GEE in R
[UPDATE: I improved on the code below and made a small R package hosted on GitHub: https://github.com/djhocking/qicpack]
I figured out a solution for calculating QIC from geepack package output. My code is below. This is one of the first functions I've ever written, so I apologize if it's messy but hopefully others find it useful. I definitely recommend reading gung's thoughts on model selection (linked above) before using this or any other information criterion model selection techniques (e.g. AIC, BIC, DIC). Also much of this code was pieced together from other sources, which I tried to reference at the start. I also received valuable input from Jun Yan, the geepack author.
######################################################################################
# QIC for GEE models
# Daniel J. Hocking
# 07 February 2012
# Refs:
# Pan (2001)
# Liang and Zeger (1986)
# Zeger and Liang (1986)
# Hardin and Hilbe (2003)
# Dormann et al. 2007
# http://www.unc.edu/courses/2010spring/ecol/562/001/docs/lectures/lecture14.htm
######################################################################################
# Poisson QIC for geeglm{geepack} output
# Ref: Pan (2001)
QIC.pois.geese <- function(model.R, model.indep) {
library(MASS)
# Fitted and observed values for quasi likelihood
mu.R <- model.R$fitted.values
# alt: X <- model.matrix(model.R)
# names(model.R$coefficients) <- NULL
# beta.R <- model.R$coefficients
# mu.R <- exp(X %*% beta.R)
y <- model.R$y
# Quasi Likelihood for Poisson
quasi.R <- sum((y*log(mu.R)) - mu.R) # poisson()$dev.resids - scale and weights = 1
# Trace Term (penalty for model complexity)
AIinverse <- ginv(model.indep$geese$vbeta.naiv) # Omega-hat(I) via Moore-Penrose generalized inverse of a matrix in MASS package
# Alt: AIinverse <- solve(model.indep$geese$vbeta.naiv) # solve via identity
Vr <- model.R$geese$vbeta
trace.R <- sum(diag(AIinverse %*% Vr))
px <- length(model.R$coefficients) # number of non-redundant columns in the design matrix (regression parameters)
# QIC
QIC <- (-2)*quasi.R + 2*trace.R
QICu <- (-2)*quasi.R + 2*px # Approximation assuming model structured correctly
output <- c(QIC, QICu, quasi.R, trace.R, px)
names(output) <- c('QIC', 'QICu', 'Quasi Lik', 'Trace', 'px')
output
} | How to perform model selection in GEE in R
[UPDATE: I improved on the code below and made a small R package hosted on GitHub: https://github.com/djhocking/qicpack]
I figured out a solution for calculating QIC from geepack package output. My c |
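A hedged usage sketch for the QIC.pois.geese() function above; the data frame dat with a count outcome y, covariates x1 and x2, and cluster identifier id is hypothetical:
library(geepack)
fit_exch  <- geeglm(y ~ x1 + x2, id = id, data = dat,
                    family = poisson, corstr = "exchangeable")
fit_indep <- geeglm(y ~ x1 + x2, id = id, data = dat,
                    family = poisson, corstr = "independence")
QIC.pois.geese(fit_exch, fit_indep)  # the second argument supplies Omega-hat(I)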
54,583 | How to perform model selection in GEE in R | You can use the model.sel command from the MuMIn package:
library(MuMIn)
model.sel(gee.0, gee.1, gee.2, gee.3, rank = QIC)
This ranks the models by QIC (the quasi-likelihood under the independence model information criterion)--the smaller the better! | How to perform model selection in GEE in R | You can use the model.sel command from the MuMIn package:
library(MuMIn)
model.sel(gee.0, gee.1, gee.2, gee.3, rank = QIC)
This uses MSE of prediction for model selection (Mean square error of predic | How to perform model selection in GEE in R
You can use the model.sel command from the MuMIn package:
library(MuMIn)
model.sel(gee.0, gee.1, gee.2, gee.3, rank = QIC)
This ranks the models by QIC (the quasi-likelihood under the independence model information criterion)--the smaller the better! | How to perform model selection in GEE in R
You can use the model.sel command from the MuMIn package:
library(MuMIn)
model.sel(gee.0, gee.1, gee.2, gee.3, rank = QIC)
This uses MSE of prediction for model selection (Mean square error of predic |
54,584 | Convergence results for block-gibbs sampling? | Much faster than what? Univariate Gibbs sampling?
The two stage Gibbs sampling is certainly the most studied type of Gibbs sampling starting with Tanner and Wong (1987, JASA). There is in particular a very thorough paper by Liu, Wong and Kong (1994, Biometrika), which shows that the correlation between the $X_t$'s (and the $Y_t$'s) is (a) positive and (b) going down to zero monotonically.
Blocked Gibbs sampling is usually more efficient than one-at-a-time Gibbs sampling but I do not know of a general result that would say so. In particular, augmenting the dimension with auxiliary variables may improve convergence, see the recent work of Xiao-Li Meng in JCGS as an illustration.
Here is an entry on this other forum that brings additional references. | Convergence results for block-gibbs sampling? | Much faster than what? Univariate Gibbs sampling?
The two stage Gibbs sampling is certainly the most studied type of Gibbs sampling starting with Tanner and Wong (1987, JASA). There is in particular a | Convergence results for block-gibbs sampling?
Much faster than what? Univariate Gibbs sampling?
The two stage Gibbs sampling is certainly the most studied type of Gibbs sampling starting with Tanner and Wong (1987, JASA). There is in particular a very thorough paper by Liu, Wong and Kong (1994, Biometrika), which shows that the correlation between the $X_t$'s (and the $Y_t$'s) is (a) positive and (b) going down to zero monotonically.
Blocked Gibbs sampling is usually more efficient than one-at-a-time Gibbs sampling but I do not know of a general result that would say so. In particular, augmenting the dimension with auxiliary variables may improve convergence, see the recent work of Xiao-Li Meng in JCGS as an illustration.
Here is an entry on this other forum that brings additional references. | Convergence results for block-gibbs sampling?
Much faster than what? Univariate Gibbs sampling?
The two stage Gibbs sampling is certainly the most studied type of Gibbs sampling starting with Tanner and Wong (1987, JASA). There is in particular a |
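As a toy illustration of the two-stage Gibbs sampler discussed above, here is a sketch for a bivariate normal with correlation rho; the target and rho = 0.8 are illustrative choices:
set.seed(42)
rho <- 0.8; n_iter <- 5000
x <- y <- numeric(n_iter)
for (t in 2:n_iter) {
  x[t] <- rnorm(1, mean = rho * y[t - 1], sd = sqrt(1 - rho^2))  # draw X | Y
  y[t] <- rnorm(1, mean = rho * x[t],     sd = sqrt(1 - rho^2))  # draw Y | X
}
acf(x)  # the theory cited above says this autocorrelation is positive and decays to zero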
54,585 | Jaccard Indexes and PCA | the Jaccard index is a positive definite kernel as can be checked in A Short Tour of Kernel Methods for Graphs, by Gärtner, Le, and Smola; see definition 1.4 and references below.
Doing a PCA on a matrix of Jaccard similarities is akin to doing kernel PCA,
that is doing PCA in the reproducing kernel Hilbert space of functions (on sets) induced by the Jaccard similarity (or better said, kernel). There's a relatively good understanding of such a method for data analysis. | Jaccard Indexes and PCA | the Jaccard index is a positive definite kernel as can be checked in A Short Tour of Kernel Methods for Graphs, by Gärtner, Le, and Smola; see definition 1.4 and references below.
Doing a PCA on a mat | Jaccard Indexes and PCA
the Jaccard index is a positive definite kernel as can be checked in A Short Tour of Kernel Methods for Graphs, by Gärtner, Le, and Smola; see definition 1.4 and references below.
Doing a PCA on a matrix of Jaccard similarities is akin to doing kernel PCA,
that is doing PCA in the reproducing kernel Hilbert space of functions (on sets) induced by the Jaccard similarity (or better said, kernel). There's a relatively good understanding of such a method for data analysis. | Jaccard Indexes and PCA
the Jaccard index is a positive definite kernel as can be checked in A Short Tour of Kernel Methods for Graphs, by Gärtner, Le, and Smola; see definition 1.4 and references below.
Doing a PCA on a mat |
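A hedged sketch of kernel PCA on a Jaccard similarity matrix; the binary data are simulated, and the double-centring step is the standard kernel-PCA recipe rather than anything specific to this answer:
set.seed(7)
B <- matrix(rbinom(30 * 12, 1, 0.4), nrow = 30)  # 30 items x 12 binary features
B[rowSums(B) == 0, 1] <- 1                       # guard: Jaccard is undefined for empty sets
jac <- function(a, b) sum(a & b) / sum(a | b)
n <- nrow(B)
K <- outer(1:n, 1:n, Vectorize(function(i, j) jac(B[i, ], B[j, ])))
J <- diag(n) - matrix(1 / n, n, n)  # centring matrix
Kc <- J %*% K %*% J                 # double-centred kernel
e  <- eigen(Kc, symmetric = TRUE)
scores <- e$vectors[, 1:2] %*% diag(sqrt(pmax(e$values[1:2], 0)))
plot(scores, xlab = "kernel PC1", ylab = "kernel PC2")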
54,586 | Jaccard Indexes and PCA | Linear Principal Component or Factor analyses are based on linear regression model and this implies that the input similarities must be covariances, correlations, cosines, or sum-of-cross-products (all these similarities are known as scalar products). You may input any other sort of similarity, such as Jaccard measure or Kendall correlation, but only keeping in mind that the analysis will "think" it is scalar product, i.e. usual Pearson correlation or cosine, in this case.
When applied to true Pearson correlations or other type of scalar product, PCA reduces dimensionality with minimal distortion of data cloud's shape in terms of sum of squared euclidean distances between the data points. With Jaccard measure or such, you can't say that PCA reduces dimensionality with the above mentioned objective function. | Jaccard Indexes and PCA | Linear Principal Component or Factor analyses are based on linear regression model and this implies that the input similarities must be covariances, correlations, cosines, or sum-of-cross-products (al | Jaccard Indexes and PCA
Linear Principal Component or Factor analyses are based on linear regression model and this implies that the input similarities must be covariances, correlations, cosines, or sum-of-cross-products (all these similarities are known as scalar products). You may input any other sort of similarity, such as Jaccard measure or Kendall correlation, but only keeping in mind that the analysis will "think" it is scalar product, i.e. usual Pearson correlation or cosine, in this case.
When applied to true Pearson correlations or other type of scalar product, PCA reduces dimensionality with minimal distortion of data cloud's shape in terms of sum of squared euclidean distances between the data points. With Jaccard measure or such, you can't say that PCA reduces dimensionality with the above mentioned objective function. | Jaccard Indexes and PCA
Linear Principal Component or Factor analyses are based on linear regression model and this implies that the input similarities must be covariances, correlations, cosines, or sum-of-cross-products (al |
54,587 | Jaccard Indexes and PCA | In PCA, we attempt to "concisely" explain the variation in POSSIBLY CORRELATED data using principal components which are pairwise orthogonal to each other. The variation in data is represented by the variance-covariance matrix.
On the other hand, a Jaccard index is a similarity coefficient. Similarity and correlation are pretty different concepts. According to your description, if we take a matrix of Jaccard indices, the eigenvectors will be orthogonal to each other - that is fine - but will we be in a position to say what fraction of variation is explained by a given "Jaccard PC", so to speak? In the case of a regular PC, we can surely say what fraction of the variation in the data is represented by a given PC, which is nothing but the ratio of the corresponding eigenvalue to the sum of all eigenvalues.
To summarize, the fundamentally different concepts behind the Jaccard index make it not sensible for use in PCA. | Jaccard Indexes and PCA | In PCA, we attempt to "concisely" explain the variation in POSSIBLY CORRELATED data using principal components which are pairwise orthogonal to each other. The variation in data is represented by the | Jaccard Indexes and PCA
In PCA, we attempt to "concisely" explain the variation in POSSIBLY CORRELATED data using principal components which are pairwise orthogonal to each other. The variation in data is represented by the variance-covariance matrix.
On the other hand, a Jaccard index is a similarity coefficient. Similarity and correlation are pretty different concepts. According to your description, if we take a matrix of Jaccard indices, the eigenvectors will be orthogonal to each other - that is fine - but will we be in a position to say what fraction of variation is explained by a given "Jaccard PC", so to speak? In the case of a regular PC, we can surely say what fraction of the variation in the data is represented by a given PC, which is nothing but the ratio of the corresponding eigenvalue to the sum of all eigenvalues.
To summarize, the fundamentally different concepts behind the Jaccard index make it not sensible for use in PCA. | Jaccard Indexes and PCA
In PCA, we attempt to "concisely" explain the variation in POSSIBLY CORRELATED data using principal components which are pairwise orthogonal to each other. The variation in data is represented by the |
54,588 | How to transform data to normality? | For financial data I have successfully used heavy-tail Lambert W x Gaussian transformations.
Python: gaussianize is an sklearn-type implementation of the IGMM algorithm in Python.
C++: the lamW R package has an elegant (and fast) C++ implementation of Lambert's W function. This can be a starting point for a full C++ implementation of IGMM or MLE for Lambert W x Gaussian transformations.
R: the LambertW R package is a full implementation of the Lambert W x F framework (simulation, estimation, plotting, transformation, testing).
As an illustration consider the SP500 return series in R.
library(MASS)
data(SP500)
yy <- ts(SP500)
library(LambertW)
test_norm(yy)
## $seed
## [1] 516797
##
## $shapiro.wilk
##
## Shapiro-Wilk normality test
##
## data: data.test
## W = 1, p-value <2e-16
##
##
## $shapiro.francia
##
## Shapiro-Francia normality test
##
## data: data.test
## W = 1, p-value <2e-16
##
##
## $anderson.darling
##
## Anderson-Darling normality test
##
## data: data
## A = 20, p-value <2e-16
As is well-known, financial data typically have fat tails and are sometimes negatively skewed. For the SP500 case, skewness is not too large, but it exhibits high kurtosis (7.7). Also several normality tests clearly reject the null hypothesis of a marginal Gaussian distribution.
Since we only have to deal with heavy tails, but not skewness, let's fit a heavy-tailed Lambert W x Gaussian distribution using a method of moments estimator (one could also use the maximum likelihood estimator (MLE) with MLE_LambertW()).
# fit a heavy tailed Lambert W x Gaussian
mod <- IGMM(yy, type = "h")
mod
## Call: IGMM(y = yy, type = "h")
## Estimation method: IGMM
## Input distribution: Any distribution with finite mean & variance and kurtosis = 3.
## mean-variance Lambert W x F type ('h' same tails; 'hh' different tails; 's' skewed): h
##
## Parameter estimates:
## mu_x sigma_x delta
## 0.05 0.72 0.16
##
## Obtained after 4 iterations.
The heavy tail parameter $\widehat{\delta} = 0.16$ is significantly different from zero and implies heavy tails. For such a $\delta$ moments up to order $1 / \widehat{\delta} = 6.29$ exist.
The model check question is of course if the back-transformed data does indeed have a Gaussian distribution. Let's check again using test_norm():
# transform data to input data (which presumably should have Normal distribution); use return.u = TRUE to get zero-mean, unit variance data
xx <- get_input(mod, return.u = FALSE)
test_norm(xx)
## $seed
## [1] 268951
##
## $shapiro.wilk
##
## Shapiro-Wilk normality test
##
## data: data.test
## W = 1, p-value = 0.2
##
##
## $shapiro.francia
##
## Shapiro-Francia normality test
##
## data: data.test
## W = 1, p-value = 0.2
##
##
## $anderson.darling
##
## Anderson-Darling normality test
##
## data: data
## A = 0.7, p-value = 0.07
I think the plot and normality test results speak for themselves.
The package also provides a single function that does all these steps at once: Gaussianize() (this is also what the Python package implements). | How to transform data to normality? | For financial data I have successfully used heavy-tail Lambert W x Gaussian transformations.
Python: gaussianize is an sklearn-type implementation of the IGMM algorithm in Python.
C++: the lamW R pack | How to transform data to normality?
For financial data I have successfully used heavy-tail Lambert W x Gaussian transformations.
Python: gaussianize is an sklearn-type implementation of the IGMM algorithm in Python.
C++: the lamW R package has an elegant (and fast) C++ implementation of Lambert's W function. This can be a starting point for a full C++ implementation of IGMM or MLE for Lambert W x Gaussian transformations.
R: the LambertW R package is a full implementation of the Lambert W x F framework (simulation, estimation, plotting, transformation, testing).
As an illustration consider the SP500 return series in R.
library(MASS)
data(SP500)
yy <- ts(SP500)
library(LambertW)
test_norm(yy)
## $seed
## [1] 516797
##
## $shapiro.wilk
##
## Shapiro-Wilk normality test
##
## data: data.test
## W = 1, p-value <2e-16
##
##
## $shapiro.francia
##
## Shapiro-Francia normality test
##
## data: data.test
## W = 1, p-value <2e-16
##
##
## $anderson.darling
##
## Anderson-Darling normality test
##
## data: data
## A = 20, p-value <2e-16
As is well-known, financial data typically have fat tails and are sometimes negatively skewed. For the SP500 case, skewness is not too large, but it exhibits high kurtosis (7.7). Also several normality tests clearly reject the null hypothesis of a marginal Gaussian distribution.
Since we only have to deal with heavy tails, but not skewness, let's fit a heavy-tailed Lambert W x Gaussian distribution using a method of moments estimator (one could also use the maximum likelihood estimator (MLE) with MLE_LambertW()).
# fit a heavy tailed Lambert W x Gaussian
mod <- IGMM(yy, type = "h")
mod
## Call: IGMM(y = yy, type = "h")
## Estimation method: IGMM
## Input distribution: Any distribution with finite mean & variance and kurtosis = 3.
## mean-variance Lambert W x F type ('h' same tails; 'hh' different tails; 's' skewed): h
##
## Parameter estimates:
## mu_x sigma_x delta
## 0.05 0.72 0.16
##
## Obtained after 4 iterations.
The heavy tail parameter $\widehat{\delta} = 0.16$ is significantly different from zero and implies heavy tails. For such a $\delta$ moments up to order $1 / \widehat{\delta} = 6.29$ exist.
The model check question is of course if the back-transformed data does indeed have a Gaussian distribution. Let's check again using test_norm():
# transform data to input data (which presumably should have Normal distribution); use return.u = TRUE to get zero-mean, unit variance data
xx <- get_input(mod, return.u = FALSE)
test_norm(xx)
## $seed
## [1] 268951
##
## $shapiro.wilk
##
## Shapiro-Wilk normality test
##
## data: data.test
## W = 1, p-value = 0.2
##
##
## $shapiro.francia
##
## Shapiro-Francia normality test
##
## data: data.test
## W = 1, p-value = 0.2
##
##
## $anderson.darling
##
## Anderson-Darling normality test
##
## data: data
## A = 0.7, p-value = 0.07
I think the plot and normality test results speak for themselves.
The package also provides a single function that does all these steps at once: Gaussianize() (this is also what the Python package implements). | How to transform data to normality?
For financial data I have successfully used heavy-tail Lambert W x Gaussian transformations.
Python: gaussianize is an sklearn-type implementation of the IGMM algorithm in Python.
C++: the lamW R pack |
54,589 | How to transform data to normality? | It appears that you are just asking for a test for normality. If so, Shapiro-Wilk is hard to beat. This is not, however, the easiest test in the pantheon to implement.
Why not just use R? The shapiro.test function will do the work for you. | How to transform data to normality? | It appears that you are just asking for a test for normality. If so, Shapiro-Wilk is hard to beat. This is not, however, the easiest test in the pantheon to implement.
Why not just use R? The sha | How to transform data to normality?
It appears that you are just asking for a test for normality. If so, Shapiro-Wilk is hard to beat. This is not, however, the easiest test in the pantheon to implement.
Why not just use R? The shapiro.test function will do the work for you. | How to transform data to normality?
It appears that you are just asking for a test for normality. If so, Shapiro-Wilk is hard to beat. This is not, however, the easiest test in the pantheon to implement.
Why not just use R? The sha |
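For completeness, the one-liner alluded to above; x here is placeholder data:
x <- rnorm(100)        # replace with your own sample
shapiro.test(x)        # returns the W statistic and a p-value
qqnorm(x); qqline(x)   # a visual check is a useful companion to the test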
54,590 | Simpson's paradox or confounding? | Simpson's paradox is an extreme form of confounding where the apparent sign of correlation is reversed; you haven't said this is the position here.
I can see at least three possibilities here: the heterogeneity between the subgroups, the reduction in sample sizes in each, and poor definition of the subgroups which presupposes the results. Ignoring the third, both of the first two can have an impact: from past experience it is often the small sample size which leads to non-significance in the smaller subgroup and heterogeneity which causes the whole group to produce a significant result while the large subgroup does not.
That was an over-generalisation - each case will have its own issues. | Simpson's paradox or confounding? | Simpson's paradox is an extreme form of confounding where the apparent sign of correlation is reversed; you haven't said this is the position here.
I can see at least three possibilities here: the het | Simpson's paradox or confounding?
Simpson's paradox is an extreme form of confounding where the apparent sign of correlation is reversed; you haven't said this is the position here.
I can see at least three possibilities here: the heterogeneity between the subgroups, the reduction in sample sizes in each, and poor definition of the subgroups which presupposes the results. Ignoring the third, both of the first two can have an impact: from past experience it is often the small sample size which leads to non-significance in the smaller subgroup and heterogeneity which causes the whole group to produce a significant result while the large subgroup does not.
That was an over-generalisation - each case will have its own issues. | Simpson's paradox or confounding?
Simpson's paradox is an extreme form of confounding where the apparent sign of correlation is reversed; you haven't said this is the position here.
I can see at least three possibilities here: the het |
54,591 | Transform log posteriors to the original posteriors | Consider the expression:
$$\frac{exp(A)}{exp(A)+exp(B)}$$
The generic strategy to compute the above expression when $exp(A)$ overflows would be to transform as follows:
$$\frac{1}{1+exp(B-A)}$$
For example R chokes on:
$$\frac{exp(1100)}{exp(1100)+exp(1104)}$$
But, happily computes the following transformation to yield a value of 0.01798621:
$$\frac{1}{1+exp(1104-1100)}$$
You may still encounter issues of overflow or underflow when you compute $exp(B-A)$ but that should no longer pose a problem as the transformed expression will still be well defined. | Transform log posteriors to the original posteriors | Consider the expression:
$$\frac{exp(A)}{exp(A)+exp(B)}$$
The generic strategy to compute the above expression when $exp(A)$ overflows would be to transform as follows:
$$\frac{1}{1+exp(B-A)}$$
For ex | Transform log posteriors to the original posteriors
Consider the expression:
$$\frac{exp(A)}{exp(A)+exp(B)}$$
The generic strategy to compute the above expression when $exp(A)$ overflows would be to transform as follows:
$$\frac{1}{1+exp(B-A)}$$
For example R chokes on:
$$\frac{exp(1100)}{exp(1100)+exp(1104)}$$
But, happily computes the following transformation to yield a value of 0.01798621:
$$\frac{1}{1+exp(1104-1100)}$$
You may still encounter issues of overflow or underflow when you compute $exp(B-A)$ but that should no longer pose a problem as the transformed expression will still be well defined. | Transform log posteriors to the original posteriors
Consider the expression:
$$\frac{exp(A)}{exp(A)+exp(B)}$$
The generic strategy to compute the above expression when $exp(A)$ overflows would be to transform as follows:
$$\frac{1}{1+exp(B-A)}$$
For ex |
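In R the transformed expression can be evaluated directly from the log posteriors; the two values below are hypothetical, not taken from the question:
logA <- -1100  # log posterior of model A (hypothetical)
logB <- -1104  # log posterior of model B (hypothetical)
postA <- 1 / (1 + exp(logB - logA))  # equals exp(A) / (exp(A) + exp(B)) without overflow
postB <- 1 - postA
postA  # about 0.982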
54,592 | Transform log posteriors to the original posteriors | In the general case, use
$$
\dfrac{ \exp\{A_i-\max_j(A_j)\} }{ \sum_k \exp\{A_k-\max_j(A_j)\} }
$$
to avoid overflows. I always use this approach when computing Bayes factors and probabilities. | Transform log posteriors to the original posteriors | In the general case, use
$$
\dfrac{ \exp\{A_i-\max_j(A_j)\} }{ \sum_k \exp\{A_k-\max_j(A_j)\} }
$$
to avoid overflows. I always use this approach when computing Bayes factors and probabilities. | Transform log posteriors to the original posteriors
In the general case, use
$$
\dfrac{ \exp\{A_i-\max_j(A_j)\} }{ \sum_k \exp\{A_k-\max_j(A_j)\} }
$$
to avoid overflows. I always use this approach when computing Bayes factors and probabilities. | Transform log posteriors to the original posteriors
In the general case, use
$$
\dfrac{ \exp\{A_i-\max_j(A_j)\} }{ \sum_k \exp\{A_k-\max_j(A_j)\} }
$$
to avoid overflows. I always use this approach when computing Bayes factors and probabilities. |
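A small R helper implementing this max-subtraction trick for an arbitrary vector of log posteriors; the function name is an illustrative choice:
post_from_log <- function(logp) {
  w <- exp(logp - max(logp))  # the largest term becomes exp(0) = 1, so nothing overflows
  w / sum(w)
}
post_from_log(c(-1100, -1104, -1102))  # normalised posterior probabilities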
54,593 | How to plot multiple users' deviations from predictions of bandwidth consumption over time? | You might want to plot this as the cumulated deviation from the predicted values. Whether this makes sense depends on what the billing/analysis period is: If the group has to stay below a certain limit for each quarter, this would allow them to see whether they're on track for reaching that goal. If their account balance is reset to zero every week, however, this kind of graph would be less useful.
Alternatively, an exceedance curve might be useful. This sorts each user's weekly deviations by magnitude. It allows one to assess how much of the time each user was above or below their target. In the chart below, you can see that although all users apart from user5 stayed below their limit more than 50% of the time, the group as a whole was above their limit almost 60% of the time.
And for a completely different way of showing the data, a Waterfall Chart could be interesting for you. It shows the breakdown of the weekly values, but with five variables it already becomes quite cluttered:
Here's the R code for the charts.
Cumulative Deviation
library(ggplot2)
library(reshape2) # melt() used below comes from reshape2; current ggplot2 does not load it
data <- cumsum(
data.frame(
user1 = c(-0.075, -0.09, 0.32, -0.242, -0.368, -0.401, -0.73, -0.367, -0.294, -0.043, 1.296, 0.075, -0.373),
user2 = c(-0.009, -0.013, -0.01, -0.008, -0.008, -0.01, -0.005, -0.02, 0.287, 0.345, -0.104, -0.324, 0.144),
user3 = c(-0.197, -0.271, -0.153, -0.621, -0.549, -0.09, 1.745, 0.436, -0.271, 0.093, 0.085, 0.211, 0.331),
user4 = c(-0.005, -0.005, -0.006, -0.006, -0.006, -0.005, -0.006, -0.086, -0.171, -0.15, -0.175, -0.067, 0.078),
user5 = c(-0.223, -0.048, -0.129, 0.14, -0.535, -0.29, 0.51, 0.801, 0.521, 0.482, -0.105, 5.082, 5.516),
group = c(-0.509, -0.427, 0.022, -0.737, -1.466, -0.796, 1.514, 0.764, 0.072, 0.727, 0.997, 4.977, 5.696)
)
)
data$week=c(1:13)
molten <- melt(data,id.vars="week")
p <- ggplot(molten, aes(x=week, y=value, colour=variable)) +
geom_line(aes(group=variable)) +
scale_colour_hue(h=c(100,250)) +
geom_line(aes(y=molten$value[molten$variable=="group"]), colour="orange", size=1.5) +
theme_bw() + opts(legend.position = "none") +
geom_text(data=molten[molten$week==13,], aes(label=variable), colour="black", hjust=-0.2, size=4) +
xlim(0,13.9) + xlab("Week") + ylab("Cumulated Deviation")
print(p)
Exceedance Curve
library(ggplot2)
library(reshape2) # melt() used below comes from reshape2; current ggplot2 does not load it
data <- data.frame(
user1 = c(-0.075, -0.09, 0.32, -0.242, -0.368, -0.401, -0.73, -0.367, -0.294, -0.043, 1.296, 0.075, -0.373),
user2 = c(-0.009, -0.013, -0.01, -0.008, -0.008, -0.01, -0.005, -0.02, 0.287, 0.345, -0.104, -0.324, 0.144),
user3 = c(-0.197, -0.271, -0.153, -0.621, -0.549, -0.09, 1.745, 0.436, -0.271, 0.093, 0.085, 0.211, 0.331),
user4 = c(-0.005, -0.005, -0.006, -0.006, -0.006, -0.005, -0.006, -0.086, -0.171, -0.15, -0.175, -0.067, 0.078),
user5 = c(-0.223, -0.048, -0.129, 0.14, -0.535, -0.29, 0.51, 0.801, 0.521, 0.482, -0.105, 5.082, 5.516),
group = c(-0.509, -0.427, 0.022, -0.737, -1.466, -0.796, 1.514, 0.764, 0.072, 0.727, 0.997, 4.977, 5.696)
)
data_sorted <- data.frame(apply(data,2,sort,decreasing=T))
data_sorted$exceedance_prob=c(0:12)/12
molten <- melt(data_sorted,id.vars="exceedance_prob")
p <- ggplot(molten, aes(x=exceedance_prob, y=value, colour=variable)) +
geom_line(aes(group=variable)) +
scale_colour_hue(h=c(100,250)) +
geom_line(aes(y=molten$value[molten$variable=="group"]), colour="orange", size=1.5) +
theme_bw() + opts(legend.position = "none") +
geom_text(data=molten[molten$exceedance_prob==0,], aes(label=variable), colour="black", hjust=1.2, size=4) +
xlim(-0.1,1) + xlab("Exceedance Probability") + ylab("Deviation")
print(p)
Waterfall Chart
library(ggplot2)
library(reshape2) # melt() used below comes from reshape2; current ggplot2 does not load it
data <- data.frame(
user1 = c(-0.075, -0.09, 0.32, -0.242, -0.368, -0.401, -0.73, -0.367, -0.294, -0.043, 1.296, 0.075, -0.373),
user2 = c(-0.009, -0.013, -0.01, -0.008, -0.008, -0.01, -0.005, -0.02, 0.287, 0.345, -0.104, -0.324, 0.144),
user3 = c(-0.197, -0.271, -0.153, -0.621, -0.549, -0.09, 1.745, 0.436, -0.271, 0.093, 0.085, 0.211, 0.331),
user4 = c(-0.005, -0.005, -0.006, -0.006, -0.006, -0.005, -0.006, -0.086, -0.171, -0.15, -0.175, -0.067, 0.078),
user5 = c(-0.223, -0.048, -0.129, 0.14, -0.535, -0.29, 0.51, 0.801, 0.521, 0.482, -0.105, 5.082, 5.516),
group = c(-0.509, -0.427, 0.022, -0.737, -1.466, -0.796, 1.514, 0.764, 0.072, 0.727, 0.997, 4.977, 5.696)
)
originaldata <- data
data <- data[,1:5]
lower<-as.data.frame(t(apply(data,1,"cumsum")))
data$week=c(1:13)
lower$week=c(1:13)
molten <- melt(data,id.vars="week")
moltenlower <- melt(lower,id.vars="week")
molten$lower <- moltenlower$value
p <- ggplot(molten, aes(x=week, y=value, fill=variable)) +
geom_rect(aes(
xmin=week+as.numeric(variable)/6-0.5,
xmax=week+as.numeric(variable)/6-0.35,
ymin=lower,
ymax=lower-value,
group=variable),
colour="black") +
scale_fill_brewer()+
theme_bw() +
xlim(0.5,13.5) + xlab("Week") + ylab("Cumulated Deviation") + ylim(-2,2)
print(p) | How to plot multiple users' deviations from predictions of bandwidth consumption over time? | You might want to plot this as the cumulated deviation from the predicted values. Whether this makes sense depends on what the billing/analysis period is: If the group has to stay below a certain limi | How to plot multiple users' deviations from predictions of bandwidth consumption over time?
You might want to plot this as the cumulated deviation from the predicted values. Whether this makes sense depends on what the billing/analysis period is: If the group has to stay below a certain limit for each quarter, this would allow them to see whether they're on track for reaching that goal. If their account balance is reset to zero every week, however, this kind of graph would be less useful.
Alternatively, an exceedance curve might be useful. This sorts each user's weekly deviations by magnitude. It allows one to assess how much of the time each user was above or below their target. In the chart below, you can see that although all users apart from user5 stayed below their limit more than 50% of the time, the group as a whole was above their limit almost 60% of the time.
And for a completely different way of showing the data, a Waterfall Chart could be interesting for you. It shows the breakdown of the weekly values, but with five variables it already becomes quite cluttered:
Here's the R code for the charts.
Cumulative Deviation
library(ggplot2)
library(reshape2) # melt() used below comes from reshape2; current ggplot2 does not load it
data <- cumsum(
data.frame(
user1 = c(-0.075, -0.09, 0.32, -0.242, -0.368, -0.401, -0.73, -0.367, -0.294, -0.043, 1.296, 0.075, -0.373),
user2 = c(-0.009, -0.013, -0.01, -0.008, -0.008, -0.01, -0.005, -0.02, 0.287, 0.345, -0.104, -0.324, 0.144),
user3 = c(-0.197, -0.271, -0.153, -0.621, -0.549, -0.09, 1.745, 0.436, -0.271, 0.093, 0.085, 0.211, 0.331),
user4 = c(-0.005, -0.005, -0.006, -0.006, -0.006, -0.005, -0.006, -0.086, -0.171, -0.15, -0.175, -0.067, 0.078),
user5 = c(-0.223, -0.048, -0.129, 0.14, -0.535, -0.29, 0.51, 0.801, 0.521, 0.482, -0.105, 5.082, 5.516),
group = c(-0.509, -0.427, 0.022, -0.737, -1.466, -0.796, 1.514, 0.764, 0.072, 0.727, 0.997, 4.977, 5.696)
)
)
data$week=c(1:13)
molten <- melt(data,id.vars="week")
p <- ggplot(molten, aes(x=week, y=value, colour=variable)) +
geom_line(aes(group=variable)) +
scale_colour_hue(h=c(100,250)) +
geom_line(aes(y=molten$value[molten$variable=="group"]), colour="orange", size=1.5) +
theme_bw() + opts(legend.position = "none") +
geom_text(data=molten[molten$week==13,], aes(label=variable), colour="black", hjust=-0.2, size=4) +
xlim(0,13.9) + xlab("Week") + ylab("Cumulated Deviation")
print(p)
Exceedance Curve
library(ggplot2)
library(reshape2) # melt() used below comes from reshape2; current ggplot2 does not load it
data <- data.frame(
user1 = c(-0.075, -0.09, 0.32, -0.242, -0.368, -0.401, -0.73, -0.367, -0.294, -0.043, 1.296, 0.075, -0.373),
user2 = c(-0.009, -0.013, -0.01, -0.008, -0.008, -0.01, -0.005, -0.02, 0.287, 0.345, -0.104, -0.324, 0.144),
user3 = c(-0.197, -0.271, -0.153, -0.621, -0.549, -0.09, 1.745, 0.436, -0.271, 0.093, 0.085, 0.211, 0.331),
user4 = c(-0.005, -0.005, -0.006, -0.006, -0.006, -0.005, -0.006, -0.086, -0.171, -0.15, -0.175, -0.067, 0.078),
user5 = c(-0.223, -0.048, -0.129, 0.14, -0.535, -0.29, 0.51, 0.801, 0.521, 0.482, -0.105, 5.082, 5.516),
group = c(-0.509, -0.427, 0.022, -0.737, -1.466, -0.796, 1.514, 0.764, 0.072, 0.727, 0.997, 4.977, 5.696)
)
data_sorted <- data.frame(apply(data,2,sort,decreasing=T))
data_sorted$exceedance_prob=c(0:12)/12
molten <- melt(data_sorted,id.vars="exceedance_prob")
p <- ggplot(molten, aes(x=exceedance_prob, y=value, colour=variable)) +
geom_line(aes(group=variable)) +
scale_colour_hue(h=c(100,250)) +
geom_line(aes(y=molten$value[molten$variable=="group"]), colour="orange", size=1.5) +
theme_bw() + opts(legend.position = "none") +
geom_text(data=molten[molten$exceedance_prob==0,], aes(label=variable), colour="black", hjust=1.2, size=4) +
xlim(-0.1,1) + xlab("Exceedance Probability") + ylab("Deviation")
print(p)
Waterfall Chart
library(ggplot2)
library(reshape2) # melt() used below comes from reshape2; current ggplot2 does not load it
data <- data.frame(
user1 = c(-0.075, -0.09, 0.32, -0.242, -0.368, -0.401, -0.73, -0.367, -0.294, -0.043, 1.296, 0.075, -0.373),
user2 = c(-0.009, -0.013, -0.01, -0.008, -0.008, -0.01, -0.005, -0.02, 0.287, 0.345, -0.104, -0.324, 0.144),
user3 = c(-0.197, -0.271, -0.153, -0.621, -0.549, -0.09, 1.745, 0.436, -0.271, 0.093, 0.085, 0.211, 0.331),
user4 = c(-0.005, -0.005, -0.006, -0.006, -0.006, -0.005, -0.006, -0.086, -0.171, -0.15, -0.175, -0.067, 0.078),
user5 = c(-0.223, -0.048, -0.129, 0.14, -0.535, -0.29, 0.51, 0.801, 0.521, 0.482, -0.105, 5.082, 5.516),
group = c(-0.509, -0.427, 0.022, -0.737, -1.466, -0.796, 1.514, 0.764, 0.072, 0.727, 0.997, 4.977, 5.696)
)
originaldata <- data
data <- data[,1:5]
lower<-as.data.frame(t(apply(data,1,"cumsum")))
data$week=c(1:13)
lower$week=c(1:13)
molten <- melt(data,id.vars="week")
moltenlower <- melt(lower,id.vars="week")
molten$lower <- moltenlower$value
p <- ggplot(molten, aes(x=week, y=value, fill=variable)) +
geom_rect(aes(
xmin=week+as.numeric(variable)/6-0.5,
xmax=week+as.numeric(variable)/6-0.35,
ymin=lower,
ymax=lower-value,
group=variable),
colour="black") +
scale_fill_brewer()+
theme_bw() +
xlim(0.5,13.5) + xlab("Week") + ylab("Cumulated Deviation") + ylim(-2,2)
print(p) | How to plot multiple users' deviations from predictions of bandwidth consumption over time?
You might want to plot this as the cumulated deviation from the predicted values. Whether this makes sense depends on what the billing/analysis period is: If the group has to stay below a certain limi |
54,594 | How to plot multiple users' deviations from predictions of bandwidth consumption over time? | For these particular data, I would make a line plot, like the following.
Here's the R code I used:
dat <- data.frame(user1 = c(-0.075, -0.09, 0.32, -0.242, -0.368, -0.401, -0.73, -0.367, -0.294, -0.043, 1.296, 0.075, -0.373),
user2 = c(-0.009, -0.013, -0.01, -0.008, -0.008, -0.01, -0.005, -0.02, 0.287, 0.345, -0.104, -0.324, 0.144),
user3 = c(-0.197, -0.271, -0.153, -0.621, -0.549, -0.09, 1.745, 0.436, -0.271, 0.093, 0.085, 0.211, 0.331),
user4 = c(-0.005, -0.005, -0.006, -0.006, -0.006, -0.005, -0.006, -0.086, -0.171, -0.15, -0.175, -0.067, 0.078),
user5 = c(-0.223, -0.048, -0.129, 0.14, -0.535, -0.29, 0.51, 0.801, 0.521, 0.482, -0.105, 5.082, 5.516))
plot(dat[,1], xlab="Time", ylab="Outcome", ylim=range(dat),
type="l", lwd=2, xlim=c(1, nrow(dat)+2), las=1, xaxt="n")
axis(side=1, at=seq(1, 13, by=2))
abline(h=0, lty=2, col="gray40")
col <- c("black", "blue", "red", "orange", "green")
for(i in 2:5)
lines(dat[,i], col=col[i], lwd=2)
text(13.2, dat[13,]+c(0, 0.1, 0.1, -0.1, 0), paste("user", 1:5, sep=""),
col=col, adj=c(0, 0.5)) | How to plot multiple users' deviations from predictions of bandwidth consumption over time? | For these particular data, I would make a line plot, like the following.
Here's the R code I used:
dat <- data.frame(user1 = c(-0.075, -0.09, 0.32, -0.242, -0.368, -0.401, -0.73, -0.367, -0.294, -0.0 | How to plot multiple users' deviations from predictions of bandwidth consumption over time?
For these particular data, I would make a line plot, like the following.
Here's the R code I used:
dat <- data.frame(user1 = c(-0.075, -0.09, 0.32, -0.242, -0.368, -0.401, -0.73, -0.367, -0.294, -0.043, 1.296, 0.075, -0.373),
user2 = c(-0.009, -0.013, -0.01, -0.008, -0.008, -0.01, -0.005, -0.02, 0.287, 0.345, -0.104, -0.324, 0.144),
user3 = c(-0.197, -0.271, -0.153, -0.621, -0.549, -0.09, 1.745, 0.436, -0.271, 0.093, 0.085, 0.211, 0.331),
user4 = c(-0.005, -0.005, -0.006, -0.006, -0.006, -0.005, -0.006, -0.086, -0.171, -0.15, -0.175, -0.067, 0.078),
user5 = c(-0.223, -0.048, -0.129, 0.14, -0.535, -0.29, 0.51, 0.801, 0.521, 0.482, -0.105, 5.082, 5.516))
plot(dat[,1], xlab="Time", ylab="Outcome", ylim=range(dat),
type="l", lwd=2, xlim=c(1, nrow(dat)+2), las=1, xaxt="n")
axis(side=1, at=seq(1, 13, by=2))
abline(h=0, lty=2, col="gray40")
col <- c("black", "blue", "red", "orange", "green")
for(i in 2:5)
lines(dat[,i], col=col[i], lwd=2)
text(13.2, dat[13,]+c(0, 0.1, 0.1, -0.1, 0), paste("user", 1:5, sep=""),
col=col, adj=c(0, 0.5)) | How to plot multiple users' deviations from predictions of bandwidth consumption over time?
For these particular data, I would make a line plot, like the following.
Here's the R code I used:
dat <- data.frame(user1 = c(-0.075, -0.09, 0.32, -0.242, -0.368, -0.401, -0.73, -0.367, -0.294, -0.0 |
54,595 | How to define marked point processes? | A point process is a collection of random variables that are positions in some space (like locations on a plane). A marked point process is a point process in which some additional features are measured at each point.
For your situation, the locations of the points are by design rather than random, and so while you could call it a marked point process, the point process part isn't particularly interesting, and probably thinking along the marked point process line isn't going to be particularly useful.
I take it that you are trying to characterize the distribution of the measured features across the region. In this case, I would look at other kinds of spatial analysis. | How to define marked point processes? | A point process is a collection of random variables that are positions in some space (like locations on a plane). A marked point process is a point process in which some additional features are measu | How to define marked point processes?
A point process is a collection of random variables that are positions in some space (like locations on a plane). A marked point process is a point process in which some additional features are measured at each point.
For your situation, the locations of the points are by design rather than random, and so while you could call it a marked point process, the point process part isn't particularly interesting, and probably thinking along the marked point process line isn't going to be particularly useful.
I take it that you are trying to characterize the distribution of the measured features across the region. In this case, I would look at other kinds of spatial analysis. | How to define marked point processes?
A point process is a collection of random variables that are positions in some space (like locations on a plane). A marked point process is a point process in which some additional features are measu |
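For concreteness, a hedged spatstat sketch of a marked point pattern; the locations and marks are simulated for illustration:
library(spatstat)
set.seed(11)
x <- runif(50); y <- runif(50)
m <- rnorm(50, mean = 10)  # the feature measured at each point
pp <- ppp(x, y, window = owin(c(0, 1), c(0, 1)), marks = m)
summary(pp)
plot(pp)  # plotting symbols are scaled by the mark values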
54,596 | How to define marked point processes? | The aim in such cases is to model the measured characteristic at spatial locations. These types of data are called geostatistical data. So you have to apply geostatistics theory as the appropriate method for modeling geostatistical data. Then, the answer to your question is no: this approach would not be a marked point process. | How to define marked point processes? | The aim in such cases is to model the measured characteristic at spatial locations. These types of data are called geostatistical data. So you have to apply geostatistics theory as the appropriate | How to define marked point processes?
The aim in such cases is to model the measured characteristic at spatial locations. These types of data are called geostatistical data. So you have to apply geostatistics theory as the appropriate method for modeling geostatistical data. Then, the answer to your question is no: this approach would not be a marked point process. | How to define marked point processes?
The aim in such cases is to model the measured characteristic at spatial locations. These types of data are called geostatistical data. So you have to apply geostatistics theory as the appropriate |
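A hedged geostatistical sketch along these lines, using gstat on simulated measurements; all names and variogram starting values are illustrative assumptions:
library(sp)
library(gstat)
set.seed(3)
dat <- data.frame(x = runif(100), y = runif(100))
dat$z <- 5 + 2 * dat$x + rnorm(100, sd = 0.5)  # the measured characteristic
coordinates(dat) <- ~ x + y
v    <- variogram(z ~ 1, dat)                              # empirical semivariogram
vfit <- fit.variogram(v, model = vgm(1, "Sph", 0.5, 0.1))  # fit a spherical model
plot(v, model = vfit)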
54,597 | Looking for estimates for my data using cumulative beta distribution | First remark : your data is nowhere near a distribution, and definitely not the beta function. As I see it, you see your boot.mean as 'density' and your x-axis (the index?) as value. The beta function is limited between 0 and 1, and as the area under the curve of any density function should equal 1, your data doesn't come close. Good point of @whuber: fit a scaled version. Alternatively: Scale to the sum of the data, as @iterator said. Note that the beta function then requires scaling twice (both on the X-axis, i.e. the indices, and on the Y-axis, i.e. the actual data).
Now you talk about the beta function and you talked about the inverse of the cumulative normal distribution somewhere else. I suppose you mean 'when that distribution is looking in the mirror, it sees what I want to see...' ;-)
So an ad-hoc way of doing this (without any theoretical background, as that background is not the one you need here), is given below. Apart from what everybody else said here, I just want to point out the optim() function, which is doing basically what you're looking for. Regardless whether you fit a scaled and mirrored beta distribution or something that looks close to an inverse normal cumulative distribution for some value of close...
customFit <- function(x, data) {
d.data <- rev(cumsum(dnorm(1:length(data), x[1], x[2]))) * max(data)
SS <- sum((d.data - data)^2)
return(SS)
}
fit.optim <- optim(c(5, 8), customFit, data = boot.mean)
plot(boot.mean)
lines(rev(cumsum(dnorm(1:length(boot.mean),
fit.optim$par[1], fit.optim$par[2]))) * max(boot.mean),
col = "red")
Note of warning: apart from having a defined function that fits your data, there's little you can do with this result... | Looking for estimates for my data using cumulative beta distribution | First remark : your data is nowhere near a distribution, and definitely not the beta function. As I see it, you see your boot.mean as 'density' and your x-axis (the index?) as value. The beta function | Looking for estimates for my data using cumulative beta distribution
First remark : your data is nowhere near a distribution, and definitely not the beta function. As I see it, you see your boot.mean as 'density' and your x-axis (the index?) as value. The beta function is limited between 0 and 1, and as the area under the curve of any density function should equal 1, your data doesn't come close. Good point of @whuber: fit a scaled version. Alternatively: Scale to the sum of the data, as @iterator said. Note that the beta function then requires scaling twice (both on the X-axis, i.e. the indices, and on the Y-axis, i.e. the actual data).
Now you talk about the beta function and you talked about the inverse of the cumulative normal distribution somewhere else. I suppose you mean 'when that distribution is looking in the mirror, it sees what I want to see...' ;-)
So an ad-hoc way of doing this (without any theoretical background, as that background is not the one you need here), is given below. Apart from what everybody else said here, I just want to point out the optim() function, which is doing basically what you're looking for. Regardless whether you fit a scaled and mirrored beta distribution or something that looks close to an inverse normal cumulative distribution for some value of close...
customFit <- function(x, data) {
d.data <- rev(cumsum(dnorm(1:length(data), x[1], x[2]))) * max(data)
SS <- sum((d.data - data)^2)
return(SS)
}
fit.optim <- optim(c(5, 8), customFit, data = boot.mean)
plot(boot.mean)
lines(rev(cumsum(dnorm(1:length(boot.mean),
fit.optim$par[1], fit.optim$par[2]))) * max(boot.mean),
col = "red")
Note of warning: apart from having a defined function that fits your data, there's little you can do with this result... | Looking for estimates for my data using cumulative beta distribution
First remark : your data is nowhere near a distribution, and definitely not the beta function. As I see it, you see your boot.mean as 'density' and your x-axis (the index?) as value. The beta function |
54,598 | Looking for estimates for my data using cumulative beta distribution | It's not a good idea to rescale the data in this ad hoc way, because it can result in an inferior fit (and ruins any chance of estimating the sampling variance of the scale parameter): just fit a scaled Beta distribution to the data themselves.
You do have to assign percentage points to the data; below I have used $p(i) = (i-1/2)/n$ for the $i^\text{th}$ smallest of $n$ values, sorted as $x_1 \le x_2 \le \cdots \le x_n$. Fit a rescaled CDF to the empirical distribution, $\{(x_i, p_i)\}$. The fit ideally would account for the correlations and heteroscedasticity of the values, but in this case nonlinear least squares does fine:
This particular fit is $F(x/\gamma)$ where $F$ is the CDF of a Beta($\alpha,\beta$) distribution with $\alpha=0.59$, $\beta=0.87$, and $\gamma=39.2$. This is a U-shaped distribution (i.e., it has modes at both tails). (The correlations and heteroscedasticity indicate that least squares confidence intervals for the parameters cannot be trusted; bootstrap them instead. I haven't carried out the calculation and so will only report the untrustworthy standard errors: they're about $0.06$ for $\alpha$, $0.15$ for $\beta$, and $2.4$ for $\gamma$.)
Consider following this up with a goodness of fit test. Even a simple $\chi^2$ test will give some useful hints about lack of fit. For these data, the graph indicates this fit works pretty well, regardless. A residual-vs.-fit plot suggests the fit is a little better at the high end of the data, but otherwise looks sufficiently random with small residuals:
This is consistent with a model in which the data have a little bit of measurement error: that would ruin the fit more where the CDF is steep (at the low values) than at other places (the middle to high values). | Looking for estimates for my data using cumulative beta distribution | It's not a good idea to rescale the data in this ad hoc way, because it can result in an inferior fit (and ruins any chance of estimating the sampling variance of the scale parameter): just fit a scal | Looking for estimates for my data using cumulative beta distribution
It's not a good idea to rescale the data in this ad hoc way, because it can result in an inferior fit (and ruins any chance of estimating the sampling variance of the scale parameter): just fit a scaled Beta distribution to the data themselves.
You do have to assign percentage points to the data; below I have used $p(i) = (i-1/2)/n$ for the $i^\text{th}$ smallest of $n$ values, sorted as $x_1 \le x_2 \le \cdots \le x_n$. Fit a rescaled CDF to the empirical distribution, $\{(x_i, p_i)\}$. The fit ideally would account for the correlations and heteroscedasticity of the values, but in this case nonlinear least squares does fine:
This particular fit is $F(x/\gamma)$ where $F$ is the CDF of a Beta($\alpha,\beta$) distribution with $\alpha=0.59$, $\beta=0.87$, and $\gamma=39.2$. This is a U-shaped distribution (i.e., it has modes at both tails). (The correlations and heteroscedasticity indicate that least squares confidence intervals for the parameters cannot be trusted; bootstrap them instead. I haven't carried out the calculation and so will only report the untrustworthy standard errors: they're about $0.06$ for $\alpha$, $0.15$ for $\beta$, and $2.4$ for $\gamma$.)
Consider following this up with a goodness of fit test. Even a simple $\chi^2$ test will give some useful hints about lack of fit. For these data, the graph indicates this fit works pretty well, regardless. A residual-vs.-fit plot suggests the fit is a little better at the high end of the data, but otherwise looks sufficiently random with small residuals:
This is consistent with a model in which the data have a little bit of measurement error: that would ruin the fit more where the CDF is steep (at the low values) than at other places (the middle to high values). | Looking for estimates for my data using cumulative beta distribution
It's not a good idea to rescale the data in this ad hoc way, because it can result in an inferior fit (and ruins any chance of estimating the sampling variance of the scale parameter): just fit a scal |
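A hedged nls() sketch of the fit described above: fitting the CDF F(x/gamma) of a Beta(alpha, beta) distribution to the empirical distribution with p_i = (i - 1/2)/n. The starting values and bounds are placeholders; boot.mean stands for the observed values from the question:
x <- sort(boot.mean)
n <- length(x)
p <- (seq_len(n) - 0.5) / n  # empirical percentage points
fit <- nls(p ~ pbeta(x / gam, a, b),
           start = list(a = 1, b = 1, gam = 1.1 * max(x)),
           algorithm = "port", lower = c(0.01, 0.01, max(x)))
coef(fit)                            # estimates of alpha, beta and the scale gamma
plot(x, p); lines(x, predict(fit))   # fitted CDF laid over the empirical one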
54,599 | Whether to enter all predictors at once or perform a hierarchical regression? | Whether to use hierarchical regression or enter all predictors at once
As a starting point, the final block of a hierarchical regression is the same as if you had entered all predictors at once.
If you have an hypothesis that is aligned with hierarchical regression, then you should perform a hierarchical regression. Your hypothesis is phrased in terms of one set of variables explaining variance over and above another set. Therefore, you have an hypothesis aligned with hierarchical regression.
Issues with non significant bivariate correlations
If all correlations between each predictor and the dependent variable are non-significant, then it is quite likely, although as discussed here not necessarily the case, that your overall regression model, and the r-square changes in a hierarchical regression will all be non-significant. Thus, as you have gathered, a quick look at the correlations can give you a sense of what the answer is likely to be to your hierarchical regression question.
Nonetheless, multiple regressions can vary in the degree to which they are performed for exploratory versus confirmatory purposes. Thus, if you are being confirmatory, then the fact that the predictors are not significantly correlated with the dependent variable should not stop you from performing the hierarchical regression. | Whether to enter all predictors at once or perform a hierarchical regression? | Whether to use hierarchical regression or enter all predictors at once
As a starting point, the final block of a hierarchical regression is the same as if you had entered all predictors at once.
If y | Whether to enter all predictors at once or perform a hierarchical regression?
Whether to use hierarchical regression or enter all predictors at once
As a starting point, the final block of a hierarchical regression is the same as if you had entered all predictors at once.
If you have an hypothesis that is aligned with hierarchical regression, then you should perform a hierarchical regression. Your hypothesis is phrased in terms of one set of variables explaining variance over and above another set. Therefore, you have an hypothesis aligned with hierarchical regression.
Issues with non significant bivariate correlations
If all correlations between each predictor and the dependent variable are non-significant, then it is quite likely, although as discussed here not necessarily the case, that your overall regression model, and the r-square changes in a hierarchical regression will all be non-significant. Thus, as you have gathered, a quick look at the correlations can give you a sense of what the answer is likely to be to your hierarchical regression question.
Nonetheless, multiple regressions can vary in the degree to which they are performed for exploratory versus confirmatory purposes. Thus, if you are being confirmatory, then the fact that the predictors are not significantly correlated with the dependent variable should not stop you from performing the hierarchical regression. | Whether to enter all predictors at once or perform a hierarchical regression?
Whether to use hierarchical regression or enter all predictors at once
As a starting point, the final block of a hierarchical regression is the same as if you had entered all predictors at once.
If y |
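A hedged R sketch of the block comparison described above; the data frame dat, the block-1 predictors x1, x2 and the block-2 predictors z1, z2 are assumptions for illustration:
fit_block1 <- lm(y ~ x1 + x2, data = dat)            # first block only
fit_block2 <- lm(y ~ x1 + x2 + z1 + z2, data = dat)  # final block = all predictors at once
anova(fit_block1, fit_block2)                        # F test of the R-squared change
summary(fit_block2)$r.squared - summary(fit_block1)$r.squared  # the R-squared change itself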
54,600 | Whether to enter all predictors at once or perform a hierarchical regression? | The rule of thumb is 10 cases for each IV. You have (if I counted right) 11 IVs. Not too far over that. Surely the two SES variables are highly correlated? They could be combined and that gets you down to 10 IVs.
Regarding dependent vs. independent - you haven't said what your sampling plan is. You can do SNA and still get independent observations if the SNA aspect is only in the variables and not in the sample. But if all the students were from one or a few classes, then you probably do have dependent data.
Regarding your questions: You can do either. Each will be just as much a violation of assumptions as the other. This doesn't depend on significance or even effect size. Since you want to report change in R squared, I would go with your first option. However, this may cause other people (editors, professors, whoever) to look askance. | Whether to enter all predictors at once or perform a hierarchical regression? | The rule of thumb is 10 cases for each IV. You have (if I counted right) 11 IVs. Not too far over that. Surely the two SES variables are highly correlated? They could be combined and that gets you dow | Whether to enter all predictors at once or perform a hierarchical regression?
The rule of thumb is 10 cases for each IV. You have (if I counted right) 11 IVs. Not too far over that. Surely the two SES variables are highly correlated? They could be combined and that gets you down to 10 IVs.
Regarding dependent vs. independent - you haven't said what your sampling plan is. You can do SNA and still get independent observations if the SNA aspect is only in the variables and not in the sample. But if all the students were from one or a few classes, then you probably do have dependent data.
Regarding your questions: You can do either. Each will be just as much a violation of assumptions as the other. This doesn't depend on significance or even effect size. Since you want to report change in R squared, I would go with your first option. However, this may cause other people (editors, professors, whoever) to look askance. | Whether to enter all predictors at once or perform a hierarchical regression?
The rule of thumb is 10 cases for each IV. You have (if I counted right) 11 IVs. Not too far over that. Surely the two SES variables are highly correlated? They could be combined and that gets you dow |