idx | question | answer
---|---|---|
51,901 | If X and Y are correlated, but Y and Z are independent, is X and Z always independent? | No.
Counterexample: Suppose $Y, Z \sim {\rm Bernoulli}(0.5)$ are IID and let $X = YZ$. Then $X$ is correlated with $Y$ while $Y$ and $Z$ are independent, yet $X$ is clearly not independent of $Z$ (indeed, $X$ is independent of neither $Y$ nor $Z$).
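A quick simulation (a minimal R sketch; the variable names are illustrative) confirms the counterexample: $X$ and $Y$ are correlated, $Y$ and $Z$ are independent by construction, and $X$ is not independent of $Z$:
# Simulate the counterexample: Y, Z iid Bernoulli(0.5), X = Y*Z
set.seed(1)
n <- 1e5
y <- rbinom(n, 1, 0.5)
z <- rbinom(n, 1, 0.5)
x <- y * z
cor(x, y)                 # clearly non-zero: X and Y are correlated
cor(x, z)                 # also non-zero, so X and Z cannot be independent
prop.table(table(x, z))   # joint probabilities do not factor into the margins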
51,902 | Explanatory variables may bias predictions | The point is that you don't code levels of categorical variables as 1,2,3 even if you call them that. You code them using dummy variables, e.g.
$$\begin{array}{l c c}
&x_1&x_2\\
\text{level 1} &0 & 0\\
\text{level 2} &1 & 0\\
\text{level 3} &0 & 1\\
\end{array}$$
so that when the linear predictor is given by
$$\eta = \beta_0 +\beta_1 x_1 +\beta_2 x_2$$
$\beta_0$ is the log odds for level 1, $\beta_1$ is the log odds ratio between level 2 & level 1, & $\beta_2$ is the log odds ratio between level 3 & level 1. Different coding schemes can be used (see here), changing the interpretation of the coefficients but not materially changing the model (i.e. the same predictor values give the same predicted response).
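For concreteness, here is a minimal R sketch (the data and factor levels are made up) showing the dummy coding that model-fitting functions construct automatically when a predictor is declared as a factor:
grp <- factor(c("level 1", "level 2", "level 3"))
model.matrix(~ grp)   # intercept column plus 0/1 dummies for levels 2 and 3
# Switching the coding scheme changes coefficient interpretation,
# not the fitted values:
model.matrix(~ grp, contrasts.arg = list(grp = "contr.sum"))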
51,903 | Explanatory variables may bias predictions | If your software treats a factor variable with factor levels 1, 2, 3 as numeric, your model and your predictions will be garbage, simply because the difference (on the link scale) between groups 1 and 3 will be fitted to be double the difference between groups 1 and 2, which will usually not make sense.
So: tell your software that 1, 2, 3 are factors. Better still, use A, B, C. No honest software will misunderstand A, B, C as numeric.
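A minimal R sketch (simulated data; the names are illustrative) of the difference between the two specifications; only the factor version lets each group have its own effect:
# Simulated outcome where group 3 is NOT twice as far from group 1 as group 2 is
set.seed(2)
g <- rep(1:3, each = 50)
y <- c(0, 2, 2.5)[g] + rnorm(150)
coef(lm(y ~ g))          # treats 1, 2, 3 as numeric: forces equal spacing
coef(lm(y ~ factor(g)))  # treats them as groups: recovers the true pattern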
51,904 | How to test a curvilinear relationship in a logistic regression | "curvilinear" could mean anything geometrically not a straight line on the scale being used. So, that could mean many things, including behaviour best tackled with powers of another variable, exponentials, logarithms, trigonometric and hyperbolic functions, etc., etc.
Using logistic regression does not change what is standard in any kind of regression-like modelling: You can have whatever predictors (so-called independent variables) in your model that make sense, so long as there are sufficient data.
Those general statements aside, trying a quadratic term in your model as well as a linear term is often a good simple way of adding some curvature. Because you are using a logit scale, intuition needs refining here. In particular, if your coefficient on the squared term is negative, you are fitting a kind of bell shape on the probability scale. This is often a feature in e.g. ecology where probability of occurrence of organisms is greatest for some intermediate value of an environmental predictor. In simple terms, it can be too hot, about right, too cold, and so forth. See http://www.cambridge.org/gb/knowledge/isbn/item5708032/ for one good account.
I trust that others will add advice about SPSS.
51,905 | How to test a curvilinear relationship in a logistic regression | In addition to @Nick's excellent answer, let me just add some practical points about the modelling of nonlinear relationships that I've come across in my work. In epidemiology, for example, we are often faced with nonlinear dose-response relationships. An example would be the relationship between the number of cigarettes smoked per day and the risk of death. One common approach is to categorize the exposure, but this is suboptimal. Two quite common methods for fitting nonlinear relationships are fractional polynomials and splines. These three papers offer a very good introduction to both methods: First, second and third. There is also a book. These methods are very flexible for modelling nonlinear relationships; they are not limited to applications in epidemiology and can be applied in other frameworks. As @Nick said: nonlinear relationships are not limited to linear regression and can be used in logistic regression too (and others, of course). Just pay attention that the scale is different (logit).
Regarding SPSS: SPSS doesn't seem to support fractional polynomials at the moment, but Stata, R and SAS do. Splines, on the other hand, do seem to be supported.
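As a concrete R illustration of the two answers above (simulated data; the variable names are made up): a quadratic term is the simplest way to add curvature, while a spline (here a natural cubic spline from the splines package, one of several options) is a more flexible alternative; a likelihood-ratio test compares each against the purely linear fit:
library(splines)
set.seed(3)
x <- runif(400, -3, 3)
y <- rbinom(400, 1, plogis(1 - x^2))                   # true curve is bell-shaped in x
fit_lin  <- glm(y ~ x, family = binomial)
fit_quad <- glm(y ~ x + I(x^2), family = binomial)     # negative coefficient on x^2
fit_spl  <- glm(y ~ ns(x, df = 3), family = binomial)  # natural cubic spline
anova(fit_lin, fit_quad, test = "Chisq")               # strong evidence of curvature
AIC(fit_lin, fit_quad, fit_spl)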
51,906 | Why can't I trim the dependent variable in a regression? Or can I? | There is a simulated data set called outliers in the TeachingDemos package for R. If you remove the "outliers" using a common rule of thumb, then relook at the data and remove the points that are now outliers, and continue until you have no "outliers", you end up throwing away 75% of the data as "outliers". Are they really unusual if they are the majority of the data? The examples on the help page also show using this data for a regression model and how throwing away half of the data as outliers does not make much difference.
This is intended as an illustration against using automated rules for throwing away data.
Actually, the discovery of penicillin was an outlier; consider what the world would be like if that data point had been discarded instead of investigated.
There are more acceptable routines, such as M-estimation or other robust regression techniques, that downweight unusual observations rather than throwing them out.
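To see the general phenomenon (a generic R sketch, not the TeachingDemos example itself), repeatedly applying the 1.5 x IQR rule of thumb to a perfectly genuine heavy-tailed sample keeps flagging new "outliers" after each round of deletion:
set.seed(5)
x <- rt(1000, df = 3)          # all values are "real" data
removed <- 0
repeat {
  q <- quantile(x, c(0.25, 0.75))
  lims <- q + c(-1.5, 1.5) * diff(q)
  out <- x < lims[1] | x > lims[2]
  if (!any(out)) break
  removed <- removed + sum(out)
  x <- x[!out]
}
removed / 1000                 # a sizeable fraction ends up labelled an "outlier"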
51,907 | Why can't I trim the dependent variable in a regression? Or can I? | Refusing to predict for particular $X$s that do not fit your specification can be seen as part of the model:
$$ Y = \begin{cases}
f (X) & \text{if } X_{min} \leq X \leq X_{max}\\
\text{refuse prediction, e.g. NA} & \text{else}
\end{cases}$$
It is not sensible to formulate such a condition on $Y$. You model $Y$ as the unknown, but you cannot base a decision on a property you do not know.
You could trace this condition on $Y$ back to certain $X$, but this corresponds again to posing a constraint on the known variates (which are all you have when evaluating the model). And both for your modeling and for judging the quality of your model, you need all data points that are inside the specification for the knowns ($X$), so trimming $Y$ would violate this condition.
Whether it is sensible to pose such a constraint on $X$ will depend on your application; there is nothing automatic about this.
Note also that I formulated the constraint with values $X_{min}$ and $X_{max}$, not with quantiles. This is important. You may determine these values by looking at the quantiles in the modeling process, but in the ready-to-use model you need fixed values of $X$ to compare against: the model must be usable for a single prediction, where you cannot compute a quantile.
However, you need to take into account not only the direction of your model $Y$ ~ $X$, i.e. $X$ predicts $Y$, but also the purpose of the model.
A restriction to certain ranges of $X$ is sensible for predictive models. E.g. in my field (chemometrics) this is rather emphasized. The default restriction is that a model should never be used outside the $X$ range covered during both modeling and validation. In other words, no extrapolation. But you may define more restrictive $X$ limits. Note, however, that we chemometricians can usually point out physical and/or chemical reasons why we can reasonably assume e.g. a linear relationship for intermediate $X$, but neither for too small nor for too large $X$s.
This is "automated" only in the sense that one of the first things to think about when setting up a chemometric regression is the range you need to cover for your application, and/or measuring the range you are able to cover.
Whether restricting the $X$ can be justified for a descriptive model is IMHO a more difficult question. In any case, I'd at least report what the function does with the excluded $X$s, and how this compares to their $Y$s.
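A minimal R sketch of this idea (the names and limits are illustrative, not from the original answer): wrap the fitted model so that predictions outside the calibrated $X$ range are returned as NA rather than extrapolated:
set.seed(6)
train <- data.frame(x = runif(100, 2, 8))
train$y <- 1 + 0.5 * train$x + rnorm(100, sd = 0.2)
fit <- lm(y ~ x, data = train)
x_min <- min(train$x); x_max <- max(train$x)     # range covered by the calibration data
predict_guarded <- function(model, newdata) {
  p <- predict(model, newdata)
  p[newdata$x < x_min | newdata$x > x_max] <- NA # refuse to extrapolate
  p
}
predict_guarded(fit, data.frame(x = c(1, 5, 9))) # NA, prediction, NA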
51,908 | Why can't I trim the dependent variable in a regression? Or can I? | Regression is carried out conditional on the observed values of the independent variables. So if you only want to model over a certain range, and you're not confident in the assumption of a linear relationship with the response (or another assumption, such as constant variance) outside this range, it can be reasonable to exclude observations with predictor values outside it - & the analysis will be valid just as if you'd fixed the independent variables at those values in an experiment. (All the same, the practice of automatically excluding a fixed proportion of the observations with the most extreme predictor values seems hard to justify.)
What's wrong with 'trimming' the dependent variable is that you're introducing something into the data-generating process that you don't take into account in the analysis, & moreover losing the information, whether about the parameter or about fit, carried by those excluded observations. I'm not sure whether you meant excluding outliers in the dependent variable or in the residuals - whuber & Greg have described the problems with both practices.
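A small R simulation (my own sketch, not from the original answer) illustrates the asymmetry: restricting the range of $x$ leaves the slope estimate unbiased, while trimming extreme values of $y$ biases it toward zero:
set.seed(7)
slope_est <- replicate(2000, {
  x <- rnorm(500); y <- 2 * x + rnorm(500)
  keep_x <- abs(x) < 1                       # restrict the predictor range
  keep_y <- abs(y) < quantile(abs(y), 0.8)   # trim the most extreme responses
  c(trim_x = coef(lm(y[keep_x] ~ x[keep_x]))[2],
    trim_y = coef(lm(y[keep_y] ~ x[keep_y]))[2])
})
rowMeans(slope_est)   # trimming x: ~2 (unbiased); trimming y: noticeably below 2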
51,909 | Why can't I trim the dependent variable in a regression? Or can I? | I don't know about financial economics, but there are instances when our understanding is so embarrassingly inadequate that discarding data is the only thing we know how to do. For example, the field of energy intake/consumption and calories sometimes entails using data-driven approaches to discard extreme observations that we know are not plausible.
If there are valid reasons to routinely discard data in financial economics (and by valid I mean that such data points are not plausible), surely there are better approaches than simply removing 0.5 or 1% of the data at either end. And if the data points are simply large/extreme but plausible, there are robust methods, transformations, etc.
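As a sketch of the robust-methods alternative mentioned here and in the earlier answer (illustrative simulated data; MASS::rlm is just one of several options):
library(MASS)
set.seed(8)
x <- rnorm(200)
y <- 1 + 2 * x + rnorm(200)
y[1:10] <- y[1:10] + 20     # a few gross outliers in the response
coef(lm(y ~ x))             # OLS is pulled around by the outliers
coef(rlm(y ~ x))            # M-estimation downweights them instead of deleting them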
51,910 | How to reduce the number of variables in cluster analysis? | The problem with dimensionality reduction when the number of variables >> the number of observations is that the $k$ observations you have define an at most $k-1$ dimensional hyperplane on which the objects are located perfectly.
So yes, anything more than 9 dimensions still has proven redundancies.
Many dimension reduction techniques - in particular PCA, SVD, but probably also MDS etc. - will essentially try to preserve this hyperplane.
Don't you have a way to reduce the number of dimensions that uses domain information that you have? I.e. if you know that your dimensions are expected to be highly correlated, remove dimensions that are the most correlated (pairwise is probably best). But note that even correlation is not very stable to compute when you have just 10 observations. You lose one degree of freedom for the mean, for example, which you can't really afford.
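One way to implement the "drop the most correlated variables pairwise" suggestion in R (a rough sketch on made-up data; the 0.95 threshold is arbitrary, and caret::findCorrelation packages up a similar idea):
set.seed(9)
X <- matrix(rnorm(10 * 1000), nrow = 10)   # 10 observations, 1000 variables
C <- abs(cor(X)); diag(C) <- 0
keep <- rep(TRUE, ncol(X))
# Greedily drop one variable from each remaining pair with |correlation| > 0.95
repeat {
  C_sub <- C[keep, keep]
  if (max(C_sub) <= 0.95) break
  worst <- which(keep)[which(C_sub == max(C_sub), arr.ind = TRUE)[1, 1]]
  keep[worst] <- FALSE
}
sum(keep)   # number of variables retained (note how unstable cor is with n = 10)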
51,911 | How to reduce the number of variables in cluster analysis? | Here's something else you can try.
This data is similar to what they see in genomics, so you could look to that field for ideas of analysis. In genomics there are lots of variables (20,000+), many of which are highly correlated with each other, and a relatively small number of rows.
If this were a genomics problem, 5 of your rows would be healthy controls and 5 would have some kind of malady, and you'd want to find the genes (variables) which help you identify the disease - i.e. feature selection is the main problem.
In your case, if you don't know that some of your rows are "good" and some are "bad", you could still use a technique that is good for datasets like this - a random forest.
R's randomForest library does unsupervised clustering. In a nutshell, it will combine your 10x1000 matrix with another 10x1000 matrix consisting of random noise. It then tries to build a model to differentiate between your matrix and the noise.
If you do know some of the rows are "good", then you could still use randomForest - just use it in "supervised" mode.
Regardless, a nice "side effect" of a random forest is that you could then examine the importance() of each variable - a variable's importance is measured by averaging its performance across all the trees used in the random forest.
You could sort this list in descending order of importance, take the top x number of variables, and consider these to be the ones accounting for the most variance in your matrix.
You'd also want to check out the importance metric itself - plot it, maybe. If it's flat across the entire range of variables, then no single variable is more predictive than any other. But if, as you suspect, some variables account for more variance, you should see a scree-type plot.
I love Random Forests. They are really fast and "embarrassingly parallel". They have a weakness of over-emphasizing discrete variables with lots of values (e.g. State). That doesn't seem to apply here.
EDIT:
Link to Breiman's site. A pretty good explanation.
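A rough R sketch of the workflow described above (a simulated 10 x 1000 matrix; with so few rows, treat the importances as suggestive only):
library(randomForest)
set.seed(10)
X <- as.data.frame(matrix(rnorm(10 * 1000), nrow = 10))
# Unsupervised mode: no response is supplied, so randomForest builds a synthetic
# "noise" class internally and learns to separate it from the real rows
rf <- randomForest(x = X, ntree = 500, importance = TRUE)
imp <- importance(rf, type = 2)   # type 2 = mean decrease in node impurity
head(imp[order(imp, decreasing = TRUE), , drop = FALSE])   # top-ranked variables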
51,912 | How to reduce the number of variables in cluster analysis? | You could try transposing the data and computing principal components to see which cases load on which components. It might be necessary to rotate the results to get clearer clusters, but ideally you could end up with three good components with each representing the groups you are expecting. Even if that doesn't work, you could then use the principal component scores to cluster your compounds into a smaller number of groups and select the compound closest to the centroid in that group to represent the group.
51,913 | Based only on these sensitivity and specificity values, what is the best decision method? | To make an optimal decision you need to know all relevant data about an individual (used to estimate the probability of an outcome), and the utility (cost, loss function) of making each decision. Sensitivity and specificity do not provide this information. That's why direct probability models such as the binary logistic model are so popular. For example, if you estimated that the probability of a disease given age, sex, and symptoms is 0.1 and the "cost" of a false positive equaled the "cost" of a false negative, you would act as if the person does not have the disease. Given other utilities you might make different decisions. If the utilities are unknown, you give the best estimate of the probability of the outcome to the decision maker and let her incorporate her own unspoken utilities in making an optimum decision for her.
Besides the fact that cutoffs do not apply to individuals, only to groups, individual decision making does not utilize sensitivity and specificity. For an individual we can compute $\textrm{Prob}(Y=1 | X=x)$; we don't care about $\textrm{Prob}(Y=1 | X>c)$, and an individual having $X=x$ would be quite puzzled if you gave her $\textrm{Prob}(X>c | \textrm{future unknown Y})$ when they already know $X=x$ so it is no longer a random variable.
Even when group decision making is needed, sensitivity and specificity can be bypassed. For mass marketing, for example, you can rank order individuals by the estimated probability of buying the product, to create a lift curve. This is then used to target the $k$ most likely buyers where $k$ is chosen to meet total program cost constraints.
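To make the utility argument concrete, here is a minimal R sketch (the probability and costs are invented for illustration): given an estimated risk and explicit costs for the two kinds of error, the optimal action follows from expected loss, with no sensitivity/specificity cutoff needed:
p <- 0.1         # estimated probability of disease for this individual
cost_fp <- 1     # cost of treating someone who is healthy
cost_fn <- 1     # cost of not treating someone who is diseased
expected_loss <- c(treat = (1 - p) * cost_fp, dont_treat = p * cost_fn)
expected_loss
names(which.min(expected_loss))   # "dont_treat" here; raise cost_fn and it flips
# The implied decision threshold on p is cost_fp / (cost_fp + cost_fn)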
51,914 | Based only on these sensitivity and specificity values, what is the best decision method? | As Nick has already pointed out, the answer depends on context. However, if you only had to judge based on the sensitivity and specificity values that you provided, then a good strategy is to plot the sensitivity on the y-axis and $(100\% - \text{specificity})$ on the x-axis and look for the highest, leftmost point. This point would be the sensitivity and specificity pair that you might want to choose. But please remember that domain knowledge has a huge impact on the final decision. Here is the R code that does the work for you:
R> sens <- c(66.3, 87.2, 56.4, 79.5)
R> spec <- c(74.7, 65.9, 76.4, 94.3)
R> df <- data.frame(y=sens, x=(100-spec))
R> df
y x
1 66.3 25.3
2 87.2 34.1
3 56.4 23.6
4 79.5 5.7
R> df <- df[order(df$x), ]
R> df
y x
4 79.5 5.7
3 56.4 23.6
1 66.3 25.3
2 87.2 34.1
R> plot(x = df$x, y = df$y, type = "b",
pch = 20, lty="solid", lwd = 2,
     main = "Sensitivity and (100-Specificity) Curve",
xlab = "100 - Specificity", ylab = "Sensitivity")
Therefore, you'd choose the (79.5, 94.3) sensitivity and specificity pair.
Basic idea: a high number for true positives and a low number for false positives!
51,915 | Why are the correlations in two groups less than the correlation when the groups are combined? | Here are just a couple of ideas:
Range restriction is one explanation. Check out this simulation; and this explanation.
Correlated group mean differences is another related idea. Say group 1 has a mean two standard deviations higher than group 2 on both X and Y, but that there is no correlation between X and Y within each group. When you combine the two groups there would be a strong correlation.
And just for fun, here's a little R simulation
# Setup Data
x1 <- rnorm(200, 0, 1)
x2 <- rnorm(200, 2, 1)
y1 <- rnorm(200, 0, 1)
y2 <- rnorm(200, 2, 1)
grp <- rep(1:2, each=200)
x <- data.frame(grp, x=c(x1,x2), y=c(y1,y2))
# Plot
library(lattice)
xyplot(y~x, group=grp, data=x)
# Correlations
cor(x1, y1)
cor(x2, y2)
cor(x$x, x$y)
Which produced these three correlations respectively on my run of the simulation
[1] 0.1248730
[1] 0.1027219
[1] 0.56244
And the following graph
51,916 | Why are the correlations in two groups less than the correlation when the groups are combined? | Sounds like Simpson's Paradox.
51,917 | Why are the correlations in two groups less than the correlation when the groups are combined? | During this analysis I ran into a situation where the $R^2$ for two groups was smaller in each individual group as opposed to when they are grouped together. Is there any straightforward explanation for how this could happen?
Yes, this simply means that knowing which group the observation belongs to explains part of the variation of $y$. You should consider interacting $x$ with the group variable in your modeling. For instance, in Jeromy's example, once you condition on group, $x$ is independent of $y$ --- the group is the main explanatory variable of the variation of both variables.
51,918 | Why are the correlations in two groups less than the correlation when the groups are combined? | The phenomenon relates to the geometry of linear regression analysis
This result is quite common in regression, and it reflects the fact that each new explanatory variable generally gives some additional information about the response variable, so that a model that combines variables from two other models will give a coefficient of determination at least as high as either of those models. That is, if you have two separate models with coefficients of determination $R_1^2$ and $R_2^2$ (using different explanatory variables) and the combined model has coefficient of determination $R^2$ then you have:
$$R^2 \geqslant \max(R_1^2,R_2^2).$$
Usually the coefficient of determination in the combined model is no greater than the sum of the parts, so you will usually have:
$$R^2 \leqslant R_1^2+R_2^2, \quad \quad \quad \quad \quad (\text{Common case - no enhancement}),$$
but it is actually possible that it may even be higher than this
$$R^2 > R_1^2+R_2^2, \quad \quad \quad \quad \quad (\text{Rare case - enhancement}).$$
You can read more about the geometric properties of regression models and the resulting effects of combining variables in O'Neill (2019). That paper discusses the relationship between the coefficient of determination in the combined model and the coefficients of determination in the individual models using the same explanatory variables. The paper also discusses the phenomenon of "enhancement", where the coefficient of determination in the combined model is more than the sum of its parts. These relationships depend on the eigendecomposition of the correlation matrix for the set of variables in the regression, so there are some quite complicated relationships at issue.
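A quick R illustration of the first inequality (simulated data; this is a generic sketch, not taken from O'Neill 2019): the combined model's $R^2$ is at least the larger of the two single-predictor $R^2$ values:
set.seed(11)
x1 <- rnorm(100); x2 <- rnorm(100)
y  <- x1 + 0.5 * x2 + rnorm(100)
r2 <- function(fit) summary(fit)$r.squared
c(R2_x1   = r2(lm(y ~ x1)),
  R2_x2   = r2(lm(y ~ x2)),
  R2_both = r2(lm(y ~ x1 + x2)))   # R2_both >= max(R2_x1, R2_x2)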
51,919 | R packages for seasonality analysis [closed] | You don't need to install any packages because this is possible with base-R functions. Have a look at the arima function.
This is a basic function of Box-Jenkins analysis, so you should consider reading one of the R time series textbooks for an overview; my favorite is Shumway and Stoffer, "Time Series Analysis and Its Applications: With R Examples".
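For example, a seasonal ARIMA fit with base R's arima, using the built-in AirPassengers series (the (0,1,1)(0,1,1)[12] "airline" specification is just a common illustrative choice):
fit <- arima(log(AirPassengers), order = c(0, 1, 1),
             seasonal = list(order = c(0, 1, 1), period = 12))
fit
tsdiag(fit)                       # residual diagnostics
predict(fit, n.ahead = 12)$pred   # forecasts on the log scale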
51,920 | R packages for seasonality analysis [closed] | Try using the stl() function for time series decomposition. It provides a very flexible method for extracting a seasonal component from a time series.
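A short illustration on the built-in AirPassengers series (any ts object with a declared frequency works):
dec <- stl(log(AirPassengers), s.window = "periodic")  # seasonal, trend, remainder
plot(dec)
head(dec$time.series)   # the extracted seasonal component is the first column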
51,921 | R packages for seasonality analysis [closed] | I built/published an R package named seas for my M.Sc. work a few years ago. The package is good for discretizing a time series over years into seasonal divisions, such as months or 11-day periods. These divisions can then be applied to continuous variables (e.g., temperature, water levels) or discontinuous variables (e.g., precipitation, groundwater recharge rates).
51,922 | If all moments of a non-negative random variable X are larger than those of Y, is P(X>x) larger than P(Y>x)? | The conclusion does not follow. The family of distributions described at How is the kurtosis of a distribution related to the geometry of the density function? gives a counterexample. These are densities $f_{k,s}$ all of which have identical moments. If we were, then, to shift one of them by some positive amount, all its moments would (strictly) increase. Here is a plot of their distribution functions $F_{k,s}$ (evaluated numerically):
Clearly neither dominates the other, even though all the moments of the red distribution exceed those of the black distribution.
For those who would like to experiment with this family, here is the R code used to create these plots.
f <- function(x, k=0, s=0) {
ifelse(x <= 0, 0, dnorm(log(x)) * (1 + s * sin(2 * k * pi * log(x))) / x)
}
ff <- Vectorize(function(x, k=0, s=0, ...) {
integrate(\(y) f(y, k, s), 0, x, ...)$value
})
curve(ff(x), 0, 4, lwd = 2, ylim = 0:1,
ylab = "Probability", xlab = expression(italic(x)), cex.lab = 1.25)
curve(ff(x - 0.01, 1, 3/4), add = TRUE, lwd = 2, col = "Red")
51,923 | If all moments of a non-negative random variable X are larger than those of Y, is P(X>x) larger than P(Y>x)? | The higher moments say relatively little about the behaviour of the distribution at small values. The tails have a lot of influence on a distribution.
You can see this in particular with distributions that have infinite higher moments.
For example compare a Frechet distribution with location = 1, scale = 1 and shape = 1.5 (this has finite mean but infinite variance) to an exponential distribution with equal mean. | If all moments of a non-negative random variable X are larger than those of Y, is P(X>x) larger than | The higher moments say relatively little about the behaviour of the distribution at small values. The tails have a lot of influence on a distribution.
You can see this in particular with distributions | If all moments of a non-negative random variable X are larger than those of Y, is P(X>x) larger than P(Y>x)?
The higher moments say relatively little about the behaviour of the distribution at small values. The tails have a lot of influence on a distribution.
You can see this in particular with distributions that have infinite higher moments.
For example compare a Frechet distribution with location = 1, scale = 1 and shape = 1.5 (this has finite mean but infinite variance) to an exponential distribution with equal mean. | If all moments of a non-negative random variable X are larger than those of Y, is P(X>x) larger than
The higher moments say relatively little about the behaviour of the distribution at small values. The tails have a lot of influence on a distribution.
You can see this in particular with distributions |
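For readers who want to see the Fréchet-versus-exponential comparison concretely, here is a rough R sketch; the Fréchet survival function is coded by hand so that no extra package is assumed, and the evaluation points are arbitrary.
# Frechet(location = 1, scale = 1, shape = 1.5) versus an exponential with the same
# mean: the two survival curves cross, with the exponential putting more mass at
# moderate x and the Frechet having a far heavier extreme tail.
S_frechet <- function(x, m = 1, s = 1, a = 1.5)
  ifelse(x <= m, 1, 1 - exp(-((x - m) / s)^(-a)))
mean_frechet <- 1 + gamma(1 - 1/1.5)   # location + scale * Gamma(1 - 1/shape)
S_exp <- function(x) pexp(x, rate = 1 / mean_frechet, lower.tail = FALSE)
x <- c(2, 5, 10, 50, 100)
round(rbind(frechet = S_frechet(x), exponential = S_exp(x)), 5)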
51,924 | Estimating the parameter $\beta$ | To estimate the $\beta$ by the maximum likelihood method, let $Y_1,\ldots, Y_n$ be the sample of lifetimes, with $Y_i\sim \text{Exp}(\beta/s_i)$, s.t. $E(Y_i) = \beta/s_i$ and independently for each $i$, where $s_i$ is the monitor brightness.
Then the likelihood function for $\beta$ is given by
$$
L(\beta) = \prod_{i=1}^n (s_i/\beta) e^{-\frac{1}{\beta}s_i Y_i} \propto \beta^{-n} e^{-\frac{1}{\beta}\sum_is_iY_i}.
$$
I leave the rest to you... | Estimating the parameter $\beta$ | To estimate the $\beta$ by the maximum likelihood method, let $Y_1,\ldots, Y_n$ be the sample of lifetimes, with $Y_i\sim \text{Exp}(\beta/s_i)$, s.t. $E(Y_i) = \beta/s_i$ and independently for each $ | Estimating the parameter $\beta$
To estimate the $\beta$ by the maximum likelihood method, let $Y_1,\ldots, Y_n$ be the sample of lifetimes, with $Y_i\sim \text{Exp}(\beta/s_i)$, s.t. $E(Y_i) = \beta/s_i$ and independently for each $i$, where $s_i$ is the monitor brightness.
Then the likelihood function for $\beta$ is given by
$$
L(\beta) = \prod_{i=1}^n (s_i/\beta) e^{-\frac{1}{\beta}s_i Y_i} \propto \beta^{-n} e^{-\frac{1}{\beta}\sum_is_iY_i}.
$$
I leave the rest to you... | Estimating the parameter $\beta$
To estimate the $\beta$ by the maximum likelihood method, let $Y_1,\ldots, Y_n$ be the sample of lifetimes, with $Y_i\sim \text{Exp}(\beta/s_i)$, s.t. $E(Y_i) = \beta/s_i$ and independently for each $ |
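To make the omitted step concrete, here is a small simulated check (all numbers are made up) that the likelihood above is maximized at $\hat{\beta} = \frac{1}{n}\sum_i s_i Y_i$; optimize() is used only as an independent numerical confirmation.
set.seed(42)
n <- 200; beta <- 3
s <- runif(n, 0.5, 2)            # hypothetical brightness values
y <- rexp(n, rate = s / beta)    # lifetimes with mean beta / s
mean(s * y)                      # closed-form MLE, should be close to beta = 3
optimize(function(b) sum(dexp(y, rate = s / b, log = TRUE)),
         interval = c(0.1, 20), maximum = TRUE)$maximum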
51,925 | Estimating the parameter $\beta$ | $
L(\beta) = P(\mathrm{Data} \mid \beta) = \prod_i f(x_i)
$ and $\lambda = \frac{1}{\mu} = \frac{s}{\beta}$
Log is increasing so maximizing log is same as maximizing likelihood:
$\ell(\beta) = \sum \log f(x_i).$
$$
\log f(x_i) = \log\left(\lambda e^{-\lambda x_i}\right) = \log s - \log \beta - \frac{s}{\beta}x_i
$$
and
$$
\frac{d}{d \beta} \log f(x_i) = -\frac{1}{\beta} + \frac{s}{\beta^2}x_i.
$$
So,
$$
\frac{d}{d \beta}\ell(\beta) = -\frac{n}{\beta} + \frac{s}{\beta^2}\sum_i x_i = 0
$$
gives
$$
\hat{\beta} = \frac{s}{n}\sum_i x_i = s \bar{x}.
$$
Note: at $\hat{\beta}$ the second derivative of $\ell$ equals $-n/\hat{\beta}^2 < 0$ (since $n, \hat{\beta}>0$), so this stationary point is indeed a maximum. | Estimating the parameter $\beta$ | $
L(\beta) = P(\mathrm{Data} \mid \beta) = \prod_i f(x_i)
$ and $\lambda = \frac{1}{\mu} = \frac{s}{\beta}$
Log is increasing so maximizing log is same as maximizing likelihood:
$\ell(\beta) = \sum \l | Estimating the parameter $\beta$
$
L(\beta) = P(\mathrm{Data} \mid \beta) = \prod_i f(x_i)
$ and $\lambda = \frac{1}{\mu} = \frac{s}{\beta}$
Log is increasing so maximizing log is same as maximizing likelihood:
$\ell(\beta) = \sum \log f(x_i).$
$$
\log f(x_i) = \log\left(\lambda e^{-\lambda x_i}\right) = \log s - \log \beta - \frac{s}{\beta}x_i
$$
and
$$
\frac{d}{d \beta} \log f(x_i) = -\frac{1}{\beta} + \frac{s}{\beta^2}x_i.
$$
So,
$$
\frac{d}{d \beta}\ell(\beta) = -\frac{n}{\beta} + \frac{s}{\beta^2}\sum_i x_i = 0
$$
gives
$$
\hat{\beta} = \frac{s}{n}\sum_i x_i = s \bar{x}.
$$
Note: at $\hat{\beta}$ the second derivative of $\ell$ equals $-n/\hat{\beta}^2 < 0$ (since $n, \hat{\beta}>0$), so this stationary point is indeed a maximum. | Estimating the parameter $\beta$
$
L(\beta) = P(\mathrm{Data} \mid \beta) = \prod_i f(x_i)
$ and $\lambda = \frac{1}{\mu} = \frac{s}{\beta}$
Log is increasing so maximizing log is same as maximizing likelihood:
$\ell(\beta) = \sum \l |
51,926 | Estimating the parameter $\beta$ | Aside from using a direct derivation, which leads to a closed-form expression, you can also do this with a generalized linear model; a description is given in the question
Fitting exponential (regression) model by MLE?
With that approach you have more flexibility (e.g. change the function $\mu(s)$)
Example
In R you would use the function glm with the family gamma and specify a dispersion parameter dispersion=1 (the exponential distribution is a special case of the gamma distribution for which the dispersion parameter is equal to 1).
### generate and plot data
set.seed(1)
n = 100
x = runif(n,0,10)
mu = 2/x
rate = 1/mu
y = rexp(n,rate)
plot(x,y)
#### fit glm model with Gamma distribution that has k=1 (ie. the exponential distribution)
fit = glm(y ~ 0+x, family = Gamma())
summary(fit,dispersion=1)
# This gives as result
# a rate parameter of lambda = 0.49868 x
# which relates to
# a mean that is mu = 1/lambda = 2.005294/x | Estimating the parameter $\beta$ | Assise from using a direct derivation, leading to a closed form expression, you can do this with a generalized linear model a description is given in the question
Fitting exponential (regression) mode | Estimating the parameter $\beta$
Aside from using a direct derivation, which leads to a closed-form expression, you can also do this with a generalized linear model; a description is given in the question
Fitting exponential (regression) model by MLE?
With that approach you have more flexibility (e.g. change the function $\mu(s)$)
Example
In R you would use the function glm with the family gamma and specify a dispersion parameter dispersion=1 (the exponential distribution is a special case of the gamma distribution for which the dispersion parameter is equal to 1).
### generate and plot data
set.seed(1)
n = 100
x = runif(n,0,10)
mu = 2/x
rate = 1/mu
y = rexp(n,rate)
plot(x,y)
#### fit glm model with Gamma distribution that has k=1 (ie. the exponential distribution)
fit = glm(y ~ 0+x, family = Gamma())
summary(fit,dispersion=1)
# This gives as result
# a rate parameter of lambda = 0.49868 x
# which relates to
# a mean that is mu = 1/lambda = 2.005294/x | Estimating the parameter $\beta$
Assise from using a direct derivation, leading to a closed form expression, you can do this with a generalized linear model a description is given in the question
Fitting exponential (regression) mode |
51,927 | How to apply the formula for Shannon entropy to a 4-sided die? | The distribution of each outcome would be $[1/4,1/4,1/4,1/4]$, i.e. $p_i=1/4$ for each $i\in \{1,2,3,4\}$. This yields $$H(p)=-4\times \frac{1}{4}\times\log_2\frac{1}{4}=2$$
As pointed out in the comments, the answer for an N-sided fair die would be $\log N$:
$$H(p)=-\sum_{i=1}^N\frac{1}{N} \log\frac{1}{N}=-\log \frac{1}{N}=\log N$$
This is true for log with any base, depending on the entropy definition used. In this particular problem, it is base $2$. | How to apply the formula for Shannon entropy to a 4-sided die? | The distribution of each outcome would be $[1/4,1/4/,1/4,1/4]$, i.e. $p_i=1/4$ for each $i\in \{1,2,3,4\}$. This yields $$H(p)=-4\times \frac{1}{4}\times\log_2\frac{1}{4}=2$$
As pointed out in the com | How to apply the formula for Shannon entropy to a 4-sided die?
The distribution of each outcome would be $[1/4,1/4,1/4,1/4]$, i.e. $p_i=1/4$ for each $i\in \{1,2,3,4\}$. This yields $$H(p)=-4\times \frac{1}{4}\times\log_2\frac{1}{4}=2$$
As pointed out in the comments, the answer for an N-sided fair die would be $\log N$:
$$H(p)=-\sum_{i=1}^N\frac{1}{N} \log\frac{1}{N}=-\log \frac{1}{N}=\log N$$
This is true for log with any base, depending on the entropy definition used. In this particular problem, it is base $2$. | How to apply the formula for Shannon entropy to a 4-sided die?
The distribution of each outcome would be $[1/4,1/4/,1/4,1/4]$, i.e. $p_i=1/4$ for each $i\in \{1,2,3,4\}$. This yields $$H(p)=-4\times \frac{1}{4}\times\log_2\frac{1}{4}=2$$
As pointed out in the com |
51,928 | How to apply the formula for Shannon entropy to a 4-sided die? | I think it's worth mentioning that not only does the formula reduce to $\log N$ ($\log_2N$ if we're specifically talking about entropy measured in bits) in the case of $N$ different equally likely possibilities, but the formula for different probabilities is derived from the equally likely possibilities case, rather than the other way around. Shannon entropy is the number of bits it takes, on average, to specify a state. If all the states are equally likely, we can drop the "on average" part. Then we have that $S$ bits can specify $2^S$ states, or $\log_2 N$ bits are required to specify $N$ states. Specifying $4$ different states can clearly be done with $2$ bits; there are $4$ different $2$-digit binary numbers.
As for cases where the probabilities are different, suppose we have one state with $p=0.5$ and two states with $p = 0.25$. Then we can assign the first state the label $0$ and the other two states $10$ and $11$. Half the time we're using $1$ bit, and half the time we're using $2$ bits, so we have $1.5$ bits of entropy. It's more complicated when the probabilities aren't powers of $2$, but we can still give a similar calculation in the limit. That is, if we have a string of length $l$ consisting of independently random letters, and individual letters have an entropy of $S$, then we can on average represent the string with less than $S(l+1)$ bits, and the bits per letter goes to $S$ in the limit. | How to apply the formula for Shannon entropy to a 4-sided die? | I think it's worth mentioning that not only does the formula reduce to $\log N$ ($\log_2N$ if we're specifically talking about entropy measured in bits) in the case of $N$ different equally likely pos | How to apply the formula for Shannon entropy to a 4-sided die?
I think it's worth mentioning that not only does the formula reduce to $\log N$ ($\log_2N$ if we're specifically talking about entropy measured in bits) in the case of $N$ different equally likely possibilities, but the formula for different probabilities is derived from the equally likely possibilities case, rather than the other way around. Shannon entropy is the number of bits it takes, on average, to specify a state. If all the states are equally likely, we can drop the "on average" part. Then we have that $S$ bits can specify $2^S$ states, or $\log_2 N$ bits are required to specify $N$ states. Specifying $4$ different states can clearly be done with $2$ bits; there are $4$ different $2$-digit binary numbers.
As for cases where the probabilities are different, suppose we have one state with $p=0.5$ and two states with $p = 0.25$. Then we can assign the first state the label $0$ and the other two states $10$ and $11$. Half the time we're using $1$ bit, and half the time we're using $2$ bits, so we have $1.5$ bits of entropy. It's more complicated when the probabilities aren't powers of $2$, but we can still give a similar calculation in the limit. That is, if we have a string of length $l$ consisting of independently random letters, and individual letters have an entropy of $S$, then we can on average represent the string with less than $S(l+1)$ bits, and the bits per letter goes to $S$ in the limit. | How to apply the formula for Shannon entropy to a 4-sided die?
I think it's worth mentioning that not only does the formula reduce to $\log N$ ($\log_2N$ if we're specifically talking about entropy measured in bits) in the case of $N$ different equally likely pos |
51,929 | How to apply the formula for Shannon entropy to a 4-sided die? | The sum is over all possible outcomes, and $p_i$ are the probabilities for each. There are 4 outcomes, so the sum will run over each one of those - that is, $i$ will stand for a face of the die, and what will go in for $p_i$ will be the probability for the die to land with that face sticking up. For a fair die, that will be $\frac{1}{4}$ and then you will be right, the entropy will be $4 \left(-\frac{1}{4} \lg \frac{1}{4}\right) = \lg 4$ or 2 bits. | How to apply the formula for Shannon entropy to a 4-sided die? | The sum is over all possible outcomes, and $p_i$ are the probabilities for each. There are 4 outcomes, so the sum will run over each one of those - that is, $i$ will stand for a face of the die, and w | How to apply the formula for Shannon entropy to a 4-sided die?
The sum is over all possible outcomes, and $p_i$ are the probabilities for each. There are 4 outcomes, so the sum will run over each one of those - that is, $i$ will stand for a face of the die, and what will go in for $p_i$ will be the probability for the die to land with that face sticking up. For a fair die, that will be $\frac{1}{4}$ and then you will be right, the entropy will be $4 \left(-\frac{1}{4} \lg \frac{1}{4}\right) = \lg 4$ or 2 bits. | How to apply the formula for Shannon entropy to a 4-sided die?
The sum is over all possible outcomes, and $p_i$ are the probabilities for each. There are 4 outcomes, so the sum will run over each one of those - that is, $i$ will stand for a face of the die, and w |
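A two-line R check of the worked examples in the answers above (base-2 logarithms, so the entropies are in bits):
H <- function(p) -sum(p * log2(p))   # Shannon entropy in bits
H(rep(1/4, 4))                       # fair 4-sided die: 2 bits
H(c(0.5, 0.25, 0.25))                # the 0 / 10 / 11 coding example: 1.5 bits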
51,930 | What is an acceptable value of square loss in machine learning (using mxnet gluon's square loss function)? | Your constant prediction of $0.5$ is a great benchmark. Meteorological forecasters would call it a "climatological forecast", i.e., one that only relies on the overall and unconditional distribution of your target variable.
Any other model should improve on this benchmark. If it doesn't, you are better off with the simple benchmark.
"Of any use" implies, well, putting predictions to use. In some applications, even tiny improvements over trivial benchmarks can be valuable (e.g., in stock price predictions). In other applications, the improvement would need to be larger for us to use the model over the benchmark.
Whether you should switch to a softmax depends on the quality of your final output. If a model with softmax yields better predictions, then by all means, use it. | What is an acceptable value of square loss in machine learning (using mxnet gluon's square loss func | Your constant prediction of $0.5$ is a great benchmark. Meteorological forecasters would call it a "climatological forecast", i.e., one that only relies on the overall and unconditional distribution o | What is an acceptable value of square loss in machine learning (using mxnet gluon's square loss function)?
Your constant prediction of $0.5$ is a great benchmark. Meteorological forecasters would call it a "climatological forecast", i.e., one that only relies on the overall and unconditional distribution of your target variable.
Any other model should improve on this benchmark. If it doesn't, you are better off with the simple benchmark.
"Of any use" implies, well, putting predictions to use. In some applications, even tiny improvements over trivial benchmarks can be valuable (e.g., in stock price predictions). In other applications, the improvement would need to be larger for us to use the model over the benchmark.
Whether you should switch to a softmax depends on the quality of your final output. If a model with softmax yields better predictions, then by all means, use it. | What is an acceptable value of square loss in machine learning (using mxnet gluon's square loss func
Your constant prediction of $0.5$ is a great benchmark. Meteorological forecasters would call it a "climatological forecast", i.e., one that only relies on the overall and unconditional distribution o |
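As a sketch of what beating the benchmark looks like in code, here is the comparison with simulated placeholders (the outcomes and the model predictions below are not the asker's data):
set.seed(1)
y     <- rbinom(1000, 1, 0.5)          # 50/50 binary outcomes
p_hat <- runif(1000)                   # stand-in for a model's predicted probabilities
mse_benchmark <- mean((y - 0.5)^2)     # constant "climatological" forecast; 0.25 here
mse_model     <- mean((y - p_hat)^2)
c(benchmark = mse_benchmark, model = mse_model,
  skill = 1 - mse_model / mse_benchmark)   # skill > 0 means the model beats the benchmark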
51,931 | What is an acceptable value of square loss in machine learning (using mxnet gluon's square loss function)? | It’s hard to say what constitutes acceptable performance. For instance, it might sound like awesome performance if your classifier gets $90\%$ of its predictions right, but if the data are the MNIST handwritten digits, such performance is rather pedestrian.
(Note, however, that “classification accuracy” is more problematic than it first appears.)
Being able to beat some baseline is a good start, however, and it mimics $R^2$ in linear regression. In linear regression, the goal is to predict what value you would expect, given some values of the features. The most naïve way of guessing such a value that is sensible is to guess the mean of your $y$ every time. If you can’t do better than that, then why is your boss paying you when she can call np.mean(y) in Python and do better? (This is the “overall and unconditional distribution” in Stephan Kolassa’s answer.)
What you propose uses the same idea. If you know there is a 50/50 chance of each outcome, for your model to be worth using, your model ought to be able to outperform randomly guessing based on that 50/50 distribution of labels.
In the kind of problem you are solving, there are many options for analogues of $R^2$. It is not clear how large any of them should be for your model to satisfy business needs, as this depends on the problem and business requirements (customer demands, regulator demands, investor demands, etc). However, if they show your model is outperformed by a naïve model that always guesses the same value, then you are not making a strong case for your modeling skills.
Links of possible interest:
Why to put variance around the mean line to the definition of $R^2$? By what is this particular choice dictated?
Why getting very high values for MSE/MAE/MAPE when R2 score is very good
Why does putting a 1/2 in front of the squared error make the math easier? (Data Science) | What is an acceptable value of square loss in machine learning (using mxnet gluon's square loss func | It’s hard to say what constitutes acceptable performance. For instance, it might sound like awesome performance if your classifier gets $90\%$ of its predictions right, but if the data are the MNIST h | What is an acceptable value of square loss in machine learning (using mxnet gluon's square loss function)?
It’s hard to say what constitutes acceptable performance. For instance, it might sound like awesome performance if your classifier gets $90\%$ of its predictions right, but if the data are the MNIST handwritten digits, such performance is rather pedestrian.
(Note, however, that “classification accuracy” is more problematic than it first appears.)
Being able to beat some baseline is a good start, however, and it mimics $R^2$ in linear regression. In linear regression, the goal is to predict what value you would expect, given some values of the features. The most naïve way of guessing such a value that is sensible is to guess the mean of your $y$ every time. If you can’t do better than that, then why is your boss paying you when she can call np.mean(y) in Python and do better? (This is the “overall and unconditional distribution” in Stephan Kolassa’s answer.)
What you propose uses the same idea. If you know there is a 50/50 chance of each outcome, for your model to be worth using, your model ought to be able to outperform randomly guessing based on that 50/50 distribution of labels.
In the kind of problem you are solving, there are many options for analogues of $R^2$. It is not clear how large any of them should be for your model to satisfy business needs, as this depends on the problem and business requirements (customer demands, regulator demands, investor demands, etc). However, if they show your model is outperformed by a naïve model that always guesses the same value, then you are not making a strong case for your modeling skills.
Links of possible interest:
Why to put variance around the mean line to the definition of $R^2$? By what is this particular choice dictated?
Why getting very high values for MSE/MAE/MAPE when R2 score is very good
Why does putting a 1/2 in front of the squared error make the math easier? (Data Science) | What is an acceptable value of square loss in machine learning (using mxnet gluon's square loss func
It’s hard to say what constitutes acceptable performance. For instance, it might sound like awesome performance if your classifier gets $90\%$ of its predictions right, but if the data are the MNIST h |
51,932 | What is an acceptable value of square loss in machine learning (using mxnet gluon's square loss function)? | In addition to the other answers, I'll add that usually with machine learning we fit the model by finding the weights that minimise the loss, but we then usually evaluate the model using a different metric that has more "intuitive" meaning (and to be robust this should be on a different dataset that wasn't used to train the model, i.e. a train-test split).
For example with classification we often use Cross Entropy as the loss, however when we evaluate the model we use either accuracy, F1 score or some other metric to actually decide if our model is "good enough". For regression problems such as the one you're working on, we more commonly use squared error loss, and it's fairly common to use something like r-squared as a metric (although it's certainly not a perfect one). | What is an acceptable value of square loss in machine learning (using mxnet gluon's square loss func | In addition to the other answers, I'll add that usually with machine learning we fit the model by finding the weights that minimise the loss, but we then usually evaluate the model using a different m | What is an acceptable value of square loss in machine learning (using mxnet gluon's square loss function)?
In addition to the other answers, I'll add that usually with machine learning we fit the model by finding the weights that minimise the loss, but we then usually evaluate the model using a different metric that has more "intuitive" meaning (and to be robust this should be on a different dataset that wasn't used to train the model, i.e. a train-test split).
For example with classification we often use Cross Entropy as the loss, however when we evaluate the model we use either accuracy, F1 score or some other metric to actually decide if our model is "good enough". For regression problems such as the one you're working on, we more commonly use squared error loss, and it's fairly common to use something like r-squared as a metric (although it's certainly not a perfect one). | What is an acceptable value of square loss in machine learning (using mxnet gluon's square loss func
In addition to the other answers, I'll add that usually with machine learning we fit the model by finding the weights that minimise the loss, but we then usually evaluate the model using a different m |
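A minimal sketch of that fit-on-one-loss, evaluate-on-another-metric workflow, using simulated data and ordinary least squares as a stand-in model:
set.seed(2)
d <- data.frame(x = runif(200))
d$y <- 0.3 + 0.5 * d$x + rnorm(200, sd = 0.2)
train <- d[1:100, ]; test <- d[101:200, ]
fit  <- lm(y ~ x, data = train)               # fitted by minimising squared error
pred <- predict(fit, newdata = test)
1 - sum((test$y - pred)^2) / sum((test$y - mean(train$y))^2)   # held-out R-squared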
51,933 | GAM and multiple continuous-continuous interactions/tensor smooths | Nice question! Here are some hints to get you started. Your questions are not numbered well (question 2 appears twice), so you may want to renumber them to avoid confusion.
Question 1
The general rule is that you would use an isotropic smooth s(x, cov1) if x and cov1 had the same units and you would use an anisotropic smooth te(x, cov1) if x and cov1 had different units. See Simon Wood's slides at https://www.maths.ed.ac.uk/~swood34/talks/gam-mgcv.pdf.
An isotropic bivariate smooth would use the same degree of smoothness along both of its dimensions; an anisotropic bivariate smooth would use different degrees of smoothness along its two dimensions.
Question 2
Yes, you can include all 2-way interactions between x and your 3 covariates in the same model using te() terms (and presumably test whether you actually need all of them or just some of them in the model).
In principle, you can fit all of these models to your data:
y ~ te(x,cov1) + blocking_var
y ~ te(x,cov2) + blocking_var
y ~ te(x,cov3) + blocking_var
y ~ te(x,cov1) + te(x, cov2) + blocking_var
y ~ te(x,cov1) + te(x, cov3) + blocking_var
y ~ te(x,cov2) + te(x, cov3) + blocking_var
y ~ te(x,cov1) + te(x, cov2) + te(x, cov3) + blocking_var
and compare them to see which one receives most support from your data. When comparing them, make sure all models use the same number of observations so that the comparison is fair (which would not be the case if some of your covariates included missing data) and are fitted using the option method = "ML", as clarified by Dr. Gavin Simpson in his comments.
The model comparison can rely on comparing the AIC or BIC values, the adjusted R squared, the model diagnostics, etc.
Note that if it is important to control for all 3 covariates, cov1 through cov3, in your model, you would consider only the last of the above models and forego the model comparison.
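A minimal sketch of that comparison workflow in mgcv; here dat is a hypothetical data frame containing y, x, cov1, cov2 and blocking_var with the meanings used in the formulas above.
library(mgcv)
m1 <- gam(y ~ te(x, cov1) + blocking_var, data = dat, method = "ML")
m2 <- gam(y ~ te(x, cov1) + te(x, cov2) + blocking_var, data = dat, method = "ML")
AIC(m1, m2)   # both fitted with method = "ML", so the comparison is legitimate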
Question 3
The help file for the te function states the following:
"te produces a full tensor product smooth, while ti produces a tensor product interaction, appropriate when the main effects (and any lower interactions) are also present."
There are differences between te and ti with respect to how you specify interactions. For example, you would specify a 2-way smooth interaction between x and cov1 like this using te:
y ~ te(x,cov1) + blocking_var
and like this using ti:
y ~ ti(x) + ti(cov1) + ti(x, cov1) + blocking_var
The ti formulation assumes that you can split the effect of x on y into an effect purely due to x (which will be captured by ti(x)) and an effect due to the combination of x and cov1 (which will be captured by ti(x, cov1)). The te formulation implies that such a split may not necessarily be possible (though you could check to see if it is).
See Dr. Gavin Simpson's answer on this forum: R/mgcv: Why do te() and ti() tensor products produce different surfaces?.
Note that an alternative way to formulate the model below which includes ti terms:
y ~ ti(x) + ti(cov1) + ti(x, cov1) + blocking_var
is this:
y ~ s(x) + s(cov1) + ti(x, cov1) + blocking_var
Indirectly, this suggests that you should NOT include s(x) and/or s(cov1) in a model of the form:
y ~ te(x, cov1) + blocking_var
Addendum
Collinearity has an extension to GAM models which is called concurvity. See the help for the concurvity() function in the mgcv
package for details on how to check for the presence of concurvity in your model(s).
With cov2 and cov3 being defined as derivatives of x, you might need to worry about a phenomenon called mathematical coupling. | GAM and multiple continuous-continuous interactions/tensor smooths | Nice question! Here are some hints to get you started. Your questions are not numbered well (question 2 appears twice), so you may want to renumber them to avoid confusion.
Question 1
The general rul | GAM and multiple continuous-continuous interactions/tensor smooths
Nice question! Here are some hints to get you started. Your questions are not numbered well (question 2 appears twice), so you may want to renumber them to avoid confusion.
Question 1
The general rule is that you would use an isotropic smooth s(x, cov1) if x and cov1 had the same units and you would use an anisotropic smooth te(x, cov1) if x and cov1 had different units. See Simon Wood's slides at https://www.maths.ed.ac.uk/~swood34/talks/gam-mgcv.pdf.
An isotropic bivariate smooth would use the same degree of smoothness along both of its dimensions; an anisotropic bivariate smooth would use different degrees of smoothness along its two dimensions.
Question 2
Yes, you can include all 2-way interactions between x and your 3 covariates in the same model using te() terms (and presumably test whether you actually need all of them or just some of them in the model).
In principle, you can fit all of these models to your data:
y ~ te(x,cov1) + blocking_var
y ~ te(x,cov2) + blocking_var
y ~ te(x,cov3) + blocking_var
y ~ te(x,cov1) + te(x, cov2) + blocking_var
y ~ te(x,cov1) + te(x, cov3) + blocking_var
y ~ te(x,cov2) + te(x, cov3) + blocking_var
y ~ te(x,cov1) + te(x, cov2) + te(x, cov3) + blocking_var
and compare them to see which one receives most support from your data. When comparing them, make sure all models use the same number of observations so that the comparison is fair (which would not be the case if some of your covariates included missing data) and are fitted using the option method = "ML", as clarified by Dr. Gavin Simpson in his comments.
The model comparison can rely on comparing the AIC or BIC values, the adjusted R squared, the model diagnostics, etc.
Note that if it is important to control for all 3 covariates, cov1 through cov3, in your model, you would consider only the last of the above models and forego the model comparison.
Question 3
The help file for the te function states the following:
"te produces a full tensor product smooth, while ti produces a tensor product interaction, appropriate when the main effects (and any lower interactions) are also present."
There are differences between te and ti with respect to how you specify interactions. For example, you would specify a 2-way smooth interaction between x and cov1 like this using te:
y ~ te(x,cov1) + blocking_var
and like this using ti:
y ~ ti(x) + ti(cov1) + ti(x, cov1) + blocking_var
The ti formulation assumes that you can split the effect of x on y into an effect purely due to x (which will be captured by ti(x)) and an effect due to the combination of x and cov1 (which will be captured by ti(x, cov1)). The te formulation implies that such a split may not necessarily be possible (though you could check to see if it is).
See Dr. Gavin Simpson's answer on this forum: R/mgcv: Why do te() and ti() tensor products produce different surfaces?.
Note that an alternative way to formulate the model below which includes ti terms:
y ~ ti(x) + ti(cov1) + ti(x, cov1) + blocking_var
is this:
y ~ s(x) + s(cov1) + ti(x, cov1) + blocking_var
Indirectly, this suggests that you should NOT include s(x) and/or s(cov1) in a model of the form:
y ~ te(x, cov1) + blocking_var
Addendum
Collinearity has an extension to GAM models which is called concurvity. See the help for the concurvity() function in the mgcv
package for details on how to check for the presence of concurvity in your model(s).
With cov2 and cov3 being defined as derivatives of x, you might need to worry about a phenomenon called mathematical coupling. | GAM and multiple continuous-continuous interactions/tensor smooths
Nice question! Here are some hints to get you started. Your questions are not numbered well (question 2 appears twice), so you may want to renumber them to avoid confusion.
Question 1
The general rul |
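Regarding the Addendum above, concurvity for a fitted mgcv model can be checked in one line (reusing the hypothetical m1 from the earlier sketch):
concurvity(m1, full = TRUE)   # values close to 1 signal problematic concurvity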
51,934 | GAM and multiple continuous-continuous interactions/tensor smooths | Q1
Almost invariably yes, you should use te() for smooth interactions. The only situation where you might not want to do that is where you are smoothing in multiple dimensions but everything is in the same units, like space. In such circumstances an isotropic smooth may make sense via s().
Q2
My personal belief is that you should fit the full model and do any required inference on that full model. As you have the same term appearing in multiple smooths, you should probably decompose those smooths into main effects and interactions via:
y ~ s(x) + s(cov1) + s(cov2) + s(cov3) + ## main effects
ti(x,cov1) + ti(x,cov2) + ti(x,cov3) + ## interactions
blocking_var ## other stuff
If you want to do selection on that model, you could add select = TRUE to add an extra penalty to all smooths such that they can be penalised out of the model entirely, or change the basis to one of the shrinkage smooths via the bs argument to s(), te() etc., e.g. s(x, bs = 'ts') for shrinkage thin plate splines and te(x,z, bs = c("cs", "ts")) for a tensor product of x and z with a cubic regression shrinkage spline marginal basis for x and a shrinkage thin plate spline marginal basis for z.
If you are going to fit and compare nested models explicitly, make sure you use method = "ML", because the corrections used to compute the REML criterion use information from the fixed effects; if your models include different fixed effects terms — and the ones in the example do — you are making different corrections to the likelihood to compute REML, and that renders the REML likelihoods incomparable.
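For concreteness, the kind of call being described might look like the sketch below; dat and the variable names are placeholders, and select = TRUE adds the extra penalty mentioned above.
library(mgcv)
m_full <- gam(y ~ s(x) + s(cov1) + s(cov2) + s(cov3) +
                  ti(x, cov1) + ti(x, cov2) + ti(x, cov3) +
                  blocking_var,
              data = dat, method = "ML", select = TRUE)
summary(m_full)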
Q3
You include single terms with s() and pure interactions with ti(). While {mgcv} will try to make the models identifiable if you do things like s(x) + te(x, z), it is better if you can decompose the effects manually as that will give you the most stable fit (by "that" I mean answer to Q2).
Q4
te(x, z) includes both the smooth main effects of x and z, plus their smooth interaction. ti() is just the pure smooth interaction of x and z.
If you are familiar with R's linear modelling formulas, then
te() is like x*z or x + z + x:z if you want to write it all out
ti() is like x:z only
Q5
plot.gam() is producing partial effect plots for each term separately. vis.gam() is showing model fitted values (expectations of the response, $E(\hat{y})$). The reason you will be seeing very different plots is that in the example from the SO question (y ~ s(x) + te(x, cov1) + other_stuff), with plot.gam() for the tensor product you are just getting a plot of that tensor product smooth. When you use vis.gam() varying x and cov1 you are getting a plot of the expected response from the whole model s(x) + te(x, cov1) + other_stuff, as you vary input values x and cov1 while holding other_stuff at representative (or user-supplied) values.
Q6
Think of the te() as a single term that just happens to include both smooth main effects and smooth interactions. Hence in the anova() or similar output, you are just setting this entire term to 0 and comparing the fits. This is not like a model with x * z where that actually implies three terms x + z + x:z and say you are testing x:z, you are just setting this pure interaction term to zero in anova().
If you want an ANOVA-like decomposition then use s(x) + s(z) + ti(x, z) as you will now have separate functions for the three things specified. Now you can compare that model with s(x) + s(z) by setting the ti() term to zero so you get the proper nesting.
A significant te() just means that the estimated smooth function differs from a flat surface or constant function.
You could interpret that smooth by visualising it and seeing how the expected value of the response varies over the surface. If you want to do more formal inference, you can do the ANOVA-like decomposition of the te(x,z) term into s(x) + s(z) + ti(x,z), and now you have a term that comes with a formal test of whether the pure interaction (for example) is consistent with a flat or constant function.
However, you should note that the te(x,z) and s(x) + s(z) + ti(x,z) models are not strictly equivalent; the latter has more smoothness parameters to estimate — IIRC the te() one would have 2 smoothness parameters and the s(x) + s(z) + ti(x,z) version 4 smoothness parameters.
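A sketch of that ANOVA-like comparison, again with a hypothetical data frame dat containing y, x and z:
library(mgcv)
m_nested <- gam(y ~ s(x) + s(z), data = dat, method = "ML")
m_decomp <- gam(y ~ s(x) + s(z) + ti(x, z), data = dat, method = "ML")
anova(m_nested, m_decomp, test = "F")   # or compare AIC(m_nested, m_decomp)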
The reason we would prefer a model specification like this
y ~ s(x) + s(cov1) + s(cov2) + s(cov3) + ## main effects
ti(x,cov1) + ti(x,cov2) + ti(x,cov3) + ## interactions
blocking_var ## other stuff
is that if you have say two terms te(x,z) + te(x,v) then the basis for te(x,z) contains functions for x that overlap with basis functions for x from the other te(x,v) term. What this means is that you are effectively including the same variables in the model twice and you can't uniquely identify such duplicated terms — adding a constant to one can be offset by subtracting the same constant from the other and as the constant could be any number you have an infinity of models that are all the same. {mgcv} will try to remove these rank deficiencies from the model matrix, but it will do so by making the te() terms difficult to interpret because one of them will have had some of the main effects of x removed from it.
Hence it is better to manually decompose the fit into main effects and interactions, plus doing so should result in a model that is much easier to fit and hence the fitting process should be stable.
Finally, I prefer to use s(x) + s(z) + ti(x,z) even though you can use ti(x) + ti(z) + ti(x,z) because a ti means a tensor interaction and it is just confusing to think of a tensor interaction of a single variable — there is no interaction going on — plus at one point Simon indicated that he was going to remove the ability to fit tensor products of single terms, including in ti(), though he seems to have removed that comment from the changelog since. | GAM and multiple continuous-continuous interactions/tensor smooths | Q1
Almost invariably yes, you should use te() for smooth interactions. The only situation where you might not want to do that is where you are smoothing in multiple dimensions but everything is in the | GAM and multiple continuous-continuous interactions/tensor smooths
Q1
Almost invariably yes, you should use te() for smooth interactions. The only situation where you might not want to do that is where you are smoothing in multiple dimensions but everything is in the same units, like space. In such circumstances an isotropic smooth may make sense via s().
Q2
My personal belief is that you should fit the full model and do any required inference on that full model. As you have the same term appearing in multiple smooths, you should probably decompose those smooths into main effects and interactions via:
y ~ s(x) + s(cov1) + s(cov2) + s(cov3) + ## main effects
ti(x,cov1) + ti(x,cov2) + ti(x,cov3) + ## interactions
blocking_var ## other stuff
If you want to do selection on that model, you could add select = TRUE to add an extra penalty to all smooths such that they can be penalised out of the model entirely, or change the basis to one of the shrinkage smooths via the bs argument to s(), te() etc., e.g. s(x, bs = 'ts') for shrinkage thin plate splines and te(x,z, bs = c("cs", "ts")) for a tensor product of x and z with a cubic regression shrinkage spline marginal basis for x and a shrinkage thin plate spline marginal basis for z.
If you are going to fit and compare nested models explicitly, make sure you use method = "ML", because the corrections used to compute the REML criterion use information from the fixed effects; if your models include different fixed effects terms — and the ones in the example do — you are making different corrections to the likelihood to compute REML, and that renders the REML likelihoods incomparable.
Q3
You include single terms with s() and pure interactions with ti(). While {mgcv} will try to make the models identifiable if you do things like s(x) + te(x, z), it is better if you can decompose the effects manually as that will give you the most stable fit (by "that" I mean answer to Q2).
Q4
te(x, z) includes both the smooth main effects of x and z, plus their smooth interaction. ti() is just the pure smooth interaction of x and z.
If you are familiar with R's linear modelling formulas, then
te() is like x*z or x + z + x:z if you want to write it all out
ti() is like x:z only
Q5
plot.gam() is producing partial effect plots for each term separately. vis.gam() is showing model fitted values (expectations of the response, $E(\hat{y})$). The reason you will be seeing very different plots is that in the example from the SO question (y ~ s(x) + te(x, cov1) + other_stuff), with plot.gam() for the tensor product you are just getting a plot of that tensor product smooth. When you use vis.gam() varying x and cov1 you are getting a plot of the expected response from the whole model s(x) + te(x, cov1) + other_stuff, as you vary input values x and cov1 while holding other_stuff at representative (or user-supplied) values.
Q6
Think of the te() as a single term that just happens to include both smooth main effects and smooth interactions. Hence in the anova() or similar output, you are just setting this entire term to 0 and comparing the fits. This is not like a model with x * z where that actually implies three terms x + z + x:z and say you are testing x:z, you are just setting this pure interaction term to zero in anova().
If you want an ANOVA-like decomposition then use s(x) + s(z) + ti(x, z) as you will now have separate functions for the three things specified. Now you can compare that model with s(x) + s(z) by setting the ti() term to zero so you get the proper nesting.
A significant te() just means that the estimated smooth function differs from a flat surface or constant function.
You could interpret that smooth by visualising it and seeing how the expected value of the response varies over the surface. If you want to do more formal inference, you can do the ANOVA-like decomposition of the te(x,z) term into s(x) + s(z) + ti(x,z), and now you have a term that comes with a formal test of whether the pure interaction (for example) is consistent with a flat or constant function.
However, you should note that the te(x,z) and s(x) + s(z) + ti(x,z) models are not strictly equivalent; the latter has more smoothness parameters to estimate — IIRC the te() one would have 2 smoothness parameters and the s(x) + s(z) + ti(x,z) version 4 smoothness parameters.
The reason we would prefer a model specification like this
y ~ s(x) + s(cov1) + s(cov2) + s(cov3) + ## main effects
ti(x,cov1) + ti(x,cov2) + ti(x,cov3) + ## interactions
blocking_var ## other stuff
is that if you have say two terms te(x,z) + te(x,v) then the basis for te(x,z) contains functions for x that overlap with basis functions for x from the other te(x,v) term. What this means is that you are effectively including the same variables in the model twice and you can't uniquely identify such duplicated terms — adding a constant to one can be offset by subtracting the same constant from the other and as the constant could be any number you have an infinity of models that are all the same. {mgcv} will try to remove these rank deficiencies from the model matrix, but it will do so by making the te() terms difficult to interpret because one of them will have had some of the main effects of x removed from it.
Hence it is better to manually decompose the fit into main effects and interactions, plus doing so should result in a model that is much easier to fit and hence the fitting process should be stable.
Finally, I prefer to use s(x) + s(z) + ti(x,z) even though you can use ti(x) + ti(z) + ti(x,z) because a ti means a tensor interaction and it is just confusing to think of a tensor interaction of a single variable — there is no interaction going on — plus at one point Simon indicated that he was going to remove the ability to fit tensor products of single terms, including in ti(), though he seems to have removed that comment from the changelog since. | GAM and multiple continuous-continuous interactions/tensor smooths
Q1
Almost invariably yes, you should use te() for smooth interactions. The only situation where you might not want to do that is where you are smoothing in multiple dimensions but everything is in the |
51,935 | How to create a continuous distribution on $[a, b]$ with $\text{mean} = \text{mode} = c$? | I will describe every possible solution. This gives you maximal freedom to craft distributions that meet your needs.
Basically, sketch any curve you like for the density function $f$ that meets your requirements. Separately scale the heights of the left and right halves of it (on either side of $c$) to make their masses balance, then scale the heights overall to make it a probability density.
Here are some details.
Because the distribution is continuous, it has a density function $f$ with finite integral. By splitting $f$ at $c$ we can construct all such functions out of two separately chosen non-negative non-decreasing, not identically zero, integrable functions $f_1$ and $f_2$ defined on $[0,1].$ Here, for example, are two such functions:
Set $f$ to agree affinely with $f_1$ on $[a,c]$ and with the reversal of $f_2$ on $[c,b].$ This means there are two positive numbers $\pi_i$ for which
$$f\mid_{[a,c]}(x) = \pi_1 f_1\left(\frac{x-a}{c-a}\right);\quad f\mid_{[c,b]}(x) = \pi_2 f_2\left(\frac{b-x}{b-c}\right).$$
This construction guarantees $c$ is the unique mode. Moreover, if you want the tails to taper to zero, just choose $f_i$ that approach $0$ continuously at the origin.
For $f$ to be a probability density it must integrate to unity:
$$1 = \int_a^b f(x)\,\mathrm{d}x =\pi_1(c-a) \mu_1^{(0)} + \pi_2(b-c) \mu_2^{(0)}\tag{*}$$
where
$$\mu_i^{(k)} = \int_0^1 x^k f_i(x)\,\mathrm{d}x.$$
Moreover, $c$ is intended to be the mean, whence
$$\begin{aligned}
c &= \int_a^b x f(x)\,\mathrm{d}x \\
&= \pi_1(c-a)((c-a)\mu_1^{(1)} + a\mu_1^{(0)}) + \pi_2(b-c)(-(b-c)\mu_2^{(1)} + b\mu_2^{(0)}).
\end{aligned}\tag{**}$$
$(*)$ and $(**)$ establish a system of two linear equations in the $\pi_i.$ You can check it has nonzero determinant and a unique positive solution $(\pi_1,\pi_2).$ (This checking comes down to the fact that after normalizing the $f_i$ to be probability densities, their means $\mu_i^{(1)}$ must be in the interval $[0,1].$)
This figure plots the solution $f$ for the previous two functions where $[a,b]=[-1,3]$ and $c=1/2:$
By design, $f$ looks like $f_1$ to the left of $c$ and like the reversal of $f_2$ to the right of $c.$
The spike at the mode might bother you, but notice that it was never assumed or required that $f$ must be continuous. Most examples will be discontinuous. However, every distribution $f$ meeting your specifications can be constructed this way (by reversing the process: split $f$ into two halves, which obviously determine the $f_i$).
To demonstrate how general the $f_i$ can be, here is the same construction where the $f_i$ are (inverse) Cantor functions (as implemented in binary.to.ternary at https://stats.stackexchange.com/a/229561/919). These are not continuous anywhere.
NB: $f_1$ is binary.to.ternary; $f_2(x) = f_1(x^{2.28370312}).$ This illustrates one way to eliminate the discontinuity at $c:$ $f_2$ was selected from the one-parameter family of functions $f_1(x^p),$ $p \gt 0,$ and a value of $p$ was identified to make the left and right limits of $f$ at $c$ equal. This family "pushes" the mass of $f_1$ left and right by a controllable amount, thereby modifying the amount by which the right tail is scaled (vertically) in constructing $f.$
For those who would like to experiment, here is an R function to create $f$ out of the $f_i$ and code to create the figures. Three commands at the end check whether (1) $f$ is a pdf, (2) its mean is $c,$ and (3) its mode is $c.$
#
# Given numbers a. < c. < b. and non-negative, non-decreasing, not identically
# zero functions f.1 and f.2 defined on [0,1], construct a density function f
# on the interval [a., b.] with mean c. and unique mode c. that behaves like f.1
# to the left of c. and like the reversal of f.2 to the right of c.
#
# `...` are optional arguments passed along to `integrate`.
#
create <- function(a., b., c., f.1, f.2, ...) {
cat("Create\n")
p.1 <- integrate(f.1, 0, 1, ...)$value
p.2 <- integrate(f.2, 0, 1, ...)$value
mu.1 <- integrate(function(u) u * f.1(u), 0, 1, ...)$value
mu.2 <- integrate(function(u) u * f.2(u), 0, 1, ...)$value
A <- matrix(c(p.1 * (c.-a.), p.2 * (b.-c.),
(c.-a.) * ((c.-a.) * mu.1 + a.*p.1),
(b.-c.) * (-(b.-c.) * mu.2 + b.*p.2)),
2)
pi. <- solve(t(A), c(1, c.))
function(x) {
ifelse(a. <= x & x <= c., 1, 0) * pi.[1] * f.1((x-a.)/(c.-a.)) +
ifelse(c. < x & x <= b., 1, 0) * pi.[2] * f.2((b.-x)/(b.-c.))
}
}
#
# Example.
#
a. <- -1
b. <- 3
c. <- 1/2
f.1 <- function(x) x^(2/3)
f.2 <- function(x) exp(2 * x) - 1
f <- create(a., b., c., f.1, f.2)
#
# Display f.1 and f.2.
#
par(mfrow=c(1,2))
curve(f.1(x), 0, 1, lwd=2, ylab="", main=expression(paste(f[1], ": ", x^{2/3})))
curve(f.2(x), 0, 1, lwd=2, ylab="", main=expression(paste(f[2], ": ", e^{2*x}-1)))
#
# Display f.
#
x <- c(seq(a., c., length.out=601), seq(c.+1e-12*(b.-a.), b., length.out=601))
y <- f(x)
par(mfrow=c(1,1))
plot(c(x, b., a.), c(y, 0, 0), type="n", xlab="x", ylab="Density", main=expression(f))
polygon(c(x, b., a.), c(y, 0, 0), col="#f0f0f0", border=NA)
curve(f(x), b., a., lwd=2, n=1201, add=TRUE)
#
# Checks.
#
integrate(f, a., b.) # Should be 1
integrate(function(x) x * f(x), a., b.) # Should be c.
x[which.max(y)] # Should be close to c. | How to create a continuous distribution on $[a, b]$ with $\text{mean} = \text{mode} = c$? | I will describe every possible solution. This gives you maximal freedom to craft distributions that meet your needs.
Basically, sketch any curve you like for the density function $f$ that meets your | How to create a continuous distribution on $[a, b]$ with $\text{mean} = \text{mode} = c$?
I will describe every possible solution. This gives you maximal freedom to craft distributions that meet your needs.
Basically, sketch any curve you like for the density function $f$ that meets your requirements. Separately scale the heights of the left and right halves of it (on either side of $c$) to make their masses balance, then scale the heights overall to make it a probability density.
Here are some details.
Because the distribution is continuous, it has a density function $f$ with finite integral. By splitting $f$ at $c$ we can construct all such functions out of two separately chosen non-negative non-decreasing, not identically zero, integrable functions $f_1$ and $f_2$ defined on $[0,1].$ Here, for example, are two such functions:
Set $f$ to agree affinely with $f_1$ on $[a,c]$ and with the reversal of $f_2$ on $[c,b].$ This means there are two positive numbers $\pi_i$ for which
$$f\mid_{[a,c]}(x) = \pi_1 f_1\left(\frac{x-a}{c-a}\right);\quad f\mid_{[c,b]}(x) = \pi_2 f_2\left(\frac{b-x}{b-c}\right).$$
This construction guarantees $c$ is the unique mode. Moreover, if you want the tails to taper to zero, just choose $f_i$ that approach $0$ continuously at the origin.
For $f$ to be a probability density it must integrate to unity:
$$1 = \int_a^b f(x)\,\mathrm{d}x =\pi_1(c-a) \mu_1^{(0)} + \pi_2(b-c) \mu_2^{(0)}\tag{*}$$
where
$$\mu_i^{(k)} = \int_0^1 x^k f_i(x)\,\mathrm{d}x.$$
Moreover, $c$ is intended to be the mean, whence
$$\begin{aligned}
c &= \int_a^b x f(x)\,\mathrm{d}x \\
&= \pi_1(c-a)((c-a)\mu_1^{(1)} + a\mu_1^{(0)}) + \pi_2(b-c)(-(b-c)\mu_2^{(1)} + b\mu_2^{(0)}).
\end{aligned}\tag{**}$$
$(*)$ and $(**)$ establish a system of two linear equations in the $\pi_i.$ You can check it has nonzero determinant and a unique positive solution $(\pi_1,\pi_2).$ (This checking comes down to the fact that after normalizing the $f_i$ to be probability densities, their means $\mu_i^{(1)}$ must be in the interval $[0,1].$)
This figure plots the solution $f$ for the previous two functions where $[a,b]=[-1,3]$ and $c=1/2:$
By design, $f$ looks like $f_1$ to the left of $c$ and like the reversal of $f_2$ to the right of $c.$
The spike at the mode might bother you, but notice that it was never assumed or required that $f$ must be continuous. Most examples will be discontinuous. However, every distribution $f$ meeting your specifications can be constructed this way (by reversing the process: split $f$ into two halves, which obviously determine the $f_i$).
To demonstrate how general the $f_i$ can be, here is the same construction where the $f_i$ are (inverse) Cantor functions (as implemented in binary.to.ternary at https://stats.stackexchange.com/a/229561/919). These are not continuous anywhere.
NB: $f_1$ is binary.to.ternary; $f_2(x) = f_1(x^{2.28370312}).$ This illustrates one way to eliminate the discontinuity at $c:$ $f_2$ was selected from the one-parameter family of functions $f_1(x^p),$ $p \gt 0,$ and a value of $p$ was identified to make the left and right limits of $f$ at $c$ equal. This family "pushes" the mass of $f_1$ left and right by a controllable amount, thereby modifying the amount by which the right tail is scaled (vertically) in constructing $f.$
For those who would like to experiment, here is an R function to create $f$ out of the $f_i$ and code to create the figures. Three commands at the end check whether (1) $f$ is a pdf, (2) its mean is $c,$ and (3) its mode is $c.$
#
# Given numbers a. < c. < b. and non-negative, non-decreasing, not identically
# zero functions f.1 and f.2 defined on [0,1], construct a density function f
# on the interval [a., b.] with mean c. and unique mode c. that behaves like f.1
# to the left of c. and like the reversal of f.2 to the right of c.
#
# `...` are optional arguments passed along to `integrate`.
#
create <- function(a., b., c., f.1, f.2, ...) {
cat("Create\n")
p.1 <- integrate(f.1, 0, 1, ...)$value
p.2 <- integrate(f.2, 0, 1, ...)$value
mu.1 <- integrate(function(u) u * f.1(u), 0, 1, ...)$value
mu.2 <- integrate(function(u) u * f.2(u), 0, 1, ...)$value
A <- matrix(c(p.1 * (c.-a.), p.2 * (b.-c.),
(c.-a.) * ((c.-a.) * mu.1 + a.*p.1),
(b.-c.) * (-(b.-c.) * mu.2 + b.*p.2)),
2)
pi. <- solve(t(A), c(1, c.))
function(x) {
ifelse(a. <= x & x <= c., 1, 0) * pi.[1] * f.1((x-a.)/(c.-a.)) +
ifelse(c. < x & x <= b., 1, 0) * pi.[2] * f.2((b.-x)/(b.-c.))
}
}
#
# Example.
#
a. <- -1
b. <- 3
c. <- 1/2
f.1 <- function(x) x^(2/3)
f.2 <- function(x) exp(2 * x) - 1
f <- create(a., b., c., f.1, f.2)
#
# Display f.1 and f.2.
#
par(mfrow=c(1,2))
curve(f.1(x), 0, 1, lwd=2, ylab="", main=expression(paste(f[1], ": ", x^{2/3})))
curve(f.2(x), 0, 1, lwd=2, ylab="", main=expression(paste(f[2], ": ", e^{2*x}-1)))
#
# Display f.
#
x <- c(seq(a., c., length.out=601), seq(c.+1e-12*(b.-a.), b., length.out=601))
y <- f(x)
par(mfrow=c(1,1))
plot(c(x, b., a.), c(y, 0, 0), type="n", xlab="x", ylab="Density", main=expression(f))
polygon(c(x, b., a.), c(y, 0, 0), col="#f0f0f0", border=NA)
curve(f(x), b., a., lwd=2, n=1201, add=TRUE)
#
# Checks.
#
integrate(f, a., b.) # Should be 1
integrate(function(x) x * f(x), a., b.) # Should be c.
x[which.max(y)] # Should be close to c. | How to create a continuous distribution on $[a, b]$ with $\text{mean} = \text{mode} = c$?
I will describe every possible solution. This gives you maximal freedom to craft distributions that meet your needs.
Basically, sketch any curve you like for the density function $f$ that meets your |
51,936 | How to create a continuous distribution on $[a, b]$ with $\text{mean} = \text{mode} = c$? | All you need to do is come up with a two-parameter family of distributions, express the mean and mode in terms of those parameters, and then solve for them both being $c$. Working off your triangle idea, we can have a pentagon instead (straight lines from $a$ to $c$ and from $c$ to $b$), giving us three parameters ($y_1$ at $a$, $y_2$ at $b$, and $y_3$ at $c$), but we don't really have three degrees of freedom. We have that the total probability mass from $a$ to $c$ is $\frac 1 2 (y_1+y_3)(c-a)$, and the mass from $c$ to $b$ is $\frac 1 2 (y_2+y_3)(b-c)$, so we need to have $\frac 1 2 (y_1+y_3)(c-a)+\frac 1 2 (y_2+y_3)(b-c)=1$. Then you just need to make sure that the mean is $c$ as well. | How to create a continuous distribution on $[a, b]$ with $\text{mean} = \text{mode} = c$? | All you need to do is come up with a two-parameter family of distributions, express the mean and mode in terms of those parameters, and then solve for them both being $c$. Working off your triangle id | How to create a continuous distribution on $[a, b]$ with $\text{mean} = \text{mode} = c$?
All you need to do is come up with a two-parameter family of distributions, express the mean and mode in terms of those parameters, and then solve for them both being $c$. Working off your triangle idea, we can have a pentagon instead (straight lines from $a$ to $c$ and from $c$ to $b$), giving us three parameters ($y_1$ at $a$, $y_2$ at $b$, and $y_3$ at $c$), but we don't really have three degrees of freedom. We have that the total probability mass from $a$ to $c$ is $\frac 1 2 (y_1+y_3)(c-a)$, and the mass from $c$ to $b$ is $\frac 1 2 (y_2+y_3)(b-c)$, so we need to have $\frac 1 2 (y_1+y_3)(c-a)+\frac 1 2 (y_2+y_3)(b-c)=1$. Then you just need to make sure that the mean is $c$ as well. | How to create a continuous distribution on $[a, b]$ with $\text{mean} = \text{mode} = c$?
All you need to do is come up with a two-parameter family of distributions, express the mean and mode in terms of those parameters, and then solve for them both being $c$. Working off your triangle id |
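Here is a rough numerical version of this recipe; the endpoints a, b, the target c and the peak height y3 are arbitrary illustrative choices, and y3 must be picked so that the solved y1 and y2 come out non-negative and no larger than y3 (otherwise the pentagon does not have its mode at c).
dens <- function(x, a, b, c, y) {
  left  <- y[1] + (y[3] - y[1]) * (x - a) / (c - a)   # rises from y1 at a to y3 at c
  right <- y[3] + (y[2] - y[3]) * (x - c) / (b - c)   # falls from y3 at c to y2 at b
  ifelse(x >= a & x <= c, left, ifelse(x > c & x <= b, right, 0))
}
a <- 0; b <- 1; c <- 0.4; y3 <- 1.5
mass   <- function(y) integrate(function(x) dens(x, a, b, c, y), a, b)$value
moment <- function(y) integrate(function(x) x * dens(x, a, b, c, y), a, b)$value
# mass and moment are linear in (y1, y2, y3): evaluate them on unit vectors and
# solve mass = 1, moment = c for (y1, y2) given the chosen y3.
basis <- sapply(1:3, function(i) { e <- replace(numeric(3), i, 1); c(mass(e), moment(e)) })
y <- c(solve(basis[, 1:2], c(1, c) - basis[, 3] * y3), y3)
y          # approx. 1.125, 0.083, 1.5: heights at a, b and c
mass(y)    # ~ 1
moment(y)  # ~ c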
51,937 | Why is betareg() giving "invalid dependent variable" error? | The error message posted by R tells you what the problem is:
invalid dependent variable, all observations must be in (0,1)
In other words, your dependent variable should take values that are strictly greater than 0 and strictly less than 1; they cannot be equal to 0 or equal to 1.
In your case, you have several response values that are equal to 1, so R throws the invalid response variable message.
Beta regression would work for response values in (0,1); because you have response values in (0,1] (with a fair number of these values equal to 1), you will need to consider one-inflated beta regression. If your response had values in [0,1) (with a fair number of these values equal to 0), you could consider a zero-inflated beta regression if that made sense for your data - based on Dr. Zeileis response to your question, you might not be able to consider this type of model for your data. See the 2012 article A general class of zero-or-one inflated beta regression models by Ospina and Ferrari for further details on these types of models.
You can fit a one inflated regression model in R using the gamlss and gamlss.dist packages, for example. The latter includes a BEOI() function which defines the one-inflated beta distribution, a three parameter distribution, for a gamlss.family object to be used in GAMLSS fitting using the function gamlss(). (It also includes a BEZI() function for a zero-inflated beta distribution.). Your most complete regression model would be specified like this:
library(gamlss)
library(gamlss.dist)
model <- gamlss(response ~ 1 + predictor,
sigma.formula = ~ 1 + predictor,
nu.formula = ~ 1 + predictor,
family=BEOI,
data=yourdata)
See http://www.gamlss.com/wp-content/uploads/2018/01/InflatedDistributioninR.pdf. | Why is betareg() giving "invalid dependent variable" error? | The error message posted by R tells you what the problem is:
invalid dependent variable, all observations must be in (0,1)
In other words, your dependent variable should take values that are strictly | Why is betareg() giving "invalid dependent variable" error?
The error message posted by R tells you what the problem is:
invalid dependent variable, all observations must be in (0,1)
In other words, your dependent variable should take values that are strictly greater than 0 and strictly less than 1; they cannot be equal to 0 or equal to 1.
In your case, you have several response values that are equal to 1, so R throws the invalid response variable message.
Beta regression would work for response values in (0,1); because you have response values in (0,1] (with a fair number of these values equal to 1), you will need to consider one-inflated beta regression. If your response had values in [0,1) (with a fair number of these values equal to 0), you could consider a zero-inflated beta regression if that made sense for your data - based on Dr. Zeileis response to your question, you might not be able to consider this type of model for your data. See the 2012 article A general class of zero-or-one inflated beta regression models by Ospina and Ferrari for further details on these types of models.
You can fit a one inflated regression model in R using the gamlss and gamlss.dist packages, for example. The latter includes a BEOI() function which defines the one-inflated beta distribution, a three parameter distribution, for a gamlss.family object to be used in GAMLSS fitting using the function gamlss(). (It also includes a BEZI() function for a zero-inflated beta distribution.). Your most complete regression model would be specified like this:
library(gamlss)
library(gamlss.dist)
model <- gamlss(response ~ 1 + predictor,
sigma.formula = ~ 1 + predictor,
nu.formula = ~ 1 + predictor,
family=BEOI,
data=yourdata)
See http://www.gamlss.com/wp-content/uploads/2018/01/InflatedDistributioninR.pdf. | Why is betareg() giving "invalid dependent variable" error?
The error message posted by R tells you what the problem is:
invalid dependent variable, all observations must be in (0,1)
In other words, your dependent variable should take values that are strictly |
51,938 | Why is betareg() giving "invalid dependent variable" error? | As the error message tells you betareg() requires responses to be greater than $0$ and less than $1$, i.e., the open interval $(0, 1)$. That is the support of the beta distribution which is modeled by beta regression. Your data has a very high share of $1$ observations (40 out of 71, excluding the NA) and consequently does not seem to be suitable for beta regression.
Some authors suggest to use a 1-inflated beta regression in such scenarios but I doubt that this is the right model in your case. If you look at the data:
plot(response ~ predictor, data = ex.dat.new)
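As a quick numeric companion to that plot (my addition, reusing the same data frame and column names as the plot call above):
with(ex.dat.new, tapply(response, predictor, sd, na.rm = TRUE))          # per-group spread of the response
with(ex.dat.new, tapply(response == 1, predictor, mean, na.rm = TRUE))   # share of exact 1s per group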
Then you see that many groups have no variation at all in the response (cyro, pano, sida, soda) and others have very little variation in the response (bida, oeno). First, modeling something as stochastic that has no variation at all is challenging without further assumptions. Second, you need to consider carefully what exactly you want to model. The details will depend on your specific setup and what these data mean etc. | Why is betareg() giving "invalid dependent variable" error? | As the error message tells you betareg() requires responses to be greater than $0$ and less than $1$, i.e., the open interval $(0, 1)$. That is the support of the beta distribution which is modeled by | Why is betareg() giving "invalid dependent variable" error?
As the error message tells you betareg() requires responses to be greater than $0$ and less than $1$, i.e., the open interval $(0, 1)$. That is the support of the beta distribution which is modeled by beta regression. Your data has a very high share of $1$ observations (40 out of 71, excluding the NA) and consequently does not seem to be suitable for beta regression.
Some authors suggest to use a 1-inflated beta regression in such scenarios but I doubt that this is the right model in your case. If you look at the data:
plot(response ~ predictor, data = ex.dat.new)
Then you see that many groups have no variation at all in the response (cyro, pano, sida, soda) and others have very little variation in the response (bida, oeno). First, modeling something as stochastic that has no variation at all is challenging without further assumptions. Second, you need to consider carefully what exactly you want to model. The details will depend on your specific setup and what these data mean etc. | Why is betareg() giving "invalid dependent variable" error?
As the error message tells you betareg() requires responses to be greater than $0$ and less than $1$, i.e., the open interval $(0, 1)$. That is the support of the beta distribution which is modeled by |
51,939 | Can I resample two datasets and then perform a t-test? | No, you will assure yourself of eventually rejecting the null hypothesis of equality for a large enough sample size (1000 ought to do the trick unless the difference between sample means is tiny tiny tiny). All this would be doing is confirming your observation that the sample means are different, which you already know. | Can I resample two datasets and then perform a t-test? | No, you will assure yourself of eventually rejecting the null hypothesis of equality for a large enough sample size (1000 ought to do the trick unless the difference between sample means is tiny tiny | Can I resample two datasets and then perform a t-test?
No, you will assure yourself of eventually rejecting the null hypothesis of equality for a large enough sample size (1000 ought to do the trick unless the difference between sample means is tiny tiny tiny). All this would be doing is confirming your observation that the sample means are different, which you already know. | Can I resample two datasets and then perform a t-test?
No, you will assure yourself of eventually rejecting the null hypothesis of equality for a large enough sample size (1000 ought to do the trick unless the difference between sample means is tiny tiny |
51,940 | Can I resample two datasets and then perform a t-test? | But...why? Your data is as ideal as could be. It satisfies nearly every assumption of sophomore stats. People only write about this kind of problem.
Resampling opens you up to simulation noise in which you could falsely reject/fail to reject simply because of simulation error. The statistical significance would not itself be significant. | Can I resample two datasets and then perform a t-test? | But...why? Your data is as ideal as could be. It satisfies nearly every assumption of sophomore stats. People only write about this kind of problem.
Resampling opens you up to simulation noise in w | Can I resample two datasets and then perform a t-test?
But...why? Your data is as ideal as could be. It satisfies nearly every assumption of sophomore stats. People only write about this kind of problem.
Resampling opens you up to simulation noise in which you could falsely reject/fail to reject simply because of simulation error. The statistical significance would not itself be significant. | Can I resample two datasets and then perform a t-test?
But...why? Your data is as ideal as could be. It satisfies nearly every assumption of sophomore stats. People only write about this kind of problem.
Resampling opens you up to simulation noise in w |
51,941 | How to obtain the variance of fixed effects? | I second @IsabellaGhement's suggestion that you strongly consider a binomial model for the incidence (you'll need to know the 'denominator' — the total number of individuals used to compute the incidence).
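A minimal sketch of that binomial model (my own addition, not part of the original answer; it assumes a hypothetical column Total holding the number of individuals behind each incidence value):
library(lme4)
m0 <- glmer(Incidence ~ Habitat + Season + (1 | Site),
            family = binomial, weights = Total, data = dd)
With a proportion response, glmer() needs the number of trials supplied via weights; otherwise the call mirrors the lme() fit below.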
$R^2$ measures do exist for linear mixed models, although there are several, all slightly different, definitions. A reasonable place to start would be the overview of the r2glmm R package.
library(r2glmm)
library(nlme)
library(ggplot2)
m1 <- lme(Incidence ~ Habitat + Season, random = ~1|Site, data=dd)
ggplot(dd, aes(Season,Incidence,colour=Habitat))+
stat_sum(alpha=0.4,position=position_dodge(width=0.2)) +
scale_size(breaks=1:3,range=c(3,6))+
geom_line(aes(group=Site),colour="gray")+
scale_y_sqrt()+
geom_hline(yintercept=0,lty=2)
Now compute $R^2$ and display:
r2 <- r2beta(model=m1,partial=TRUE,method='sgv')
print(r2)
Effect Rsq upper.CL lower.CL
1 Model 0.867 0.929 0.792
2 HabitatEdge 0.705 0.836 0.539
4 HabitatWasteland 0.639 0.797 0.443
3 HabitatOakwood 0.603 0.775 0.394
6 SeasonSummer 0.084 0.348 0.000
5 SeasonSpring 0.003 0.184 0.000
plot(x=r2) | How to obtain the variance of fixed effects? | I second @IsabellaGhement's suggestion that you strongly consider a binomial model for the incidence (you'll need to know the 'denominator' — the total number of individuals used to compute the incide | How to obtain the variance of fixed effects?
I second @IsabellaGhement's suggestion that you strongly consider a binomial model for the incidence (you'll need to know the 'denominator' — the total number of individuals used to compute the incidence).
$R^2$ measures do exist for linear mixed models, although there are several, all slightly different, definitions. A reasonable place to start would be the overview of the r2glmm R package.
library(r2glmm)
library(nlme)
library(ggplot2)
m1 <- lme(Incidence ~ Habitat + Season, random = ~1|Site, data=dd)
ggplot(dd, aes(Season,Incidence,colour=Habitat))+
stat_sum(alpha=0.4,position=position_dodge(width=0.2)) +
scale_size(breaks=1:3,range=c(3,6))+
geom_line(aes(group=Site),colour="gray")+
scale_y_sqrt()+
geom_hline(yintercept=0,lty=2)
Now compute $R^2$ and display:
r2 <- r2beta(model=m1,partial=TRUE,method='sgv')
print(r2)
Effect Rsq upper.CL lower.CL
1 Model 0.867 0.929 0.792
2 HabitatEdge 0.705 0.836 0.539
4 HabitatWasteland 0.639 0.797 0.443
3 HabitatOakwood 0.603 0.775 0.394
6 SeasonSummer 0.084 0.348 0.000
5 SeasonSpring 0.003 0.184 0.000
plot(x=r2) | How to obtain the variance of fixed effects?
I second @IsabellaGhement's suggestion that you strongly consider a binomial model for the incidence (you'll need to know the 'denominator' — the total number of individuals used to compute the incide |
51,942 | How to obtain the variance of fixed effects? | Have you checked the model diagnostics for your linear mixed effects model? Incidence is a so-called "discrete proportion" and may be better modelled by a generalized linear mixed effects model (GLMM) with binomial family via the glmer() function in the lme4 package of R.
It seems like what you need is to report R-squared values for your linear mixed effects model (if its model diagnostics look acceptable) or pseudo R-squared values for your GLMM model.
For mixed effects models - be they linear or generalized linear - there are two types of R-squared values you can report:
Marginal R-Squared: Proportion of the total variance explained by the fixed effects;
Conditional R-Squared: Proportion of the total variance explained by the fixed and random effects.
These values can be computed, for instance, using the rsquared() function in the R package piecewiseSEM or the function r2_nakagawa() from the performance package.
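In code this is a one-liner with either package (a hedged sketch of mine; fit stands for your already fitted lme/lmer/glmer object):
library(performance)
r2_nakagawa(fit)    # marginal and conditional R-squared
library(piecewiseSEM)
rsquared(fit)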
See the article The coefficient of determination R2 and intra-class correlation coefficient from generalized linear mixed-effects models revisited and expanded by Nakagawa et al. available at https://royalsocietypublishing.org/doi/10.1098/rsif.2017.0213. | How to obtain the variance of fixed effects? | Have you checked the model diagnostics for your linear mixed effects model? Incidence is a so-called "discrete proportion" and may be better modelled by a generalized linear mixed effects model (GLMM) | How to obtain the variance of fixed effects?
Have you checked the model diagnostics for your linear mixed effects model? Incidence is a so-called "discrete proportion" and may be better modelled by a generalized linear mixed effects model (GLMM) with binomial family via the glmer() function in the lme4 package of R.
It seems like what you need is to report R-squared values for your linear mixed effects model (if its model diagnostics look acceptable) or pseudo R-squared values for your GLMM model.
For mixed effects models - be they linear or generalized linear - there are two types of R-squared values you can report:
Marginal R-Squared: Proportion of the total variance explained by the fixed effects;
Conditional R-Squared: Proportion of the total variance explained by the fixed and random effects.
These values can be computed, for instance, using the rsquared() function in the R package piecewiseSEM or the function r2_nakagawa() from the performance package.
See the article The coefficient of determination R2 and intra-class correlation coefficient from generalized linear mixed-effects models revisited and expanded by Nakagawa et al. available at https://royalsocietypublishing.org/doi/10.1098/rsif.2017.0213. | How to obtain the variance of fixed effects?
Have you checked the model diagnostics for your linear mixed effects model? Incidence is a so-called "discrete proportion" and may be better modelled by a generalized linear mixed effects model (GLMM) |
51,943 | Is it true that the word prior should be used only with latent random variables? | The term prior (as well as posterior) is usually reserved for distributions defined in a Bayesian framework on objects that are not considered as random variables by other inferential approaches, namely parameters. Latent variable models are most often defined outside the Bayesian/non-Bayesian dichotomy and the distribution of the latent variable generally depends on parameters, so is also conditional on the realisation of these parameters in a Bayesian framework. Since it is a matter of terminology, there is no true or false (or right or wrong) to call a marginal distribution a prior, but this may prove confusing to Bayesians and non-Bayesians alike. | Is it true that the word prior should be used only with latent random variables? | The term prior (as well as posterior) is usually reserved for distributions defined in a Bayesian framework on objects that are not considered as random variables by other inferential approaches, name | Is it true that the word prior should be used only with latent random variables?
The term prior (as well as posterior) is usually reserved for distributions defined in a Bayesian framework on objects that are not considered as random variables by other inferential approaches, namely parameters. Latent variable models are most often defined outside the Bayesian/non-Bayesian dichotomy and the distribution of the latent variable generally depends on parameters, so is also conditional on the realisation of these parameters in a Bayesian framework. Since it is a matter of terminology, there is no true or false (or right or wrong) to call a marginal distribution a prior, but this may prove confusing to Bayesians and non-Bayesians alike. | Is it true that the word prior should be used only with latent random variables?
The term prior (as well as posterior) is usually reserved for distributions defined in a Bayesian framework on objects that are not considered as random variables by other inferential approaches, name |
51,944 | Is it true that the word prior should be used only with latent random variables? | Before answering your question, let's first explain some basic Bayesian mindset.
In Bayesian statistics, everything is a random variable, the only difference between these random variables is whether they are observed or hidden. Say for example if you believe $X$ follows a distribution defined by $\theta$, denote
$$
X \sim P(X|\theta)
$$
Where $\theta$ is the parameter of the distribution; from the Bayesian perspective it is also a random variable. Usually in this case the random variable $X$ is observed and $\theta$ is not, and you want to infer/learn/estimate $\theta$ based on your observations. In such situations the notions of "prior", "marginal" or "posterior" do not come into play.
The term "prior", "marginal" or "posterior" matters when you believe $\theta$ follows some other distribution
$$
\theta \sim P(\theta|\gamma)
$$
Then we call this "other distribution" the prior, more specifically it's the piror distribution for $\theta$. Among all three random variables $X$, $\theta$ and $\gamma$, usually $X$ and $\gamma$ are observed, $\theta$ is not, and you want to estimate $\theta$ based on the observed $X$ and $\gamma$. So yes the term "prior" is usually on hidden random variables, of course you can believe there's a prior distribution for $\theta$ even when it is observed, but usually nobody do so(why would anyone esitimate something that is already observed?). And, if you can't observe $\gamma$, you can even assume $\gamma$ follows a distribution defined by another random variable $\eta$, then $P(\gamma | \eta)$ will be the prior for $\gamma$. Hope this answers your question regarding to "prior".
Now let's talk about "marginal". In previous example people usually interested in the distribution of $X$ (while $\theta$ is hidden), given $\gamma$, the distribution
$$
X \sim P(X|\gamma)
$$
is called the "marginal distribution". The term "marginal" came from the fact that $P(X|\gamma)$ is acquired by marginalizing out $\theta$ from the joint distribution:
$$
p(X|\gamma) = \int p(X|\theta)\,p(\theta|\gamma)\,d\theta
$$ | Is it true that the word prior should be used only with latent random variables? | Before answering your question, let's first explain some basic Bayesian mindset.
In Bayesian statistics, everything is a random variable, the only difference between these random variables is whether | Is it true that the word prior should be used only with latent random variables?
Before answering your question, let's first explain some basic Bayesian mindset.
In Bayesian statistics, everything is a random variable, the only difference between these random variables is whether they are observed or hidden. Say for example if you believe $X$ follows a distribution defined by $\theta$, denote
$$
X \sim P(X|\theta)
$$
Where $\theta$ is the parameter of the distribution; from the Bayesian perspective it is also a random variable. Usually in this case the random variable $X$ is observed and $\theta$ is not, and you want to infer/learn/estimate $\theta$ based on your observations. In such situations the notions of "prior", "marginal" or "posterior" do not come into play.
The term "prior", "marginal" or "posterior" matters when you believe $\theta$ follows some other distribution
$$
\theta \sim P(\theta|\gamma)
$$
Then we call this "other distribution" the prior, more specifically it's the piror distribution for $\theta$. Among all three random variables $X$, $\theta$ and $\gamma$, usually $X$ and $\gamma$ are observed, $\theta$ is not, and you want to estimate $\theta$ based on the observed $X$ and $\gamma$. So yes the term "prior" is usually on hidden random variables, of course you can believe there's a prior distribution for $\theta$ even when it is observed, but usually nobody do so(why would anyone esitimate something that is already observed?). And, if you can't observe $\gamma$, you can even assume $\gamma$ follows a distribution defined by another random variable $\eta$, then $P(\gamma | \eta)$ will be the prior for $\gamma$. Hope this answers your question regarding to "prior".
Now let's talk about "marginal". In previous example people usually interested in the distribution of $X$ (while $\theta$ is hidden), given $\gamma$, the distribution
$$
X \sim P(X|\gamma)
$$
is called the "marginal distribution". The term "marginal" came from the fact that $P(X|\gamma)$ is acquired by marginalizing out $\theta$ from the joint distribution:
$$
p(X|\gamma) = \int p(X|\theta)\,p(\theta|\gamma)\,d\theta
$$ | Is it true that the word prior should be used only with latent random variables?
Before answering your question, let's first explain some basic Bayesian mindset.
In Bayesian statistics, everything is a random variable, the only difference between these random variables is whether |
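A small numerical illustration of the prior/marginal distinction discussed above (my own addition; the Beta prior and Binomial likelihood are chosen purely for concreteness):
set.seed(1)
a <- 2; b <- 2                              # hyperparameters, the "gamma" of the answer
theta <- rbeta(1e5, a, b)                   # draws from the prior p(theta | gamma)
x <- rbinom(1e5, size = 10, prob = theta)   # draws from p(X | theta)
prop.table(table(x))                        # Monte Carlo estimate of the marginal p(X | gamma)
choose(10, 0:10) * beta(0:10 + a, 10 - 0:10 + b) / beta(a, b)   # exact Beta-Binomial marginal for comparison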
51,945 | Understanding correlation on clearly correlated but not reversible data | Correlation does not have a "from" and a "to". It is invariant $Cor(A, B) = Cor(B, A)$.
The terms "from" and "to" can make sense in the context of regression, where we speak of "independent" and "dependent" variables or "predictor" and "predicted". Pearson correlation is closely related to linear regression. In Linear Regression again, the first order of a value does not play a role, it cannot be expressed in it.
So if you constructed a form of regression that has a way to express "first order of value", then that form of regression would perform better with $num$ as predictor for $dec$ then the other way around. | Understanding correlation on clearly correlated but not reversible data | Correlation does not have a "from" and a "to". It is invariant $Cor(A, B) = Cor(B, A)$.
The terms "from" and "to" can make sense in the context of regression, where we speak of "independent" and "depe | Understanding correlation on clearly correlated but not reversible data
Correlation does not have a "from" and a "to". It is invariant $Cor(A, B) = Cor(B, A)$.
The terms "from" and "to" can make sense in the context of regression, where we speak of "independent" and "dependent" variables or "predictor" and "predicted". Pearson correlation is closely related to linear regression. In Linear Regression again, the first order of a value does not play a role, it cannot be expressed in it.
So if you constructed a form of regression that has a way to express "first order of value", then that form of regression would perform better with $num$ as predictor for $dec$ then the other way around. | Understanding correlation on clearly correlated but not reversible data
Correlation does not have a "from" and a "to". It is invariant $Cor(A, B) = Cor(B, A)$.
The terms "from" and "to" can make sense in the context of regression, where we speak of "independent" and "depe |
51,946 | Understanding correlation on clearly correlated but not reversible data | This is simply a case where dec is a function of num ---i.e., the value of dec is fully determined by the value of num. That is all it is called --- a function. Functions of random variables are often correlated with the initial random variables, so this is not an unusual situation. The correlation indicates that the two variables are (statistically) linearly related, which they are. Obviously, in this case the correlation is not a particularly good representation of the relationship, but that is not surprising, since the function relationship is highly non-linear. | Understanding correlation on clearly correlated but not reversible data | This is simply a case where dec is a function of num ---i.e., the value of dec is fully determined by the value of num. That is all it is called --- a function. Functions of random variables are oft | Understanding correlation on clearly correlated but not reversible data
This is simply a case where dec is a function of num ---i.e., the value of dec is fully determined by the value of num. That is all it is called --- a function. Functions of random variables are often correlated with the initial random variables, so this is not an unusual situation. The correlation indicates that the two variables are (statistically) linearly related, which they are. Obviously, in this case the correlation is not a particularly good representation of the relationship, but that is not surprising, since the function relationship is highly non-linear. | Understanding correlation on clearly correlated but not reversible data
This is simply a case where dec is a function of num ---i.e., the value of dec is fully determined by the value of num. That is all it is called --- a function. Functions of random variables are oft |
51,947 | Understanding correlation on clearly correlated but not reversible data | As Bernhard mentioned, correlation does not have a "from - to" concept. It describes the relationship between two variables.
Another useful idea is to ask: if we change (or filter on) one variable, how would the other variable change?
Think about the relationship between human height and weight: if we focus on the tall population, it is very likely we will see larger numbers for weight. This is called "positive" correlation.
Now think about another interesting case: what will happen if one variable has zero variance, i.e., all data points have the same value?
The answer can be found in this closely related post
How would you explain covariance to someone who understands only the mean? | Understanding correlation on clearly correlated but not reversible data | As Bernhard mentioned, correlation does not have a "from - to" concept. It describes the relationship between to variables.
Another useful idea to think about is that if we change (or filter on) one v | Understanding correlation on clearly correlated but not reversible data
As Bernhard mentioned, correlation does not have a "from - to" concept. It describes the relationship between two variables.
Another useful idea is to ask: if we change (or filter on) one variable, how would the other variable change?
Think about the relationship between human height and weight: if we focus on the tall population, it is very likely we will see larger numbers for weight. This is called "positive" correlation.
Now think about another interesting case: what will happen if one variable has zero variance, i.e., all data points have the same value?
The answer can be found in this closely related post
How would you explain covariance to someone who understands only the mean? | Understanding correlation on clearly correlated but not reversible data
As Bernhard mentioned, correlation does not have a "from - to" concept. It describes the relationship between to variables.
Another useful idea to think about is that if we change (or filter on) one v |
51,948 | What does this notation mean: $F$ at the matrix norm and $Q$ under the $\arg\min$ | The answers have already been provided in the comments. Just so this question has an answer attached...
$Q$ is defined in the optimization problem itself as the variable we're minimizing with respect to. The expression
$$\underset{Q^T Q = I}{\text{argmin }} f(Q)$$
could also be written as:
$$\underset{Q}{\text{argmin }} f(Q) \quad
\text{s.t. } Q^T Q = I$$
That is: find the value of $Q$ that minimizes the objective function $f$, subject to the constraint that $Q^T Q =I$. This means that $Q$ is an orthogonal matrix.
$\| \cdot \|_F$ denotes the Frobenius norm. For a matrix $A$:
$$\| A \|_F =
\left ( \sum_i \sum_j A_{ij}^2 \right)^\frac{1}{2}$$ | What does this notation mean: $F$ at the matrix norm and $Q$ under the $\arg\min$ | The answers have already been provided in the comments. Just so this question has an answer attached...
$Q$ is defined in the optimization problem itself as the variable we're minimizing with respect | What does this notation mean: $F$ at the matrix norm and $Q$ under the $\arg\min$
The answers have already been provided in the comments. Just so this question has an answer attached...
$Q$ is defined in the optimization problem itself as the variable we're minimizing with respect to. The expression
$$\underset{Q^T Q = I}{\text{argmin }} f(Q)$$
could also be written as:
$$\underset{Q}{\text{argmin }} f(Q) \quad
\text{s.t. } Q^T Q = I$$
That is: find the value of $Q$ that minimizes the objective function $f$, subject to the constraint that $Q^T Q =I$. This means that $Q$ is an orthogonal matrix.
$\| \cdot \|_F$ denotes the Frobenius norm. For a matrix $A$:
$$\| A \|_F =
\left ( \sum_i \sum_j A_{ij}^2 \right)^\frac{1}{2}$$ | What does this notation mean: $F$ at the matrix norm and $Q$ under the $\arg\min$
The answers have already been provided in the comments. Just so this question has an answer attached...
$Q$ is defined in the optimization problem itself as the variable we're minimizing with respect |
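A quick numeric illustration of the Frobenius norm (my addition; norm() with type = "F" is base R):
A <- matrix(1:6, nrow = 2)
norm(A, type = "F")   # built-in Frobenius norm
sqrt(sum(A^2))        # the definition written out above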
51,949 | What does this notation mean: $F$ at the matrix norm and $Q$ under the $\arg\min$ | The commenters essentially answered this question, but I will memorialize it here.
The argmin (or argmax) notation can be a bit confusing, because it often introduces a dummy variable (much like the dx or dt in an integral). As Matthew Drury's comment indicates, the $\mathbf{Q}$ is the dummy variable here (so it won't be introduced elsewhere in the paper, as it only serves a place holder function).
Next, the argmin operator asks you to figure out which value of $\mathbf{Q}$ gives the smallest value. However, instead of returning the smallest value for the expression in the argmin, you instead want the value generating this smallest number. With this in mind, your $\mathbf{Q}$ is essentially your $\mathbf{R}^{(t)}$...so, $\mathbf{Q}$ is defined however $\mathbf{R}^{(t)}$ is defined.
Lastly the $F$ subscript on the norm function $||·||$ most likely indicates the Frobenius norm (https://en.wikipedia.org/wiki/Matrix_norm#Frobenius_norm; as suggested by @Flounderer). This is just the square-root of the sum of the squares of all of the entries in the matrix inside the norm function. | What does this notation mean: $F$ at the matrix norm and $Q$ under the $\arg\min$ | The commenters essentially answered this question, but I will memorialize it here.
The argmin (or argmax) notation can be a bit confusing, because it often introduces a dummy variable (much like the d | What does this notation mean: $F$ at the matrix norm and $Q$ under the $\arg\min$
The commenters essentially answered this question, but I will memorialize it here.
The argmin (or argmax) notation can be a bit confusing, because it often introduces a dummy variable (much like the dx or dt in an integral). As Matthew Drury's comment indicates, the $\mathbf{Q}$ is the dummy variable here (so it won't be introduced elsewhere in the paper, as it only serves a place holder function).
Next, the argmin operator asks you to figure out which value of $\mathbf{Q}$ gives the smallest value. However, instead of returning the smallest value for the expression in the argmin, you instead want the value generating this smallest number. With this in mind, your $\mathbf{Q}$ is essentially your $\mathbf{R}^{(t)}$...so, $\mathbf{Q}$ is defined however $\mathbf{R}^{(t)}$ is defined.
Lastly the $F$ subscript on the norm function $||·||$ most likely indicates the Frobenius norm (https://en.wikipedia.org/wiki/Matrix_norm#Frobenius_norm; as suggested by @Flounderer). This is just the square-root of the sum of the squares of all of the entries in the matrix inside the norm function. | What does this notation mean: $F$ at the matrix norm and $Q$ under the $\arg\min$
The commenters essentially answered this question, but I will memorialize it here.
The argmin (or argmax) notation can be a bit confusing, because it often introduces a dummy variable (much like the d |
51,950 | Difference between invertible NN and flow-based NN | After some more reading I came to following conclusion:
Invertible NN are just neural networks that represent bijective functions $f$.
Normalizing flows are invertible NN $f$ that also have a tractable determinant of the Jacobian $D_x f$ as well as a tractable inverse $f^{-1}$. This allows for the following interpretation: Let $X \sim p_X, Z \sim p_Z$ be random variables with $Z = f(X)$. Then $$p_X(x) = p_Z(f(x)) \left|\det D_x f\right| .$$
Because $f$ has a tractable inverse $f^{-1}$ we can therefore easily sample from one of the two distributions $p_X, p_Z$ by sampling from the other one and using the transformation above.
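A one-dimensional numerical check of that identity (my own illustration, not from the original answer), using the "flow" $f(x) = \log(x)$, which maps a standard lognormal $X$ to a standard normal $Z$:
x <- seq(0.1, 5, by = 0.1)
lhs <- dlnorm(x)                    # p_X(x)
rhs <- dnorm(log(x)) * abs(1 / x)   # p_Z(f(x)) * |det D_x f|, since |f'(x)| = 1/x
all.equal(lhs, rhs)                 # TRUE up to numerical tolerance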
This could be applied in the following way (just as an example): We could train $f$ such that $p_X$ represents a distribution of images (e.g. represented by MNIST) and $p_Z$ a Gaussian. Then we can easily sample from the distribution of images by sampling $Z \sim p_Z$ (Gaussian) and just transforming it back to $X = f^{-1}(Z) \sim p_X$. | Difference between invertible NN and flow-based NN | After some more reading I came to following conclusion:
Invertible NN are just neural networks that represent bijective functions $f$.
Normalizing flows are invertible NN $f$ that also have a tractab | Difference between invertible NN and flow-based NN
After some more reading I came to following conclusion:
Invertible NN are just neural networks that represent bijective functions $f$.
Normalizing flows are invertible NN $f$ that also have a tractable determinant of the Jacobian $D_x f$ as well as a tractable inverse $f^{-1}$. This allows for the following interpretation: Let $X \sim p_X, Z \sim p_Z$ be random variables with $Z = f(X)$. Then $$p_X(x) = p_Z(f(x)) \left|\det D_x f\right| .$$
Because $f$ has a tractable inverse $f^{-1}$ we can therefore easily sample from one of the two distributions $p_X, p_Z$ by sampling from the other one and using the transformation above.
This could be applied in the following way (just as an example): We could train $f$ such that $p_X$ represents a distribution of images (e.g. represented by MNIST) and $p_Z$ a Gaussian. Then we can easily sample from the distribution of images by sampling $Z \sim p_Z$ (Gaussian) and just transforming it back to $X = f^{-1}(Z) \sim p_X$. | Difference between invertible NN and flow-based NN
After some more reading I came to following conclusion:
Invertible NN are just neural networks that represent bijective functions $f$.
Normalizing flows are invertible NN $f$ that also have a tractab |
51,951 | Difference between invertible NN and flow-based NN | An invertible neural network is a general term used for any neural network that’s invertible. A flow neural network is a specific kind of invertible neural network. It’s just that it’s rather difficult to construct invertible neural networks, and flow neural networks offer an easy recipe. | Difference between invertible NN and flow-based NN | An invertible neural network is a general term used for any neural network that’s invertible. A flow neural network is a specific kind of invertible neural network. It’s just that it’s rather difficul | Difference between invertible NN and flow-based NN
An invertible neural network is a general term used for any neural network that’s invertible. A flow neural network is a specific kind of invertible neural network. It’s just that it’s rather difficult to construct invertible neural networks, and flow neural networks offer an easy recipe. | Difference between invertible NN and flow-based NN
An invertible neural network is a general term used for any neural network that’s invertible. A flow neural network is a specific kind of invertible neural network. It’s just that it’s rather difficul |
51,952 | Control variables and other independent variables | When you say "control", I suspect you mean that you have a primary variable of interest, and then you have other variables that are potential confounders.
In the presence of a confounder, the effect size of the primary variable may appear higher or lower than it actually is (Simpson's Paradox / omitted variable bias). To "control" for this effect (see also here), the confounder must be added to the multiple regression (otherwise you lose the ability to infer the causal effect of the primary variable).
Note, however, that not all variables should be added to a regression. In some cases, adding a variable can even produce bias (collider). The causal structure determines which variables should go into the regression, regardless of significance or how they affect the estimates of other variables. See more comments here and in the excellent paper by Lederer et al., 2019. | Control variables and other independent variables | When you say "control", I suspect you mean that you have a primary variable of interest, and then you have other variables that are potential confounders.
In the presence of a confounder, the effect | Control variables and other independent variables
When you say "control", I suspect you mean that you have a primary variable of interest, and then you have other variables that are potential confounders.
In the presence of a confounder, the effect size of the primary variable may appear higher or lower than it actually is (Simpson's Paradox / omitted variable bias). To "control" for this effect (see also here), the confounder must be added to the multiple regression (otherwise you lose the ability to infer the causal effect of the primary variable).
Note, however, that not all variables should be added to a regression. In some cases, adding a variable can even produce bias (collider). The causal structure determines which variables should go into the regression, regardless of significance or how they affect the estimates of other variables. See more comments here and in the excellent paper by Lederer et al., 2019. | Control variables and other independent variables
When you say "control", I suspect you mean that you have a primary variable of interest, and then you have other variables that are potential confounders.
In the presence of a confounder, the effect |
51,953 | Control variables and other independent variables | While it is common practice to "control" (put another independent variable in regression) for any potential confounders, this isn't always the best case. Sometimes "controlling" for variables introduces confounders into the regression.1 It all depends on the underlying relationship between your variables.
As @Florian Hartig and @boulder allude to, you are likely interested in explaining (causal inference), not just predicting. (Although, often there is lots of overlap.) To Explain or Predict
First, I would recommend drawing a causal graph 3 of your system.
It won't be perfect, because well, life is messy. Just a simple diagram with all relevant variables and arrows that show how they are connected. This graph will be based on the scientific theory and allows one to show others how they think the system operates (generates data). After you have this causal graph then you can consider which variables would reduce confounding and which variables would introduce confounding.
Example of causal graph:
EDIT:
If you wanted to understand what produces traffic jams, you might use rush hour and accidents. However, if you don't include bad weather you might overestimate the effect of accidents. Because bad weather happens to increase traffic directly (from slower speeds presumably) and indirectly by increasing accidents which then go on to increase traffic. So, given this graph of the system you would want to include ("control for") bad weather. Then you could interpret your coefficients as given my causal graph of the system and given my statistical model choice and given my data and given a certain amount of bad weather and given a certain amount of accidents the effect of rush hour is ___ on traffic. But don't let this example lead you to believe that adding extra variables is harmless. As stated earlier, depending on the structure of your graph/system sometimes it hurts inference.
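That reasoning can be written down and checked directly (a hedged sketch of mine, not part of the original answer; the dagitty package is one standard tool for this):
library(dagitty)
g <- dagitty("dag {
  rush_hour   -> traffic
  bad_weather -> traffic
  bad_weather -> accidents
  accidents   -> traffic
}")
adjustmentSets(g, exposure = "accidents", outcome = "traffic")   # should return { bad_weather }
For this graph, dagitty confirms that bad weather is the variable to adjust for when estimating the effect of accidents on traffic, which is exactly the point of the example.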
If you want more plain language discussion of this topic I highly recommend Richard McElreath's book 6 and lectures 7. If you have access to academic journals you may find a copy of his book there for free. | Control variables and other independent variables | While it is common practice to "control" (put another independent variable in regression) for any potential confounders, this isn't always the best case. Sometimes "controlling" for variables introduc | Control variables and other independent variables
While it is common practice to "control" (put another independent variable in regression) for any potential confounders, this isn't always the best case. Sometimes "controlling" for variables introduces confounders into the regression.1 It all depends on the underlying relationship between your variables.
As @Florian Hartig and @boulder allude to, you are likely interested in explaining (causal inference), not just predicting. (Although, often there is lots of overlap.) To Explain or Predict
First, I would recommend drawing a causal graph 3 of your system.
It won't be perfect, because well, life is messy. Just a simple diagram with all relevant variables and arrows that show how they are connected. This graph will be based on the scientific theory and allows one to show others how they think the system operates (generates data). After you have this causal graph then you can consider which variables would reduce confounding and which variables would introduce confounding.
Example of causal graph:
EDIT:
If you wanted to understand what produces traffic jams, you might use rush hour and accidents. However, if you don't include bad weather you might overestimate the effect of accidents. Because bad weather happens to increase traffic directly (from slower speeds presumably) and indirectly by increasing accidents which then go on to increase traffic. So, given this graph of the system you would want to include ("control for") bad weather. Then you could interpret your coefficients as given my causal graph of the system and given my statistical model choice and given my data and given a certain amount of bad weather and given a certain amount of accidents the effect of rush hour is ___ on traffic. But don't let this example lead you to believe that adding extra variables is harmless. As stated earlier, depending on the structure of your graph/system sometimes it hurts inference.
If you want more plain language discussion of this topic I highly recommend Richard McElreath's book 6 and lectures 7. If you have access to academic journals you may find a copy of his book there for free. | Control variables and other independent variables
While it is common practice to "control" (put another independent variable in regression) for any potential confounders, this isn't always the best case. Sometimes "controlling" for variables introduc |
51,954 | Control variables and other independent variables | My suggestion
1. Model: Run a model only with the independent variables you are interested in.
2. Model: Add the variables you want to control for into the model.
Reason for the suggestion
If you add all the variables together this might change the coefficients of your predictors, i.e. increase/decrease or even change the direction of the coefficients. If there is no contradiction between both models you can interpret the coefficient of the independent variable from the second model.
Example
I'll use an example that I was taught by a statistic professor of my university. Let's assume you want to predict success at work (variable: success; higher is better) and you collected different variables to do this. Among others, you know what the average grades of the people are (variable: grades; higher is better) and how much people are afraid of exams (variable: fear; higher is worse, i.e. more fear). You run a multiple regression and you get the following results:
Variable Estimate Std. Error t value p
Intercept 0.00 0.12 0.00 1.000
fear 0.33 0.14 2.42 0.019
grades 0.67 0.14 4.85 < 0.001
Incredible: The more someone was afraid of exams the more successful he/ she is at work. Who would have thought that! But wait: Actually, I made up the data and in fact the correlation matrix looks like this (see R code below):
success fear grades
success 1 0 .5
fear 0 1 -.5
grades .5 -.5 1
As you can see, actually there is no correlation between fear and success. Keep in mind the relationship between correlation and regression: If the correlation is zero, the estimate of the regression will be zero, too. Why does the estimate of fear increase in the multivariate regression? We have a suppression effect here: fear does not explain variance in the dependent variable but in the other independent variable grades.
This means you would draw a wrong conclusion if you only looked at the multivariate regression! Although the estimate of fear is positive and significant in the multivariate regression, you cannot say that having more fear increases success at work. And you can only understand that this conclusion would be wrong if you compare the univariate (1. model) and the multivariate (2. model) regression.
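A small side calculation (my own addition, not part of the original answer): with standardized variables the fitted coefficients follow directly from the correlation matrix via $\beta = R_{xx}^{-1} r_{xy}$, which reproduces the 0.33 and 0.67 in the output above.
Rxx <- matrix(c(1, -0.5,
                -0.5, 1), nrow = 2)   # correlations among the predictors fear and grades
rxy <- c(0, 0.5)                      # correlations of fear and grades with success
solve(Rxx, rxy)                       # 0.3333333 0.6666667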
Real world examples and a bigger picture
The example above shows why I disagree with @Florian Hartig who wrote that "suppressors are secondary, and seldom crucial for the scientific conclusions". To stress this point, please note that "Reversal in association (magnitude or direction) are examples of Simpson's Paradox, Lord's Paradox and Suppression Effects. The differences essentially relate to the type of variable" (here). Thus not only suppression but also Simpson's and Lord's paradox can occur in multivariate regression; hence the suggestion to run two models in order to avoid misleading results applies here, too. You can find real world examples here or here. Also, looking for suppression on cv shows that people do encounter this problem. See here or here, for example.
Last notes
Also keep in mind that there are concepts that are close to suppression. So "in order to interpret these results you will have to think about / have some theory of why the relationships manifest as they do. That is, you will need to think about the pattern of causal relationships that underlies your data" (here).
R Code
require(MASS)
df <- data.frame(mvrnorm(50, mu = c(0, 0, 0),
Sigma = matrix(c(1, 0, 0.5,
0, 1, -0.5,
0.5, -0.5 ,1), ncol = 3),
empirical = TRUE))
names(df) <- c("success", "fear", "grades")
cor(df)
summary(lm(success ~ fear, data= df))
summary(lm(success ~ fear + grades, data= df)) | Control variables and other independent variables | My suggestion
1. Model: Run a model only with the independent variables you are interested in.
2. Model: Add the variables you want to control for into the model.
Reason for the suggestion
If you add all | Control variables and other independent variables
My suggestion
1. Model: Run a model only with the independent variables you are interested in.
2. Model: Add the variables you want to control for into the model.
Reason for the suggestion
If you add all the variables together this might change the coefficients of your predictors, i.e. increase/decrease or even change the direction of the coefficients. If there is no contradiction between both models you can interpret the coefficient of the independent variable from the second model.
Example
I'll use an example that I was taught by a statistic professor of my university. Let's assume you want to predict success at work (variable: success; higher is better) and you collected different variables to do this. Among others, you know what the average grades of the people are (variable: grades; higher is better) and how much people are afraid of exams (variable: fear; higher is worse, i.e. more fear). You run a multiple regression and you get the following results:
Variable Estimate Std. Error t value p
Intercept 0.00 0.12 0.00 1.000
fear 0.33 0.14 2.42 0.019
grades 0.67 0.14 4.85 < 0.001
Incredible: The more someone was afraid of exams the more successful he/ she is at work. Who would have thought that! But wait: Actually, I made up the data and in fact the correlation matrix looks like this (see R code below):
success fear grades
success 1 0 .5
fear 0 1 -.5
grades .5 -.5 1
As you can see, actually there is no correlation between fear and success. Keep in mind the relationship between correlation and regression: If the correlation is zero, the estimate of the regression will be zero, too. Why does the estimate of fear increase in the multivariate regression? We have a suppression effect here: fear does not explain variance in the dependent variable but in the other independent variable grades.
This means you would draw a wrong conclusion if you only looked at the multivariate regression! Although the estimate of fear is positive and significant in the multivariate regression, you cannot say that having more fear increases success at work. And you can only understand that this conclusion would be wrong if you compare the univariate (1. model) and the multivariate (2. model) regression.
Real world examples and a bigger picture
The example above shows why I disagree with @Florian Hartig who wrote that "suppressors are secondary, and seldom crucial for the scientific conclusions". To stress this point, please note that "Reversal in association (magnitude or direction) are examples of Simpson's Paradox, Lord's Paradox and Suppression Effects. The differences essentially relate to the type of variable" (here). Thus not only suppression but also Simpson's and Lord's paradox can occur in multivariate regression; hence the suggestion to run two models in order to avoid misleading results applies here, too. You can find real world examples here or here. Also, looking for suppression on cv shows that people do encounter this problem. See here or here, for example.
Last notes
Also keep in mind that there are concepts that are close to suppression. So "in order to interpret these results you will have to think about / have some theory of why the relationships manifest as they do. That is, you will need to think about the pattern of causal relationships that underlies your data" (here).
R Code
require(MASS)
df <- data.frame(mvrnorm(50, mu = c(0, 0, 0),
Sigma = matrix(c(1, 0, 0.5,
0, 1, -0.5,
0.5, -0.5 ,1), ncol = 3),
empirical = TRUE))
names(df) <- c("success", "fear", "grades")
cor(df)
summary(lm(success ~ fear, data= df))
summary(lm(success ~ fear + grades, data= df)) | Control variables and other independent variables
My suggestion
1. Model: Run a model only with the independent variables you are interested in.
2. Model: Add the variables you want to control for into the model.
Reason for the suggestion
If you add all |
51,955 | Control variables and other independent variables | Keep in mind the goal of regression through this simple example:
Let's say you want to determine the relationship between the price of homes in the state you live in and the size of the home. And you are clever, so you also include a control variable which indicates whether the home is in a rural or urban district of the state. To study this problem, you randomly pick 311 homes in your state and record those three measurements for each.
Now the question that regression is trying to answer is as follows: given the relationships between the variables observed (which we obtain through the coef. column of the output), are these relationships likely to be simply a coincidence of the particular 311 homes we randomly picked, or is it likely that regardless of which homes ended up in the sample, you would still be likely to observe the same relationships.
(slight tangent on what we mean by significance)
How do we define what "likely" means in this context? This is where the p > |t| column comes in. In general, you will often see a significance level of 5% being used. That is, you want your p > |t| column to be less than .05 and if it is you call the variable "statistically significant". The exact reason for this starts to get technical, but the general idea is as follows: suppose there is no relationship between the things we are studying. If we were to repeatedly take random samples and perform the regression, how often would the regression indicate there was a significant relationship between the things? The answer is around 5% of the time.
OK, back to the concept of control variables.
In this scenario, the reason for including a control variable is rather obvious-- real estate prices are highly influenced by location so by including a location control variable you will greatly increase your understanding of the relationship between price and size. In fact, by not including a control variable, it is possible that you completely miss the true relationship between your variables of interest. Consider this situation:
Let's say you have a population of interest and you want to know the relationship between Y and X. In this population, people are 50-50 split between membership in some group (this group membership is our control variable). Suppose that if people belong to the group, the coefficient between Y and X is a large positive number and if not it is large negative number. If we ran a regression without including this group membership we would find that our regression would tell us there is no relationship between Y and X! The large positive association that exists for people in the group would be completely negated by the large negative association between Y and X that people not in the group have.
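A quick simulation of exactly that situation (my own illustration, not part of the original answer):
set.seed(42)
n <- 1000
group <- rbinom(n, 1, 0.5)                        # 50-50 membership in the control-variable group
x <- rnorm(n)
y <- ifelse(group == 1, 2 * x, -2 * x) + rnorm(n)
coef(lm(y ~ x))          # pooled slope is close to zero
coef(lm(y ~ x * group))  # slope for group 0 is about -2; the x:group term (about +4) makes it +2 for group 1
Leaving group out makes the X-Y relationship look like nothing; including it recovers the two opposite slopes.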
But it can get even worse! Look at https://en.wikipedia.org/wiki/Simpson%27s_paradox . Not only can relationships be negated by not including the correct control variables, doing so may give you relationships that are the opposite of what they are in reality. You think Y and X are positively correlated, but in reality after proper controls they are negatively correlated. This is why we sometimes see 2 studies on the same subject that come to opposite conclusions!
Now, back to your questions.
How can I explain the relationship between the controlling variables and my other significant independent variables? What do the insignificant controlling variables say about the significant independent variables?
Given your results, it would appear that location (I am assuming this is what the categorical variable BO represents) has no impact on the relationship between your dependent and independent variables. Essentially, because the p > |t| is so large, it is highly likely that another random sample of data on these variables would produce entirely different coefficients for Oppland Oslo and so on. Which is to say, it would not at all be unusual to see that another sample produced a slightly positive coefficient for Oppland instead of a slightly negative one.
Because the p > |t| values are so large for this variable it should almost certainly be excluded and instead you should just note that you attempted to control for location but found no relationship.
Now AB is a different story because one level is highly significant while the others are not. I would guess the improvement you see (according to r^2 and adj r^2) in the bigger model is almost entirely because of the >= 10 ganger level of AB. One thing you could do here is instead turn the variable into a binary variable and see what happens. But since this is an ordinal categorical variable (there is a natural way to order the levels) sometimes slightly non-significant levels are kept if the coefficients "make sense".
To specifically answer your question: holding everything else constant we would expect an observation with 10+ ganger to have a higher LOJ than one with <10 ganger, on average.
And how do I know if my significant independent variables are "good" or not, when all the controlling variables are insignificant?
It could just be the case that X and Y have a highly causative relationship and that not many other things influence the relationship between them very much. Or you just picked bad control variables. It really doesn't mean much on its own. It can be a cause for concern, though. Like if you are doing the house price study and it comes out that location isn't significant I would be very concerned about the data quality.
This question really gets down to the heart of the issues with regression. Why do we see studies that say x causes cancer, and then another that says the exact opposite? Because the whole thing only works when we have included all variables relevant to the problem. In nutrition studies, as an example, this is incredibly difficult. Because in reality we can never know for certain that we haven't missed an incredibly influential lurking variable that, if included, would change all the results. | Control variables and other independent variables | Keep in mind the goal of regression through this simple example:
Let's say you want to determine the relationship between the price of homes in the state you live in and the size of the home. And yo | Control variables and other independent variables
Keep in mind the goal of regression through this simple example:
Let's say you want to determine the relationship between the price of homes in the state you live in and the size of the home. And you are clever, so you also include a control variable which indicates whether the home is in a rural or urban district of the state. To study this problem, you randomly pick 311 homes in your state and record those three measurements for each.
Now the question that regression is trying to answer is as follows: given the relationships between the variables observed (which we obtain through the coef. column of the output), are these relationships likely to be simply a coincidence of the particular 311 homes we randomly picked, or is it likely that regardless of which homes ended up in the sample, you would still be likely to observe the same relationships.
(slight tangent on what we mean by significance)
How do we define what "likely" means in this context? This is where the p > |t| column comes in. In general, you will often see a significance level of 5% being used. That is, you want your p > |t| column to be less than .05 and if it is you call the variable "statistically significant". The exact reason for this starts to get technical, but the general idea is as follows: suppose there is no relationship between the things we are studying. If we were to repeatedly take random samples and perform the regression, how often would the regression indicate there was a significant relationship between the things? The answer is around 5% of the time.
OK, back to the concept of control variables.
In this scenario, the reason for including a control variable is rather obvious-- real estate prices are highly influenced by location so by including a location control variable you will greatly increase your understanding of the relationship between price and size. In fact, by not including a control variable, it is possible that you completely miss the true relationship between your variables of interest. Consider this situation:
Let's say you have a population of interest and you want to know the relationship between Y and X. In this population, people are 50-50 split between membership in some group (this group membership is our control variable). Suppose that if people belong to the group, the coefficient between Y and X is a large positive number, and if not it is a large negative number. If we ran a regression without including this group membership, we would find that the regression tells us there is no relationship between Y and X! The large positive association that exists for people in the group would be completely negated by the large negative association between Y and X that people not in the group have.
But it can get even worse! Look at https://en.wikipedia.org/wiki/Simpson%27s_paradox . Not only can relationships be negated by not including the correct control variables, doing so may give you relationships that are the opposite of what they are in reality. You think Y and X are positively correlated, but in reality after proper controls they are negatively correlated. This is why we sometimes see 2 studies on the same subject that come to opposite conclusions!
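If it helps to see this concretely, here is a tiny R simulation of the masking case described above (the variable names and numbers are invented purely for illustration):
set.seed(1)
n <- 1000
group <- rbinom(n, 1, 0.5)                          # 50-50 group membership
x <- rnorm(n)
y <- ifelse(group == 1, 2 * x, -2 * x) + rnorm(n)   # slope +2 in the group, -2 outside it

coef(lm(y ~ x))           # pooled model: the x slope is close to 0
coef(lm(y ~ x * group))   # controlling for group (and its interaction) recovers the +/-2 pattern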
Now, back to your questions.
How can I explain the relationship between the controlling variables and my other significant independent variables? What do the insignificant controlling variables say about the significant independent variables?
Given your results, it would appear that location (I am assuming this is what the categorical variable BO represents) has no impact on the relationship between your dependent and independent variables. Essentially, because the p > |t| is so large, it is highly likely that another random sample of data on these variables would produce entirely different coefficients for Oppland Oslo and so on. Which is to say, it would not at all be unusual to see that another sample produced a slightly positive coefficient for Oppland instead of a slightly negative one.
Because the p > |t| values are so large for this variable it should almost certainly be excluded and instead you should just note that you attempted to control for location but found no relationship.
Now AB is a different story because one level is highly significant while the others are not. I would guess the improvement you see (according to r^2 and adj r^2) in the bigger model is almost entirely because of the >= 10 ganger level of AB. One thing you could do here is instead turn the variable into a binary variable and see what happens. But since this is an ordinal categorical variable (there is a natural way to order the levels) sometimes slightly non-significant levels are kept if the coefficients "make sense".
To specifically answer your question: holding everything else constant we would expect an observation with 10+ ganger to have a higher LOJ than one with <10 ganger, on average.
And how do I know if my significant independent variables are "good" or not, when all the controlling variables are insignificant?
It could just be the case that X and Y have a highly causative relationship and that not many other things influence the relationship between them very much. Or you just picked bad control variables. It really doesn't mean much on its own. It can be a cause for concern, though. Like if you are doing the house price study and it comes out that location isn't significant I would be very concerned about the data quality.
This question really gets down to the heart of the issues with regression. Why do we see studies that say x causes cancer, and then another that says the exact opposite? Because the whole thing only works when we have included all variables relevant to the problem. In nutrition studies, as an example, this is incredibly difficult. Because in reality we can never know for certain that we haven't missed an incredibly influential lurking variable that, if included, would change all the results. | Control variables and other independent variables
Keep in mind the goal of regression through this simple example:
Let's say you want to determine the relationship between the price of homes in the state you live in and the size of the home. And yo |
51,956 | Probability that a draw from a normal distribution is some number greater than another draw from the same distribution | Since it looks like self-study question, I'll start with a hint: Think of $X_1-X_2$ with
$X_1, X_2 \sim N(470, 70^2)$. What distribution does $X_1-X_2$ follow? How to interpret $X_1-X_2$? | Probability that a draw from a normal distribution is some number greater than another draw from the | Since it looks like self-study question, I'll start with a hint: Think of $X_1-X_2$ with
$X_1, X_2 \sim N(470, 70^2)$. What distribution does $X_1-X_2$ follow? How to interpret $X_1-X_2$? | Probability that a draw from a normal distribution is some number greater than another draw from the same distribution
Since it looks like a self-study question, I'll start with a hint: Think of $X_1-X_2$ with
$X_1, X_2 \sim N(470, 70^2)$. What distribution does $X_1-X_2$ follow? How to interpret $X_1-X_2$? | Probability that a draw from a normal distribution is some number greater than another draw from the
Since it looks like a self-study question, I'll start with a hint: Think of $X_1-X_2$ with
$X_1, X_2 \sim N(470, 70^2)$. What distribution does $X_1-X_2$ follow? How to interpret $X_1-X_2$? |
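For what it's worth, once you know that $X_1-X_2 \sim N(0, 2\cdot 70^2)$, the numbers are easy to check in R (the threshold of 100 below is only an illustrative choice, since the question's exact number is not reproduced here):
sd_diff <- sqrt(2) * 70                     # sd of X1 - X2
1 - pnorm(100, mean = 0, sd = sd_diff)      # P(X1 - X2 > 100), about 0.156

x1 <- rnorm(1e6, 470, 70); x2 <- rnorm(1e6, 470, 70)
mean(x1 - x2 > 100)                         # Monte Carlo check of the same probability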
51,957 | Is there any better alternative to Linear Probability Model? | The first "drawback" you mention is the definition of the risk difference, so there is no avoiding this.
There is at least one way to obtain the risk difference using the logistic regression model. It is the average marginal effects approach. The formula depends on whether the predictor of interest is binary or continuous. I will focus on the case of the continuous predictor.
Imagine the following logistic regression model:
$$\ln\bigg[\frac{\hat\pi}{1-\hat\pi}\bigg] = \hat{y}^* = \hat\gamma_c \times x_c + Z\hat\beta$$
where $Z$ is an $n$ cases by $k$ predictors matrix including the constant, $\hat\beta$ are $k$ regression weights for the $k$ predictors, $x_c$ is the continuous predictor whose effect is of interest and $\hat\gamma_c$ is its estimated coefficient on the log-odds scale.
Then the average marginal effect is:
$$\mathrm{RD}_c = \hat\gamma_c \times \frac{1}{n}\Bigg(\sum\frac{e^{\hat{y}^*}}{\big(1 + e^{\hat{y}^*}\big)^2}\Bigg)\\$$
This is the average PDF scaled by the weight of $x_c$. It turns out that this effect is very well approximated by the regression weight from OLS applied to the problem regardless of drawbacks 2 and 3. This is the simplest justification in practice for the application of OLS to estimating the linear probability model.
For drawback 2, as mentioned in one of your citations, we can manage it using heteroskedasticity-consistent standard errors.
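For example (a sketch, assuming a fitted lm object called fit.ols like the one in the simulation further below), the sandwich and lmtest packages give the usual heteroskedasticity-consistent standard errors:
library(sandwich)
library(lmtest)

# HC ("robust") standard errors for the linear probability model;
# confidence intervals on the risk-difference scale can be built from these.
coeftest(fit.ols, vcov = vcovHC(fit.ols, type = "HC1"))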
Now, Horrace and Oaxaca (2003) have done some very interesting work on consistent estimators for the linear probability model. To explain their work, it is useful to lay out the conditions under which the linear probability model is the true data generating process for a binary response variable. We begin with:
\begin{align}
\begin{split}
P(y = 1 \mid X)
{}& = P(X\beta + \epsilon > t \mid X) \quad \text{using a latent variable formulation for } y \\
{}& = P(\epsilon > t-X\beta \mid X)
\end{split}
\end{align}
where $y \in \{0, 1\}$, $t$ is some threshold above which the latent variable is observed as 1, $X$ is matrix of $n$ cases by $k$ predictors, and $\beta$ their weights. If we assume $\epsilon\sim\mathcal{U}(-0.5, 0.5)$ and $t=0.5$, then:
\begin{align}
\begin{split}
P(y = 1 \mid X)
{}& = P(\epsilon > 0.5-X\beta \mid X) \\
{}& = P(\epsilon < X\beta -0.5 \mid X) \quad \text{since $\mathcal{U}(-0.5, 0.5)$ is symmetric about 0} \\
{}&=\begin{cases}
0, & \mathrm{if}\ X\beta -0.5 < -0.5\\
\frac{(X\beta -0.5)-(-0.5)}{0.5-(-0.5)}, & \mathrm{if}\ X\beta -0.5 \in [-0.5, 0.5)\\
1, & \mathrm{if}\ X\beta -0.5 \geq 0.5
\end{cases} \quad \text{CDF of $\mathcal{U}(-0.5,0.5)$}\\
{}&=\begin{cases}
0, & \mathrm{if}\ X\beta < 0\\
X\beta, & \mathrm{if}\ X\beta \in [0, 1)\\
1, & \mathrm{if}\ X\beta \geq 1
\end{cases}
\end{split}
\end{align}
So the relationship between $X\beta$ and $P(y = 1\mid X)$ is only linear when $X\beta \in [0, 1]$, otherwise it is not. Horrace and Oaxaca suggested that we may use $X\hat\beta$ as a proxy for $X\beta$ and in empirical situations, if we assume a linear probability model, we should consider it inadequate if there are any predicted values outside the unit interval.
As a solution, they recommended the following steps:
Estimate the model using OLS
Check for any fitted values outside the unit interval. If there are none, stop, you have your model.
Drop all cases with fitted values outside the unit interval and return to step 1
Using a simple simulation (and in my own more extensive simulations), they found this approach to recover adequately $\beta$ when the linear probability model is true. They termed the approach sequential least squares (SLS). SLS is similar in spirit to doing MLE and censoring the mean of the normal distribution at 0 and 1 within each iteration of estimation, see Wacholder (1986).
Now how about if the logistic regression model is true? I will demonstrate in a simulated data example what happens using R:
# An implementation of SLS
s.ols <- function(fit.ols) {
dat.ols <- model.frame(fit.ols)
n.org <- nrow(dat.ols)
fitted <- fit.ols$fitted.values
form <- formula(fit.ols)
while (any(fitted > 1 | fitted < 0)) {
dat.ols <- dat.ols[!(fitted > 1 | fitted < 0), ]
m.ols <- lm(form, dat.ols)
fitted <- m.ols$fitted.values
}
m.ols <- lm(form, dat.ols)
# Bound predicted values at 0 and 1 using complete data
m.ols$fitted.values <- punif(as.numeric(model.matrix(fit.ols) %*% coef(m.ols)))
m.ols
}
set.seed(12345)
n <- 20000
dat <- data.frame(x = rnorm(n))
# With an intercept of 2, this will be a high probability outcome
dat$y <- ((2 + 2 * dat$x + rlogis(n)) > 0) + 0
coef(fit.logit <- glm(y ~ x, binomial, dat))
# (Intercept) x
# 2.042820 2.021912
coef(fit.ols <- lm(y ~ x, dat))
# (Intercept) x
# 0.7797852 0.2237350
coef(fit.sls <- s.ols(fit.ols))
# (Intercept) x
# 0.8989707 0.3932077
We see that the RD from OLS is .22 and that from SLS is .39. We can also compute the average marginal effect from the logistic regression equation:
coef(fit.logit)["x"] * mean(dlogis(predict(fit.logit)))
# x
# 0.224426
We can see that the OLS estimate is very close to this value.
How about we plot the different effects to better understand what they try to capture:
library(ggplot2)
dat.res <- data.frame(
x = dat$x, logit = fitted(fit.logit),
ols = fitted(fit.ols), sls = fitted(fit.sls))
dat.res <- tidyr::gather(dat.res, model, fitted, logit:sls)
ggplot(dat.res, aes(x, fitted, col = model)) +
geom_line() + theme_bw()
From here, we see that the OLS fit looks nothing like the logistic curve. OLS captures the average change in the probability of y across the range of x (the average marginal effect), while SLS gives a linear approximation to the logistic curve in the region where it is changing on the probability scale.
In this scenario, I think the SLS estimate better reflects the reality of the situation.
As with OLS, heteroskedasticity is implied by SLS, so Horrace and Oaxaca recommend heteroskedasticity-consistent standard errors.
Horrace, W. C., & Oaxaca, R. L. (2003, January 1). New Wine in Old Bottles: A Sequential Estimation Technique for the Lpm. Retrieved from https://papers.ssrn.com/sol3/papers.cfm?abstract_id=383102
Wacholder, S. (1986). Binomial regression in GLIM: estimating risk ratios and risk differences. American Journal of Epidemiology, 123(1), 174–184. https://doi.org/10.1093/oxfordjournals.aje.a114212 | Is there any better alternative to Linear Probability Model? | The first "drawback" you mention is the definition of the risk difference, so there is no avoiding this.
There is at least one way to obtain the risk difference using the logistic regression model. It | Is there any better alternative to Linear Probability Model?
The first "drawback" you mention is the definition of the risk difference, so there is no avoiding this.
There is at least one way to obtain the risk difference using the logistic regression model. It is the average marginal effects approach. The formula depends on whether the predictor of interest is binary or continuous. I will focus on the case of the continuous predictor.
Imagine the following logistic regression model:
$$\ln\bigg[\frac{\hat\pi}{1-\hat\pi}\bigg] = \hat{y}^* = \hat\gamma_c \times x_c + Z\hat\beta$$
where $Z$ is an $n$ cases by $k$ predictors matrix including the constant, $\hat\beta$ are $k$ regression weights for the $k$ predictors, $x_c$ is the continuous predictor whose effect is of interest and $\hat\gamma_c$ is its estimated coefficient on the log-odds scale.
Then the average marginal effect is:
$$\mathrm{RD}_c = \hat\gamma_c \times \frac{1}{n}\Bigg(\sum\frac{e^{\hat{y}^*}}{\big(1 + e^{\hat{y}^*}\big)^2}\Bigg)\\$$
This is the average PDF scaled by the weight of $x_c$. It turns out that this effect is very well approximated by the regression weight from OLS applied to the problem regardless of drawbacks 2 and 3. This is the simplest justification in practice for the application of OLS to estimating the linear probability model.
For drawback 2, as mentioned in one of your citations, we can manage it using heteroskedasticity-consistent standard errors.
Now, Horrace and Oaxaca (2003) have done some very interesting work on consistent estimators for the linear probability model. To explain their work, it is useful to lay out the conditions under which the linear probability model is the true data generating process for a binary response variable. We begin with:
\begin{align}
\begin{split}
P(y = 1 \mid X)
{}& = P(X\beta + \epsilon > t \mid X) \quad \text{using a latent variable formulation for } y \\
{}& = P(\epsilon > t-X\beta \mid X)
\end{split}
\end{align}
where $y \in \{0, 1\}$, $t$ is some threshold above which the latent variable is observed as 1, $X$ is matrix of $n$ cases by $k$ predictors, and $\beta$ their weights. If we assume $\epsilon\sim\mathcal{U}(-0.5, 0.5)$ and $t=0.5$, then:
\begin{align}
\begin{split}
P(y = 1 \mid X)
{}& = P(\epsilon > 0.5-X\beta \mid X) \\
{}& = P(\epsilon < X\beta -0.5 \mid X) \quad \text{since $\mathcal{U}(-0.5, 0.5)$ is symmetric about 0} \\
{}&=\begin{cases}
0, & \mathrm{if}\ X\beta -0.5 < -0.5\\
\frac{(X\beta -0.5)-(-0.5)}{0.5-(-0.5)}, & \mathrm{if}\ X\beta -0.5 \in [-0.5, 0.5)\\
1, & \mathrm{if}\ X\beta -0.5 \geq 0.5
\end{cases} \quad \text{CDF of $\mathcal{U}(-0.5,0.5)$}\\
{}&=\begin{cases}
0, & \mathrm{if}\ X\beta < 0\\
X\beta, & \mathrm{if}\ X\beta \in [0, 1)\\
1, & \mathrm{if}\ X\beta \geq 1
\end{cases}
\end{split}
\end{align}
So the relationship between $X\beta$ and $P(y = 1\mid X)$ is only linear when $X\beta \in [0, 1]$, otherwise it is not. Horrace and Oaxaca suggested that we may use $X\hat\beta$ as a proxy for $X\beta$ and in empirical situations, if we assume a linear probability model, we should consider it inadequate if there are any predicted values outside the unit interval.
As a solution, they recommended the following steps:
Estimate the model using OLS
Check for any fitted values outside the unit interval. If there are none, stop, you have your model.
Drop all cases with fitted values outside the unit interval and return to step 1
Using a simple simulation (and in my own more extensive simulations), they found this approach to recover adequately $\beta$ when the linear probability model is true. They termed the approach sequential least squares (SLS). SLS is similar in spirit to doing MLE and censoring the mean of the normal distribution at 0 and 1 within each iteration of estimation, see Wacholder (1986).
Now how about if the logistic regression model is true? I will demonstrate in a simulated data example what happens using R:
# An implementation of SLS
s.ols <- function(fit.ols) {
dat.ols <- model.frame(fit.ols)
n.org <- nrow(dat.ols)
fitted <- fit.ols$fitted.values
form <- formula(fit.ols)
while (any(fitted > 1 | fitted < 0)) {
dat.ols <- dat.ols[!(fitted > 1 | fitted < 0), ]
m.ols <- lm(form, dat.ols)
fitted <- m.ols$fitted.values
}
m.ols <- lm(form, dat.ols)
# Bound predicted values at 0 and 1 using complete data
m.ols$fitted.values <- punif(as.numeric(model.matrix(fit.ols) %*% coef(m.ols)))
m.ols
}
set.seed(12345)
n <- 20000
dat <- data.frame(x = rnorm(n))
# With an intercept of 2, this will be a high probability outcome
dat$y <- ((2 + 2 * dat$x + rlogis(n)) > 0) + 0
coef(fit.logit <- glm(y ~ x, binomial, dat))
# (Intercept) x
# 2.042820 2.021912
coef(fit.ols <- lm(y ~ x, dat))
# (Intercept) x
# 0.7797852 0.2237350
coef(fit.sls <- s.ols(fit.ols))
# (Intercept) x
# 0.8989707 0.3932077
We see that the RD from OLS is .22 and that from SLS is .39. We can also compute the average marginal effect from the logistic regression equation:
coef(fit.logit)["x"] * mean(dlogis(predict(fit.logit)))
# x
# 0.224426
We can see that the OLS estimate is very close to this value.
How about we plot the different effects to better understand what they try to capture:
library(ggplot2)
dat.res <- data.frame(
x = dat$x, logit = fitted(fit.logit),
ols = fitted(fit.ols), sls = fitted(fit.sls))
dat.res <- tidyr::gather(dat.res, model, fitted, logit:sls)
ggplot(dat.res, aes(x, fitted, col = model)) +
geom_line() + theme_bw()
From here, we see that the OLS fit looks nothing like the logistic curve. OLS captures the average change in the probability of y across the range of x (the average marginal effect), while SLS gives a linear approximation to the logistic curve in the region where it is changing on the probability scale.
In this scenario, I think the SLS estimate better reflects the reality of the situation.
As with OLS, heteroskedasticity is implied by SLS, so Horrace and Oaxaca recommend heteroskedasticity-consistent standard errors.
Horrace, W. C., & Oaxaca, R. L. (2003, January 1). New Wine in Old Bottles: A Sequential Estimation Technique for the Lpm. Retrieved from https://papers.ssrn.com/sol3/papers.cfm?abstract_id=383102
Wacholder, S. (1986). Binomial regression in GLIM: estimating risk ratios and risk differences. American Journal of Epidemiology, 123(1), 174–184. https://doi.org/10.1093/oxfordjournals.aje.a114212 | Is there any better alternative to Linear Probability Model?
The first "drawback" you mention is the definition of the risk difference, so there is no avoiding this.
There is at least one way to obtain the risk difference using the logistic regression model. It |
51,958 | Is there any better alternative to Linear Probability Model? | Every model has this problem. For example, logistic regression implies the constant log odd ratio.
For the binomial distribution, the variance is $p(1-p)$ for one trial, so different predicted values of p imply different variances. But in the model-fitting process this problem is resolved by WLS (weighted least squares).
For $\hat p = X\hat \beta$, it is possible for some $X$, the $\hat p$ can go lower than 0, or higher than 1, especially when the model is used to predict the probability using the $X$s that is not in the dataset used to build the model. | Is there any better alternative to Linear Probability Model? | Every model has this problem. For example, logistic regression implies the constant log odd ratio.
For binomial distribution, the variance is $p(1-p)$ for one trial. So the different predict value of | Is there any better alternative to Linear Probability Model?
Every model has this problem. For example, logistic regression implies a constant log odds ratio.
For the binomial distribution, the variance is $p(1-p)$ for one trial, so different predicted values of p imply different variances. But in the model-fitting process this problem is resolved by WLS (weighted least squares).
For $\hat p = X\hat \beta$, it is possible that for some $X$ the $\hat p$ goes below 0 or above 1, especially when the model is used to predict probabilities for $X$ values that are not in the dataset used to build the model.
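A short illustration of that last point (simulated data with a logistic truth, purely for illustration):
set.seed(1)
x <- rnorm(500, sd = 2)
y <- rbinom(500, 1, plogis(2 * x))   # the true model is logistic
fit <- lm(y ~ x)                     # linear probability model fit by OLS
range(fitted(fit))                   # the fitted "probabilities" extend below 0 and above 1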
Every model has this problem. For example, logistic regression implies a constant log odds ratio.
For binomial distribution, the variance is $p(1-p)$ for one trial. So the different predict value of |
51,959 | A simple Neural Network, finding weights to achieve 100% accuracy | There's three sides to the triangles and three hidden neurons. You want each hidden neurons to check on which side of the triangle side an input is. So:
The first hidden neuron will represent $x_1 > 0.5$
The second will represent $x_2 > 0.5$
The third will represent $x_1 + x_2 < 1$
Then the output will be something like a logical "and", or in other words, a sigmoid with an activation threshold. | A simple Neural Network, finding weights to achieve 100% accuracy | There's three sides to the triangles and three hidden neurons. You want each hidden neurons to check on which side of the triangle side an input is. So:
The first hidden neuron will represent $x_1 > | A simple Neural Network, finding weights to achieve 100% accuracy
There are three sides to the triangle and three hidden neurons. You want each hidden neuron to check which side of one of the triangle's edges an input falls on. So:
The first hidden neuron will represent $x_1 > 0.5$
The second will represent $x_2 > 0.5$
The third will represent $x_1 + x_2 < 1$
Then the output will be something like a logical "and", or in other words, a sigmoid with an activation threshold. | A simple Neural Network, finding weights to achieve 100% accuracy
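A sketch of weights that implement this idea in R (the factor of 20 is an arbitrary choice to make the sigmoids sharp, and the signs/thresholds have to match the actual triangle in the exercise, which is not reproduced here):
sigmoid <- function(z) 1 / (1 + exp(-z))

# three "side detectors", one per hidden neuron
h <- function(x1, x2) c(sigmoid(20 * (x1 - 0.5)),      # fires when x1 > 0.5
                        sigmoid(20 * (x2 - 0.5)),      # fires when x2 > 0.5
                        sigmoid(20 * (1 - x1 - x2)))   # fires when x1 + x2 < 1
# output neuron: a soft logical AND of the three detectors
out <- function(x1, x2) sigmoid(20 * (sum(h(x1, x2)) - 2.5))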
There are three sides to the triangle and three hidden neurons. You want each hidden neuron to check which side of one of the triangle's edges an input falls on. So:
The first hidden neuron will represent $x_1 > |
51,960 | Approximating leaky ReLU with a differentiable function | The softplus function is commonly described as a smooth approximation of the standard ReLU:
$$s(x) = \log(1 + e^x)$$
The leaky ReLU (with leak coefficient $\alpha$) is:
$$r_L(x) = \max \{ \alpha x, x\}$$
We can also write this as:
$$r_L(x) = \alpha x + (1-\alpha) \max\{0, x\}$$
Note that $\max\{0, x\}$ is the standard ReLU. So, we can construct a smooth approximation to the leaky ReLU by substituting in the softplus function:
$$s_L(x) = \alpha x + (1-\alpha) \log(1 + e^x)$$ | Approximating leaky ReLU with a differentiable function | The softplus function is commonly described as a smooth approximation of the standard ReLU:
$$s(x) = \log(1 + e^x)$$
The leaky ReLU (with leak coefficient $\alpha$) is:
$$r_L(x) = \max \{ \alpha x, x | Approximating leaky ReLU with a differentiable function
The softplus function is commonly described as a smooth approximation of the standard ReLU:
$$s(x) = \log(1 + e^x)$$
The leaky ReLU (with leak coefficient $\alpha$) is:
$$r_L(x) = \max \{ \alpha x, x\}$$
We can also write this as:
$$r_L(x) = \alpha x + (1-\alpha) \max\{0, x\}$$
Note that $\max\{0, x\}$ is the standard ReLU. So, we can construct a smooth approximation to the leaky ReLU by substituting in the softplus function:
$$s_L(x) = \alpha x + (1-\alpha) \log(1 + e^x)$$ | Approximating leaky ReLU with a differentiable function
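A quick numerical comparison of the two functions ($\alpha = 0.1$ is just an example value):
alpha <- 0.1
leaky   <- function(x) pmax(alpha * x, x)
s_leaky <- function(x) alpha * x + (1 - alpha) * log(1 + exp(x))

x <- seq(-6, 6, by = 2)
round(cbind(x, leaky = leaky(x), smooth = s_leaky(x)), 3)
# the two agree closely away from 0 and differ most near the kink at x = 0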
The softplus function is commonly described as a smooth approximation of the standard ReLU:
$$s(x) = \log(1 + e^x)$$
The leaky ReLU (with leak coefficient $\alpha$) is:
$$r_L(x) = \max \{ \alpha x, x |
51,961 | Approximating leaky ReLU with a differentiable function | I stumbled on this on accident, not sure if this would be useful but try this weird function I thought of:
(1/20)x(e^arctan(x))
Edit: let's put a constant of, say, (1/20) in front to prevent the slope being greater than one
arctan goes to plus or minus pi/2 at values |x|>3 so we have on the left hand side xe^(-pi/2)= x(small constant) and on the right side xe^(pi/2) = x(large constant) which resembles the leaky-relu activation function
plotted on wolfram alpha:
https://www.wolframalpha.com/input/?i=x*%28e%5Earctan%28x%29%29 | Approximating leaky ReLU with a differentiable function | I stumbled on this on accident, not sure if this would be useful but try this weird function I thought of:
(1/20)x(e^arctan(x))
Edit: let's put a constant of say (1/20) in front to prevent the slope b | Approximating leaky ReLU with a differentiable function
I stumbled on this on accident, not sure if this would be useful but try this weird function I thought of:
(1/20)x(e^arctan(x))
Edit: let's put a constant of, say, (1/20) in front to prevent the slope being greater than one
arctan goes to plus or minus pi/2 at values |x|>3 so we have on the left hand side xe^(-pi/2)= x(small constant) and on the right side xe^(pi/2) = x(large constant) which resembles the leaky-relu activation function
plotted on wolfram alpha:
https://www.wolframalpha.com/input/?i=x*%28e%5Earctan%28x%29%29 | Approximating leaky ReLU with a differentiable function
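If you would rather look at it in R than on Wolfram Alpha:
f <- function(x) (1 / 20) * x * exp(atan(x))
curve(f, from = -10, to = 10)
abline(0, exp(-pi / 2) / 20, lty = 2)   # slope the function approaches on the left
abline(0, exp(pi / 2) / 20, lty = 3)    # slope it approaches on the right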
I stumbled on this on accident, not sure if this would be useful but try this weird function I thought of:
(1/20)x(e^arctan(x))
Edit: let's put a constant of say (1/20) in front to prevent the slope b |
51,962 | Approximating leaky ReLU with a differentiable function | Leaky RELU is defined as:
LogSumExp is a smooth approximation of the max function. So I would go with: | Approximating leaky ReLU with a differentiable function | Leaky RELU is defined as:
LogSumExp is a smooth approximation of the max function. So I would go with: | Approximating leaky ReLU with a differentiable function
Leaky RELU is defined as $f(x) = \max\{\alpha x, x\}$, with leak coefficient $\alpha$.
LogSumExp is a smooth approximation of the max function. So I would go with $\log\big(e^{\alpha x} + e^{x}\big)$.
Leaky RELU is defined as:
LogSumExp is a smooth approximation of the max function. So I would go with: |
51,963 | VIF(collinearity) vs Correlation? | First, I think it is better to use condition indexes rather than VIF to diagnose collinearity. See the work of David Belsley or even (if you want a soporific) my dissertation (that link seems to have vanished; this one should work (I hope).
However, to get to your question: It is possible to have very low correlations among all variables but perfect collinearity. If you have 11 predictor variables, 10 of which are mutually independent and the 11th is the sum of the other 10, then the pairwise correlations are low (essentially 0 among the first 10, and about 0.3 between each of them and their sum), yet the collinearity is perfect. So, high VIF does not imply high correlations.
It is also true that you can have pretty high correlations without it creating troublesome collinearity, but this is trickier to show. See the references. | VIF(collinearity) vs Correlation? | First, I think it is better to use condition indexes rather than VIF to diagnose collinearity. See the work of David Belsley or even (if you want a soporific) my dissertation (that link seems to have | VIF(collinearity) vs Correlation?
First, I think it is better to use condition indexes rather than VIF to diagnose collinearity. See the work of David Belsley or even (if you want a soporific) my dissertation (that link seems to have vanished; this one should work, I hope).
However, to get to your question: It is possible to have very low correlations among all variables but perfect collinearity. If you have 11 predictor variables, 10 of which are mutually independent and the 11th is the sum of the other 10, then the pairwise correlations are low (essentially 0 among the first 10, and about 0.3 between each of them and their sum), yet the collinearity is perfect. So, high VIF does not imply high correlations.
It is also true that you can have pretty high correlations without it creating troublesome collinearity, but this is trickier to show. See the references. | VIF(collinearity) vs Correlation?
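A small simulation along these lines (a tiny bit of noise is added to the 11th variable so the model can still be fit; car::vif is one common implementation):
set.seed(42)
n <- 1000
X <- matrix(rnorm(n * 10), n, 10)          # 10 mutually independent predictors
x11 <- rowSums(X) + rnorm(n, sd = 0.01)    # (nearly) the sum of the other 10
dat <- data.frame(X, x11, y = rnorm(n))

max(abs(cor(dat[, 1:11])[upper.tri(diag(11))]))   # largest pairwise correlation, about 0.3
car::vif(lm(y ~ ., data = dat))                   # yet the VIFs are enormous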
First, I think it is better to use condition indexes rather than VIF to diagnose collinearity. See the work of David Belsley or even (if you want a soporific) my dissertation (that link seems to have |
51,964 | Doubling &/or halving p-values for one- vs. two-tailed tests | If you do a two-tailed test and computation gives you $p=0.03$, then $p<0.05$. The result is significant. If you do a one-tailed test, you will get a different result, depending on which tail you investigate. It will be either a lot larger or only half as big.
$\alpha=0.05$ is the usual convention, no matter whether you test one- or two-tailed. You don't halve that (except maybe in a Bonferroni correction, which is not the topic here). Thus yes, sometimes a one-tailed test will give you a significant result where the two-tailed test does not. However, this is not how things work: you always have to decide upfront whether you consider a one- or a two-tailed test appropriate, just as you have to fix your $\alpha$-level upfront. Then you calculate the $p$-value for that test, and there are no remaining degrees of freedom in how to test or what to compare the $p$-value to. If you decide on the sidedness of your test depending on whether you like the result, that is not good scientific practice.
That being said, there is hardly ever a situation where it is appropriate to test one-tailed. In far most circumstances it would be worth communicating a significant result in both directions. If you test one-tailed, some of your audience will consider this a trick to hack your $p$-value into being as small as possible. | Doubling &/or halving p-values for one- vs. two-tailed tests | If you do a two-tailed test and computation gives you $p=0.03$, then $p<0.05$. The result is significant. If you do a one-tailed test, you will get a different result, depending on which tail you inve | Doubling &/or halving p-values for one- vs. two-tailed tests
If you do a two-tailed test and computation gives you $p=0.03$, then $p<0.05$. The result is significant. If you do a one-tailed test, you will get a different result, depending on which tail you investigate. It will be either a lot larger or only half as big.
$\alpha=0.05$ is the usual convention, no matter whether you test one- or two-tailed. You don't halve that (except maybe in a Bonferroni correction, which is not the topic here). Thus yes, sometimes a one-tailed test will give you a significant result where the two-tailed test does not. However, this is not how things work: you always have to decide upfront whether you consider a one- or a two-tailed test appropriate, just as you have to fix your $\alpha$-level upfront. Then you calculate the $p$-value for that test, and there are no remaining degrees of freedom in how to test or what to compare the $p$-value to. If you decide on the sidedness of your test depending on whether you like the result, that is not good scientific practice.
That being said, there is hardly ever a situation where it is appropriate to test one-tailed. In the vast majority of circumstances it would be worth communicating a significant result in either direction. If you test one-tailed, some of your audience will consider this a trick to hack your $p$-value into being as small as possible.
If you do a two-tailed test and computation gives you $p=0.03$, then $p<0.05$. The result is significant. If you do a one-tailed test, you will get a different result, depending on which tail you inve |
51,965 | Doubling &/or halving p-values for one- vs. two-tailed tests | You have to consider how the $p$-value was determined. That will dictate how you should move between the $p$-value you have and the one you want. In general, a $p$-value is the proportion of possible values (e.g., test statistics, mean differences, etc.) that are as far away or further than your value under a given distribution.
In standard textbook presentations, the first step is to compute the area under the curve to the left of the value as you move from $-\infty$ to $\infty$. It may help to examine the figure below. If that is all you did (e.g., by looking up the $p$-value in a table, or what your software did), that constitutes the lower one-tailed $p$-value. If you wanted the upper one-tailed $p$-value, you would subtract that value from $1$ (not double or halve it).
[Figure: a normal density with the area to the left of the observed value shaded.]
However, it is pretty standard that statistical software will default to providing a two-tailed $p$-value. (To move from the textbook value above, you would multiply the $p$-value by two, if $<.5$—such as working with the white area in the figure above, or subtract from $1$ and then multiply by two, if $>.5$—such as working with the gray area above.) If you are starting from a two-tailed $p$-value, and you wanted to compute a one-tailed $p$-value, you need to determine which tail you are in relative to the tail you want to assess. Start by dividing the two-tailed $p$-value by $2$. Then, if your result is in the tail you are after (e.g., you want to know if $\bar x$ is significantly less than $\mu_0$ and $\bar x < \mu_0$), you are done. If your observed value is in the wrong tail, subtract the quotient from $1$.
It may help to walk through a simple calculation. For simplicity, imagine you test an observed sample mean (say, $.75$) against a null population mean value (call it $0$), when the population is known to be normally distributed and the standard deviation is known to be $1$. Your software returns a two-tailed $p$-value (viz., $.453$) and you want...
the two-tailed $p$-value: Stop, you're already there.
the $p$-value for the test that $\bar x > \mu_0$: $.453/2 = .227$.
the $p$-value for the test that $\bar x < \mu_0$: $.453/2 = .227\quad \Rightarrow \quad 1-.227= .773$.
If your software returned one of the above one-tailed $p$-values, and you wanted to determine the two-tailed $p$-value, you would reverse one of the processes listed above depending on whether you were in the tail you wanted or not. | Doubling &/or halving p-values for one- vs. two-tailed tests | You have to consider how the $p$-value was determined. That will dictate how you should move between the $p$-value you have and the one you want. In general, a $p$-value is the proportion of possibl | Doubling &/or halving p-values for one- vs. two-tailed tests
You have to consider how the $p$-value was determined. That will dictate how you should move between the $p$-value you have and the one you want. In general, a $p$-value is the proportion of possible values (e.g., test statistics, mean differences, etc.) that are as far away or further than your value under a given distribution.
In standard textbook presentations, the first step is to compute the area under the curve to the left of the value as you move from $-\infty$ to $\infty$. It may help to examine the figure below. If that is all you did (e.g., by looking up the $p$-value in a table, or what your software did), that constitutes the lower one-tailed $p$-value. If you wanted the upper one-tailed $p$-value, you would subtract that value from $1$ (not double or halve it).
[Figure: a normal density with the area to the left of the observed value shaded.]
However, it is pretty standard that statistical software will default to providing a two-tailed $p$-value. (To move from the textbook value above, you would multiply the $p$-value by two, if $<.5$—such as working with the white area in the figure above, or subtract from $1$ and then multiply by two, if $>.5$—such as working with the gray area above.) If you are starting from a two-tailed $p$-value, and you wanted to compute a one-tailed $p$-value, you need to determine which tail you are in relative to the tail you want to assess. Start by dividing the two-tailed $p$-value by $2$. Then, if your result is in the tail you are after (e.g., you want to know if $\bar x$ is significantly less than $\mu_0$ and $\bar x < \mu_0$), you are done. If your observed value is in the wrong tail, subtract the quotient from $1$.
It may help to walk through a simple calculation. For simplicity, imagine you test an observed sample mean (say, $.75$) against a null population mean value (call it $0$), when the population is known to be normally distributed and the standard deviation is known to be $1$. Your software returns a two-tailed $p$-value (viz., $.453$) and you want...
the two-tailed $p$-value: Stop, you're already there.
the $p$-value for the test that $\bar x > \mu_0$: $.453/2 = .227$.
the $p$-value for the test that $\bar x < \mu_0$: $.453/2 = .227\quad \Rightarrow \quad 1-.227= .773$.
If your software returned one of the above one-tailed $p$-values, and you wanted to determine the two-tailed $p$-value, you would reverse one of the processes listed above depending on whether you were in the tail you wanted or not. | Doubling &/or halving p-values for one- vs. two-tailed tests
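In R, with the same numbers as above ($\bar x = .75$, $\mu_0 = 0$, $\sigma = 1$, so the $z$-value works out to $0.75$):
z <- 0.75
2 * (1 - pnorm(z))   # two-tailed p-value, 0.453
1 - pnorm(z)         # test that xbar > mu0: 0.227 = 0.453 / 2
pnorm(z)             # test that xbar < mu0: 0.773 = 1 - 0.453 / 2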
You have to consider how the $p$-value was determined. That will dictate how you should move between the $p$-value you have and the one you want. In general, a $p$-value is the proportion of possibl |
51,966 | Doubling &/or halving p-values for one- vs. two-tailed tests | In ALL cases, $\alpha$ is some standard value like 0.05, and you do not alter it for the directionality of the test. Do not halve it, double it, etc. What is always true is that $p$ is some rejection area, and you are testing whether the area is smaller than some standard threshold (e.g. 0.05). What changes is how you calculate $p$.
I always think of $p$ as a function of the cumulative distribution function (CDF) of the test $T$ statistic you calculated - that is, a function of $F(T)$, which is the integral of the PDF of the test statistic's theoretical distribution from $-\infty$ up to your $T$. Here's the crib sheet for the $p$-values, after which I will explain them:
Left-tailed: $p = F(T)$
Right-tailed: $p = 1-F(T)$
Two-tailed: $p=\begin{cases}
T<0: & 2\cdot F(T)\\
T>0: & 2\cdot(1-F(T))
\end{cases}$
In all cases, you are trying to calculate the area of rejection. For left-tailed, it's just the CDF mentioned above. For right-tailed, it's the opposite: everything BUT the left-area, hence the $1-F(T)$. For two-tailed, it's slightly tricky. You're trying to create a rejection area of two symmetrical pieces. For a $T$ that is negative and thus lies to the left of the standardized distribution's mean of 0, you have something like a left-tail leading up to it. So, you take the left-tail calculation and double it. For a $T$ that is positive and thus lies to the right of the standardized distribution's mean of 0, you have something like a right-tail going past it. So you take the right-tail calculation and double it. | Doubling &/or halving p-values for one- vs. two-tailed tests | In ALL cases, $\alpha$ is some standard value like 0.05, and you do not alter it for the directionality of the test. Do not halve it, double it, etc. What is always true is that $p$ is some rejectio | Doubling &/or halving p-values for one- vs. two-tailed tests
In ALL cases, $\alpha$ is some standard value like 0.05, and you do not alter it for the directionality of the test. Do not halve it, double it, etc. What is always true is that $p$ is some rejection area, and you are testing whether the area is smaller than some standard threshold (e.g. 0.05). What changes is how you calculate $p$.
I always think of $p$ as a function of the cumulative distribution function (CDF) of the test $T$ statistic you calculated - that is, a function of $F(T)$, which is the integral of the PDF of the test statistic's theoretical distribution from $-\infty$ up to your $T$. Here's the crib sheet for the $p$-values, after which I will explain them:
Left-tailed: $p = F(T)$
Right-tailed: $p = 1-F(T)$
Two-tailed: $p=\begin{cases}
T<0: & 2\cdot F(T)\\
T>0: & 2\cdot(1-F(T))
\end{cases}$
In all cases, you are trying to calculate the area of rejection. For left-tailed, it's just the CDF mentioned above. For right-tailed, it's the opposite: everything BUT the left-area, hence the $1-F(T)$. For two-tailed, it's slightly tricky. You're trying to create a rejection area of two symmetrical pieces. For a $T$ that is negative and thus lies to the left of the standardized distribution's mean of 0, you have something like a left-tail leading up to it. So, you take the left-tail calculation and double it. For a $T$ that is positive and thus lies to the right of the standardized distribution's mean of 0, you have something like a right-tail going past it. So you take the right-tail calculation and double it. | Doubling &/or halving p-values for one- vs. two-tailed tests
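The crib sheet, written as a small R function for a test statistic whose null distribution is symmetric about 0 (the standard normal CDF is used as the default $F$):
p_value <- function(stat, cdf = pnorm, tail = c("left", "right", "two")) {
  tail <- match.arg(tail)
  switch(tail,
         left  = cdf(stat),
         right = 1 - cdf(stat),
         two   = if (stat < 0) 2 * cdf(stat) else 2 * (1 - cdf(stat)))
}

p_value(-1.7, tail = "left")   # 0.045
p_value(-1.7, tail = "two")    # 0.089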
In ALL cases, $\alpha$ is some standard value like 0.05, and you do not alter it for the directionality of the test. Do not halve it, double it, etc. What is always true is that $p$ is some rejectio |
51,967 | A data set with missing values in multiple variables | @Tim gave a nice response. To add to that, the best thinking about dealing with missing values (MVs) began with Donald Rubin and Roderick Little in their book Statistical Analysis with Missing Data, now in its 9th edition. They originated the classifications into MAR, MCAR, etc. To their several books I would add Paul Allison's highly readable Sage book Missing Data, which remains one of the best, most accessible treatments on this topic in the literature.
A number of commonly used, bad heuristics have emerged over the years for dealing with missing data, many of which still see use today since they are easily implemented "solutions." These include ones already mentioned such as discretizing the variable and creating a junk category labelled "Missing" or "NA" (not available, unknown) into which all missing values for that variable are tossed, as well as, for continuous variables, plugging the missing values with a constant -- e.g., the arithmetic mean. Secondarily and for regression models, some recommend using dummy variables (0,1) indicating the presence (absence) of an MV. The dummy is intended to "capture" the overall impact of the MVs on the model while also appropriately adjusting the parameters. These are all bad ideas because, in the first case, a heterogenous mix of information is lumped into a single category while, in the second case, a potentially large burst or spike containing a single value (the mean) is introduced into an otherwise typically smooth distribution for a predictor.
The least biasing of all of the options for imputation are regression models. In an American Statistican paper (for which I no longer have a reference, sorry), dummy variables for MVs in regression have been demonstrated to not only not capture the effects of missing values but also to generate biased parameters. The AmStat paper based these conclusions on a comparison of the scenarios for the various MV options with full information data. The author's recommendation was, assuming the magnitude or volume of missing information wasn't too much or too large, to use the least biasing solution -- full information modeled imputation based on data available after deleting the observations containing MVs. Of course, this response demands an answer to "what is too much?" Here, there are no firm benchmarks, only experiential, subjective heuristics and rules of thumb without any firm theoretical motivation. This means that it's up to the analyst to decide. Just so, @Discipulus' rule of thumb is to work with variables containing 50% or less MVs, certainly a reasonable heuristic. In the OPs case, that would exclude the two variables containing more than 50% MVs, variables that are described as "important" to the analysis. That said, it is safe to assume that 95% MVs qualifies as "too much."
If it's thought that there are not too many MVs, then use some variant of multiple imputation to plug them. Here too, there are many bad methods to choose from including, e.g., "sorted hot deck" multiple imputation where observations are sorted across a string of fully observed variables and the fully observed value that comes closest to the observation with missing information across that sort string is used as a plug. In general, all of these "mechanistic" solutions to plugging MVs are to be rejected in deference to model based multiple imputation.
In an ASA workshop taught by Rubin, several "best" practices were discussed for dealing with multiple variables containing MVs in a dataset. First, rank the variables by their frequency or percent of missing information from high to low and begin the process of imputation, one variable at a time, on those containing the lightest or least amount of MVs. Then, retain and use these newly plugged variables in the model-building process for each subsequent variable. Use every variable available to you in building imputation models, including the target or dependent variable(s) and excluding the lower ranked variables with MVs.
The key metric in building and evaluating model-based imputation is the comparison of the pre-imputed means and std devs (based on the full information after deleting MVs) with the post-imputation or plugged value means and std devs. If the imputation was successful, then little or no (significant) difference should be observed in these marginals. An important note of caution needs to be introduced at this point: this metric and multiple imputation in general is intended to evaluate the preservation of overall or unconditional marginals. This means that the actual values used and assigned to each MV field have a high likelihood of being "wrong" for that observation, if compared with the full but unavailable information. For instance, in a head-to-head comparison of actual vs imputed values based on a sample possessing both self-reported survey information (the actual values) versus the imputations made by a leading vendor of geo-demographic information (the imputed values), imputed fields such as head of household age and income were wrong nearly 80% of the time at the level of the individual observation. Even after partitioning these fields into high and low groups based on median splits, the imputations were still wrong more than 50% of the time. However, the marginals were recovered more or less accurately.
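As a sketch of that check with the mice package in R (using the small nhanes example data that ships with mice, not the questioner's data):
library(mice)

imp <- mice(nhanes, m = 5, printFlag = FALSE)   # model-based multiple imputation
completed <- complete(imp, action = 1)          # first completed data set

# the key check: do the completed data reproduce the observed marginals?
c(mean(nhanes$bmi, na.rm = TRUE), sd(nhanes$bmi, na.rm = TRUE))
c(mean(completed$bmi),            sd(completed$bmi))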
One final note, imputation can be appropriate for features, predictors or independent variables but is not recommended for the target or dependent variable. | A data set with missing values in multiple variables | @Tim gave a nice response. To add to that, the best thinking about dealing with missing values (MVs) began with Donald Rubin and Roderick Little in their book Statistical Analysis with Missing Data, n | A data set with missing values in multiple variables
@Tim gave a nice response. To add to that, the best thinking about dealing with missing values (MVs) began with Donald Rubin and Roderick Little in their book Statistical Analysis with Missing Data, now in its 9th edition. They originated the classifications into MAR, MCAR, etc. To their several books I would add Paul Allison's highly readable Sage book Missing Data, which remains one of the best, most accessible treatments on this topic in the literature.
A number of commonly used, bad heuristics have emerged over the years for dealing with missing data, many of which still see use today since they are easily implemented "solutions." These include ones already mentioned such as discretizing the variable and creating a junk category labelled "Missing" or "NA" (not available, unknown) into which all missing values for that variable are tossed, as well as, for continuous variables, plugging the missing values with a constant -- e.g., the arithmetic mean. Secondarily and for regression models, some recommend using dummy variables (0,1) indicating the presence (absence) of an MV. The dummy is intended to "capture" the overall impact of the MVs on the model while also appropriately adjusting the parameters. These are all bad ideas because, in the first case, a heterogenous mix of information is lumped into a single category while, in the second case, a potentially large burst or spike containing a single value (the mean) is introduced into an otherwise typically smooth distribution for a predictor.
The least biasing of all of the options for imputation are regression models. In an American Statistican paper (for which I no longer have a reference, sorry), dummy variables for MVs in regression have been demonstrated to not only not capture the effects of missing values but also to generate biased parameters. The AmStat paper based these conclusions on a comparison of the scenarios for the various MV options with full information data. The author's recommendation was, assuming the magnitude or volume of missing information wasn't too much or too large, to use the least biasing solution -- full information modeled imputation based on data available after deleting the observations containing MVs. Of course, this response demands an answer to "what is too much?" Here, there are no firm benchmarks, only experiential, subjective heuristics and rules of thumb without any firm theoretical motivation. This means that it's up to the analyst to decide. Just so, @Discipulus' rule of thumb is to work with variables containing 50% or less MVs, certainly a reasonable heuristic. In the OPs case, that would exclude the two variables containing more than 50% MVs, variables that are described as "important" to the analysis. That said, it is safe to assume that 95% MVs qualifies as "too much."
If it's thought that there are not too many MVs, then use some variant of multiple imputation to plug them. Here too, there are many bad methods to choose from including, e.g., "sorted hot deck" multiple imputation where observations are sorted across a string of fully observed variables and the fully observed value that comes closest to the observation with missing information across that sort string is used as a plug. In general, all of these "mechanistic" solutions to plugging MVs are to be rejected in deference to model based multiple imputation.
In an ASA workshop taught by Rubin, several "best" practices were discussed for dealing with multiple variables containing MVs in a dataset. First, rank the variables by their frequency or percent of missing information from high to low and begin the process of imputation, one variable at a time, on those containing the lightest or least amount of MVs. Then, retain and use these newly plugged variables in the model-building process for each subsequent variable. Use every variable available to you in building imputation models, including the target or dependent variable(s) and excluding the lower ranked variables with MVs.
The key metric in building and evaluating model-based imputation is the comparison of the pre-imputed means and std devs (based on the full information after deleting MVs) with the post-imputation or plugged value means and std devs. If the imputation was successful, then little or no (significant) difference should be observed in these marginals. An important note of caution needs to be introduced at this point: this metric and multiple imputation in general is intended to evaluate the preservation of overall or unconditional marginals. This means that the actual values used and assigned to each MV field have a high likelihood of being "wrong" for that observation, if compared with the full but unavailable information. For instance, in a head-to-head comparison of actual vs imputed values based on a sample possessing both self-reported survey information (the actual values) versus the imputations made by a leading vendor of geo-demographic information (the imputed values), imputed fields such as head of household age and income were wrong nearly 80% of the time at the level of the individual observation. Even after partitioning these fields into high and low groups based on median splits, the imputations were still wrong more than 50% of the time. However, the marginals were recovered more or less accurately.
One final note, imputation can be appropriate for features, predictors or independent variables but is not recommended for the target or dependent variable. | A data set with missing values in multiple variables
@Tim gave a nice response. To add to that, the best thinking about dealing with missing values (MVs) began with Donald Rubin and Roderick Little in their book Statistical Analysis with Missing Data, n |
51,968 | A data set with missing values in multiple variables | It all depends on why the data is missing. If the data is Missing Completely At Random, you can discard the incomplete data. If the data is Missing At Random, your best bet is multiple imputation (e.g., check out the mice or mi packages in R, and the various blog posts that describe how to use multiple imputation).
However, with the type of data you have, it is typical that the missing data will not be "randomly" missing in any sense of the word. For example, if data is missing because people did not get specific procedures, or were not given specific tests, or only stayed in hospital for a short time so little data was collected, imputation is a poor way to go, as your missing data is nonignorable. A solution is to treat each of your predictors as being categorical, where one of the categories is called "No data". As to whether this is the best solution, there is no way to say without a whole lot more detail on your data, problem, and the model. | A data set with missing values in multiple variables | It all depends on why the data is missing. If the data is Missing Completely At Random, you can discard the incomplete data. If the data is Missing At Random, your best bet is multiple imputation (e.g | A data set with missing values in multiple variables
It all depends on why the data is missing. If the data is Missing Completely At Random, you can discard the incomplete data. If the data is Missing At Random, your best bet is multiple imputation (e.g., check out the mice or mi packages in R, and the various blog posts that describe how to use multiple imputation).
However, with the type of data you have, it is typical that the missing data will not be "randomly" missing in any sense of the word. For example, if data is missing because people did not get specific procedures, or were not given specific tests, or only stayed in hospital for a short time so little data was collected, imputation is a poor way to go, as your missing data is nonignorable. A solution is to treat each of your predictors as being categorical, where one of the categories is called "No data". As to whether this is the best solution, there is no way to say without a whole lot more detail on your data, problem, and the model. | A data set with missing values in multiple variables
It all depends on why the data is missing. If the data is Missing Completely At Random, you can discard the incomplete data. If the data is Missing At Random, your best bet is multiple imputation (e.g |
51,969 | A data set with missing values in multiple variables | Before you decide on whether you want to impute, you should ask yourself whether the patients not giving you information should be part of your model. For instance, if you want to model for control vs treated, all those missing patients should be dropped (they don't tell you which group they are in). However, if you want to construct just a contingency table (or just exploratory analysis) of all patients in the hospital, you may want to keep the patients.
You will need to have good reasons for imputation because your data is potentially sensitive. For instance, if you impute a patient's value with the average cancer rate, what if that patient doesn't have cancer? How are you going to explain to your boss that you guessed the inputs of a model for predicting cancer?
Any imprecision could be potentially deadly in your data set. Since your missing data is not random, the most common strategy is to recode your variables such that you have a new "NA" category. | A data set with missing values in multiple variables | Before you decide on whether you want to impute, you should ask yourself whether the patients not giving you information should be part of your model. For instance, if you want to model for control vs | A data set with missing values in multiple variables
51,970 | What would cause a residual plot to be entirely above 0? | To summarize the various comments and answers so far:
If the predictions are on data that was not part of the training sample, there could be a systematic difference between the training data and the prediction data. For example, if you are fitting time-series data and the data contains an upward-curving trend, then predicting the future from the past with a linear model will yield under-predictions on average.
If the model always under-predicts on training data (or even just on average), it could be a less commonly used variety of linear model, such as a quantile regression model; or it might not contain an intercept (or terms which can linearly combine to form an intercept).
If the model is standard linear least squares and does contain an intercept or equivalent spanning terms, then Benjamin's post is correct. The phenomenon you observed cannot possibly happen. So there must be an error in the computational code used for the model's training or prediction.
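A quick R illustration of the intercept point, using simulated data only:

```r
set.seed(1)
x <- rnorm(100)
y <- 5 + 2 * x + rnorm(100)
mean(resid(lm(y ~ x)))       # with an intercept: essentially zero
mean(resid(lm(y ~ x - 1)))   # without an intercept: generally not zero
```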
51,971 | What would cause a residual plot to be entirely above 0? | We will assume that the linear regression fit is through least squares, contains an intercept, and the residual plot is that from the training data.
From the normal equations, we see that the residuals of the regression have sample mean 0. Therefore, it's not possible for the residual plot to be entirely above 0. There must be a mistake somewhere in the visualization/computation.
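Spelling out that step (notation added for this illustration): writing the model as $y = X\beta + \varepsilon$ with the first column of $X$ a column of ones, the least-squares normal equations are $$X^\top\big(y - X\hat\beta\big) = 0,$$ and the first of these equations is exactly $\sum_{i=1}^n e_i = 0$ for the residuals $e = y - X\hat\beta$, so their sample mean is 0.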
51,972 | What is the distribution of the ratio between independent Beta and Gamma random variables? | For independent $a, d\sim\operatorname{\Gamma}(M,c)$, a remarkable result is that $U=a+d$ and $V=a/(a+d)$ are also independent. In addition, $U\sim\operatorname{\Gamma}(2M, c)$, $V\sim\mathrm{B}(M,M)$. See for example Ch.25, Sec. 2 of Johnson, Kotz, and Balakrishnan's
Continuous Univariate Distributions, Volume 2 for some details.
Now this says $$Y=\frac{a}{(a+d)^2}=\frac{1}{a+d}\frac{a}{a+d}=\frac{V}{U}$$ is a ratio between independent Gamma and Beta random variables. Nadarajah and Kotz's On the Product and Ratio of Gamma and Beta Random Variables and Nadarajah's The Gamma Beta Ratio Distribution have discussed this distribution thoroughly.
EDIT: Using the CDFs and PDFs given in the articles mentioned above, we can derive (simplified with Wolfram Alpha) the CDF of $Y$ as: $$F_Y(y) = 1-\frac{\Gamma(3M)}{2(cy)^{2M}\Gamma(M+1)\Gamma(4M)}{}_2F_2\left(2M, 3M; 2M+1,4M; -\frac{1}{cy}\right),$$ and the PDF of $Y$ as: $$f_Y(y)=\frac{\Gamma(3M)}{y(cy)^{2M}\Gamma(M)\Gamma(4M)}{}_1F_1\left(3M; 4M; -\frac{1}{cy}\right)$$ where ${}_1F_1$ and ${}_2F_2$ are the generalized hyper-geometric functions. These expressions are in accordance with @wolfies's result.
51,973 | What is the distribution of the ratio between independent Beta and Gamma random variables? | Francis has provided the definitive elegant solution. Following his beautiful method, it seems possible to produce a somewhat simpler form for the pdf.
In particular, following Francis' elegant solution, let $V \sim \text{Beta}(m,m)$ with pdf $f(v)$:
.. and let $U \sim \text{Gamma}(2m, c)$ where $U$ is independent of $V$, and let $Z = \frac1U$, so that $Z \sim \text{InverseGamma}(2m,c)$ with pdf $g(z)$:
Then the desired pdf of $Y = \frac{V}{U} = V Z$, say $h(y)$, can be found with:
where I am using the TransformProduct function from the mathStatica package for Mathematica to automate the nitty gritties.
So does it work?
Here is a quick Monte Carlo check of the:
simulated pdf of $Y = \frac{A}{(A+D)^2}$ (generated from $A$ and $D$) (blue squiggly)
compared to the exact theoretical pdf $h(y)$ (dashed red) derived above:
when parameter $m = 4$ and $c=3.01$. Looks fine.
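The original check used mathStatica/Mathematica; a hedged R version of the same kind of Monte Carlo comparison is sketched below. It assumes $c$ is a rate parameter; the check works the same way with a scale parameter as long as it is used consistently on both sides.

```r
set.seed(1)
m <- 4; c <- 3.01; n <- 1e5
A <- rgamma(n, shape = m, rate = c)
D <- rgamma(n, shape = m, rate = c)
y1 <- A / (A + D)^2                         # the original quantity
V <- rbeta(n, m, m)
U <- rgamma(n, shape = 2 * m, rate = c)
y2 <- V / U                                 # Beta / Gamma construction
qqplot(y1, y2); abline(0, 1, col = "red")   # points should hug the 45-degree line
```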
51,974 | What is Shapley value regression and how does one implement it? | The Shapley Value Regression: Shapley value regression significantly ameliorates the deleterious effects of collinearity on the estimated parameters of a regression equation. The concept of Shapley value was introduced in (cooperative collusive) game theory where agents form collusion and cooperate with each other to raise the value of a game in their favour and later divide it among themselves. Distribution of the value of the game according to Shapley decomposition has been shown to have many desirable properties (Roth, 1988: pp 1-10) including linearity, unanimity, marginalism, etc. Following this theory of sharing of the value of a game, the Shapley value regression decomposes the R2 (read it R square) of a conventional regression (which is considered as the value of the collusive cooperative game) such that the mean expected marginal contribution of every predictor variable (agents in collusion to explain the variation in y, the dependent variable) sums up to R2.
The scheme of Shapley value regression is simple. Suppose z is the dependent variable and x1, x2, ... , xk ∈ X are the predictor variables, which may have strong collinearity. Let Yi ⊂ X be the subset in which xi is not included, i.e. xi ∉ Yi. Thus, Yi will have only k-1 variables. We draw r (r=0, 1, 2, ... , k-1) variables from Yi and let this collection of variables so drawn be called Pr such that Pr ⊆ Yi. Also, Yi = Yi∪∅. Now, Pr can be drawn in L = (k-1)Cr ways (the number of ways of choosing r of the remaining k-1 variables). Also, let Qr = Pr ∪ xi. Regress (least squares) z on Qr to find R2q. Regress (least squares) z on Pr to obtain R2p. The difference between the two R-squares is Dr = R2q - R2p, which is the marginal contribution of xi to z. This is done for all L combinations for a given r, and the arithmetic mean of Dr over those L values is computed. Once it is obtained for each r, its arithmetic mean over r is computed. Note that Pr is null for r=0, and thus Qr contains a single variable, namely xi. Further, when Pr is null, its R2 is zero. The result is the arithmetic average of the mean (or expected) marginal contributions of xi to z. This is done for all xi; i=1, ..., k to obtain the Shapley value (Si) of xi; i=1, ..., k. In the regression model z=Xb+u, OLS gives a value of R2. The sum of all Si; i=1, 2, ..., k is equal to R2. Thus, the OLS R2 has been decomposed. Once all Shapley value shares are known, one may retrieve the coefficients (with original scale and origin) by solving an optimization problem suggested by Lipovetsky (2006) using any appropriate optimization method. A simple algorithm and computer program is available in Mishra (2016).
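As a rough illustration of this scheme (not Lipovetsky's or Mishra's code, and only practical for a small number of predictors, since the number of subsets grows exponentially), a brute-force R sketch of the R2 decomposition could look like this:

```r
shapley_r2 <- function(y, X) {              # X: data frame of predictors
  vars <- colnames(X); k <- length(vars)
  r2 <- function(v) if (length(v) == 0) 0 else
    summary(lm(y ~ ., data = X[, v, drop = FALSE]))$r.squared
  sapply(vars, function(xi) {
    others <- setdiff(vars, xi)
    mean(sapply(0:(k - 1), function(r) {    # average over subset sizes r
      subsets <- if (r == 0) list(character(0)) else combn(others, r, simplify = FALSE)
      mean(sapply(subsets, function(P) r2(c(P, xi)) - r2(P)))  # marginal contributions
    }))
  })
}
# The shares returned by shapley_r2(y, X) sum to the full-model R-squared.
```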
51,975 | What is Shapley value regression and how does one implement it? | There are two good papers to tell you a lot about the Shapley Value Regression:
Lipovetsky, S. (2006). Entropy criterion in logistic regression and Shapley value of predictors. Journal of Modern Applied Statistical Methods, 5(1), 95-106.
Mishra, S.K. (2016). Shapley Value Regression and the Resolution of Multicollinearity. Journal of Economics Bibliography, 3(3), 498-515.
I also wrote a computer program (in Fortran 77) for Shapley value regression. It is available here.
51,976 | What is Shapley value regression and how does one implement it? | In statistics, "Shapley value regression" is known as "averaging of the sequential sum-of-squares." Ulrike Grömping is the author of an R package called relaimpo; in this package she named the method, which is based on this work, lmg. It computes relative importance by averaging sequential sums of squares over all orderings of the predictors, so that, unlike the common sequential methods, it does not depend on the predictors having a relevant, known ordering.
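A minimal usage sketch (the model formula and data names are hypothetical):

```r
library(relaimpo)
fit <- lm(z ~ x1 + x2 + x3, data = dat)
calc.relimp(fit, type = "lmg", rela = TRUE)   # lmg shares, rescaled to sum to 1
```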
51,977 | Discrete probability distribution with two 'tails' | There's an infinite number of such distributions -- one need merely a way to get a probability for each value of $i$ and one has just such a distribution. It's a simple matter to spend a leisurely weekend afternoon or an evening inventing a hundred of the things.
However, few are explicitly named in the literature.
One example that does come up now and then is the Skellam distribution, which is the distribution of the difference between two independent Poisson variates. If the two Poisson parameters are equal, it is symmetric.
One could as easily consider taking a difference of other distributions, such as geometric or negative binomial distributions, for example.
One might consider mixtures (of weights $w$ and $1-w$) of some distribution on the non-negative integers and the negative of some other distribution on the non-negative integers.
e.g. one might take a 0.8 probability of a Poisson with mean 2 and 0.2 probability of the negative of a geometric with mean 2:
or indeed any other method that is of interest/relevance to some problem.
Edit: looking at the information in your edits, I'd suggest considering a three part mixture of a probability at 0, some distribution on the positive integers and some distribution on the negative integers. This could be geometric, negative binomial, Poisson etc. (zero-truncated as needed).
For example, if you took the positive and negative halves as geometric, you'd have 4 parameters in the distribution (the two geometric p's, and the two mixture probabilities on those geometric parts; the probability at zero being defined by the other two, since they must add to 1).
Here's an example:
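Such a three-part mixture can also be simulated directly; the following R sketch uses made-up weights and geometric parameters purely for illustration.

```r
set.seed(1)
n <- 1e5
w <- c(zero = 0.2, pos = 0.5, neg = 0.3)             # mixture weights (illustrative)
part <- sample(names(w), n, replace = TRUE, prob = w)
x <- integer(n)
x[part == "pos"] <-   rgeom(sum(part == "pos"), prob = 0.4) + 1   # zero-truncated geometric
x[part == "neg"] <- -(rgeom(sum(part == "neg"), prob = 0.6) + 1)
plot(table(x) / n, type = "h", ylab = "probability")
```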
51,978 | Discrete probability distribution with two 'tails' | $$X \sim Y\times \mbox{Poisson}(\lambda)$$
where $Y$ follows a Rademacher distribution (i.e. $Y$ is $1$ with probability $\frac{1}{2}$ and $-1$ with probability $\frac{1}{2}$).
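For instance, in R this can be simulated as follows ($\lambda$ chosen arbitrarily):

```r
n <- 1e5; lambda <- 2
x <- sample(c(-1L, 1L), n, replace = TRUE) * rpois(n, lambda)
plot(table(x) / n, type = "h")
```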
51,979 | If $f(x)$ is a unimodal probability density function, how can I show that its mode is at $f'(x)=0$? | If it's only continuous but not differentiable at the mode, you can't. Consider the Laplace distribution.
51,980 | If $f(x)$ is a unimodal probability density function, how can I show that its mode is at $f'(x)=0$? | Find the maximum value. The Laplace distribution is defined piece-wise, and not smooth at the mode, which is at the piece-wise common point (A.K.A., corner point), and such a common, junctional corner point in a piece-wise defined function has no general continuity guarantee of either the separate piecewise-defined functions' amplitudes or any of their derivatives. The mode for a Laplace distribution occurs at $\frac{e^{-\frac{x-\mu }{\beta }}}{2 \beta }=\frac{e^{-\frac{\mu -x}{\beta }}}{2 \beta }\to x=\mu$, because $f(x=\mu)=\frac{1}{2 \beta }$ is the maximum value of $f$. Of course there are continuous functions that are differentiable nowhere, and differentiability is not a requirement. Thus, the general answer is simply to find the maximum of the pdf. This can be done in closed form as above for something as simple as the Laplace distribution. It can be done numerically for nice smooth functions using derivatives, for example by referring to How to prevent newton's method from finding maxima? and altering it to avoid minima. Or, more generally for nasty functions, using a canned routine like FindMaximum in Mathematica with a global search routine like RandomSearch.
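The Mathematica routines mentioned above are not shown here; an analogous numerical check in R, for illustrative parameter values, is simply:

```r
mu <- 1; beta <- 2
dlaplace <- function(x) exp(-abs(x - mu) / beta) / (2 * beta)
optimize(dlaplace, interval = c(-20, 20), maximum = TRUE)$maximum   # returns ~ mu
```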
@Glen_b and @Dougal, thanks for the help, anything further @me.
51,981 | auto.arima: why forecast converges to mean after some periods? | In your ARIMA specification, the middle number in both the first and the second bracket is zero. That means, there is no [simple] differencing and no seasonal differencing. Thus your series appears to be mean-stationary (rather than integrated or seasonally integrated). (As far as I know, the AR and MA parameters yielded by the auto.arima function are restricted to be in the region of stationarity; meanwhile, nonstationarity can be introduced by [simple] or seasonal differencing.) Given a mean-stationary process, you should actually expect the point forecasts to converge to the mean of the process. That is one of the intrinsic features of a mean-stationary time series. And that is also why these kind of processes are called "mean-reverting".
Regarding weekends, you have only supplied the frequency as frequency=24, and auto.arima will not guess whether there is weekly, monthly or other kind of seasonality besides what you have specified. You could indeed create some dummies and supply them using xreg to account for the day-of-the-week pattern.
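A sketch of that approach with the forecast package; the object names are placeholders, and you would build the dummy matrices from your own date index:

```r
library(forecast)
# dow, dow_future: factors giving the day of week for the observed and forecast hours
X     <- model.matrix(~ dow)[, -1]          # day-of-week dummies for the history
X_new <- model.matrix(~ dow_future)[, -1]   # dummies for the forecast horizon
fit <- auto.arima(y, xreg = X)
fc  <- forecast(fit, xreg = X_new)
```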
51,982 | auto.arima: why forecast converges to mean after some periods? | As a complement to the first part of the answer given by @RichardHardy, it could be checked analytically that the forecasts converge to a constant as the forecasting horizon goes to infinity.
For simplicity, let's take an ARMA(1,1) model defined as follows:
$$
y_t = \delta + \phi y_{t-1} + \epsilon_t + \theta \epsilon_{t-1} \,, \quad
\epsilon_t \sim NID(0, \sigma^2) \,, \quad t=1,2,\dots,T \,.
$$
The expected value of the ARMA(1,1) process, denoted by $\mu$, is given by:
\begin{eqnarray}
\begin{array}{rcl}
E(y_t) &=& \delta + \phi E(y_{t-1}) + E(\epsilon_t) + \theta E(\epsilon_{t-1}) \,, \\
\mu &=& \delta + \phi \mu + 0 + 0 \,, \\
\mu &=& \frac{\delta}{1 - \phi} \,.
\end{array}
\end{eqnarray}
We used the fact that, since the fitted model is restricted to be a stationary process, the mean of the process is the same at different time periods and, hence, $E(y_t) = E(y_{t-1}) = \mu$; $E(\epsilon_t) = E(\epsilon_{t-1}) = 0$ by definition.
The forecast function of the ARMA(1,1) process is given by:
\begin{eqnarray}
\begin{array}{rcl}
E(y_{T+1}) &=& \delta + \phi y_{T} + \theta \epsilon_{T} \,, \\
E(y_{T+2}) &=& \delta + \phi E(y_{T+1}) =
(1+\phi)\delta + \phi^2 y_T + \phi\theta\epsilon_T \,, \\
\dots \\
E(y_{T+k}) &=& (1 + \phi + \phi^2 + \cdots + \phi^{k-1})\delta + \phi^k y_{T} + \phi^{k-1}\theta\epsilon_{T} \,, \\
\end{array}
\end{eqnarray}
taking $k\to\infty$:
$$
\lim_{k\to\infty} E_T (y_{T+k}) = \frac{\delta}{1 - \phi} \,,
$$
which is equal to the expected value of the process that we obtained before.
Notice that, since the process is stationary, $|\phi| < 1$ and hence the term $\phi^k$ vanishes as $k$ grows; this condition also implies that the geometric sum converges to $1/(1-\phi)$.
The same idea could be applied to the ARIMA(2,0,1)(2,0,0) model that you show, but the operations would be much more cumbersome. The model would be:
$$
(1 - \phi_1L - \phi_2 L^2)(1 - \Phi_1L^{24} - \Phi_2L^{48})y_t = \delta + \epsilon_t + \theta\epsilon_{t-1} \,,
$$
where $L$ is the lag operator such that $L^iy_t = y_{t-i}$. Before applying the idea shown above, the product of polynomials should be first expanded as it is done here. The resulting expression is a bit cumbersome, so just for illustration purposes we can stick to the example of the stationary ARMA(1,1) process.
It seems that you are pursuing 30 days ahead forecasts. This is probably beyond the abilities of these models. They are intended for short-term forecasting (partly due to the issues discussed here). See for example the discussion in this post.
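A small simulation along these lines, with purely illustrative parameter values, shows the same mean reversion numerically:

```r
library(forecast)
set.seed(1)
y   <- arima.sim(list(ar = 0.7, ma = 0.3), n = 500) + 10   # stationary series with mean 10
fit <- Arima(y, order = c(1, 0, 1))                        # includes a mean term
fc  <- forecast(fit, h = 200)
tail(fc$mean)   # long-horizon point forecasts settle at the estimated mean, near 10
```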
51,983 | Interpreting regression with transformed variables | I agree strongly with the suggestion of COOLSerdash and Srikanth Guhan that Poisson regression is much more natural for your problem. So the idea of transforming the response is a much weaker solution than using an appropriate count model. Poisson regression is well documented in many good texts and in this forum. In effect you get treatment on a logarithmic scale but with smart ideas ensuring that zeros in your data don't bite. (If Poisson regression turns out to be an over-simplification, there are models beyond.)
The rest of this answer focuses on the transformation you have used and whether it is a good idea for temperature change as a predictor that can be negative, zero or positive. This is a more unusual question by far.
The transformation used here has a name, asinh or inverse hyperbolic sine. Its graph is nicer than its algebraic definition in the question using other functions more likely to be met in elementary mathematics:
The function is defined for all finite values of an argument, $x$ say. I've plotted it arbitrarily over the range from $x = -10$ to $10$, except that that range has your example in mind to the extent possible; we'll get to that in a moment.
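The graph is easy to reproduce, e.g. in R, where asinh is built in:

```r
curve(asinh, from = -10, to = 10)
abline(0, 1, lty = 2)                       # line of equality near the origin
c(asinh(0.1), log(0.1 + sqrt(0.1^2 + 1)))   # same value; nearly 0.1 itself
```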
The virtues of this transformation include treating positive and negative values symmetrically. The result of the transformation varies smoothly as the argument passes through zero. It will certainly pull in outliers that are large positive or large negative. It is likely to reduce skewness in many cases.
Another virtue of the transformation is that it is close to the identity transformation for values near zero. The graph shows a line of equality near the origin to underline this point. (All puns should be considered intentional.)
Against all that I would want to underline reservations about using this transformation.
First, but not least, even if people have met this transformation in a previous existence, I doubt that many people retain a good feeling for what it is and how it works. To be personal about this, I meet it in statistical practice about once every three years and have to draw the graph and think about it every time. Unless you are working in an area where it is a known trick, most readers are going to say "What's that?" in some way or another. That doesn't rule it out as a solution, but it's a practical consideration when imagining putting this in a paper or thesis.
Second, and more important, I doubt there is any biological (physical, economic, whatever) rationale to this. It's unphysical and unphysiological, I would guess wildly, to imagine that temperature change works in this way in affecting organisms. We do some things in statistics for purely statistical reasons, naturally, but it is a bonus when that makes sense scientifically, and not otherwise.
Third, something you should certainly do is plot asinh(temperature change) versus temperature change for your data and ask how much difference it really makes. As pointed out, the transformation is close to identity for small $x$: the important part of that is being close to linear for small $x$. The scatter plot equivalent of my graph for your data points may indicate that you are better off leaving the data as they are. How does the skewness arise? Is it a matter of a few outliers? I will now reveal that my limits from $-10$ to $10$ arise from a(nother wild) guess that most of your temperature changes will be small, a few $^\circ$C. (What is temperature change anyway? Day to day? Really big temperature changes might just kill the hens, or put them off egg laying altogether; perhaps that is part of your problem.) So: one possibility is that the transformation makes less difference than you imagine, in which case not doing it would simplify your analysis. Another possibility is that there is a real problem here which might need to be dealt with in other ways. We would need to hear more about your data to advise better.
Fourth, and perhaps most important, is that the marginal distribution of any predictor is not itself important for regression models, contrary to many myths recycled repeatedly.
Fifth, plotting residuals versus predictor is a (partial) way to see whether the version of the predictor used actually works well in your model. Pattern on this plot may indicate that you got it wrong. (By "version of", I mean temperature change, or its asinh, or some other transformation.)
I don't see much need for or merit in standardisation of predictors here.
In seeking better advice, posting your data as well makes it much easier to advise.
EDIT 3 Sept 2021 "To be personal about this" Since this was written I have seen asinh in action more and played with the definition and its derivative in relation to data. Hence I know better how and when it works, or helps. That's trivial, but also universal: If something is unfamiliar, you may have to work at it before it becomes familiar.
51,984 | Derivative of softmax and squared error | Yes, your formula is correct. The formula in the draft chapter was for the sigmoid not for the softmax. We will fix it. Thanks for pointing it out.
-- Yoshua Bengio
51,985 | Derivative of softmax and squared error | The formula you quote from the book
$$ \frac{d p_i}{d a_j} = \sum_i p_i (1 - p_i) $$
cannot be correct because it has no dependence on $j$. Also, the relationship
$$\sum_k p_k = 1$$
implies that
$$ \sum_k \frac{d p_k}{d a_j} = 0 $$
and this doesn't hold for the book's proposed formula.
I think your formula is correct. Here's my derivation:
$$\begin{align}
\frac{d p_i}{d a_j} &= \frac{\delta^i_j e^{a_i} \sum_k e^{a_k} - e^{a_i}e^{a_j}}{ \left( \sum_k e^{a_k} \right)^2 } \\
&= \frac{e^{a_i}}{\sum_k e^{a_k}} \frac{\delta^i_j \sum_k e^{a_k} - e^{a_j}}{\sum_k e^{a_k}} \\
&= p_i (\delta^i_j - p_j)
\end{align}$$
My $i$ and $j$ are reversed from yours, but it's easy to see that it's the same.
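A quick finite-difference check of this Jacobian, added here as an illustration at an arbitrary test point:

```r
softmax <- function(a) { e <- exp(a - max(a)); e / sum(e) }
a <- c(0.5, -1, 2); p <- softmax(a); h <- 1e-6
J_formula <- diag(p) - p %*% t(p)       # entry (i, j) is p_i (delta_ij - p_j)
J_numeric <- sapply(seq_along(a), function(j) {
  ah <- a; ah[j] <- ah[j] + h
  (softmax(ah) - p) / h
})
max(abs(J_formula - J_numeric))         # agreement to roughly 1e-6
```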
51,986 | Do the residual plot and QQ plot look normal? | Pretty obviously not normal. A step function is not a straight line. However, you also seem to be checking (unconditional) normality of the response, which is not assumed to be normal in a mixed model (you'd have some mixture of normals, depending on the fixed effects)
You clearly have discrete data. So your response's conditional distribution will be discrete, not normal.
This is what a Q-Q plot of normal residuals with sample size close to yours tends to look like:
The non-normality you will have in your conditional response is not automatically a big problem - it may or may not be. You don't seem to have strong skewness, for example, nor heavy tails and your sample size is largish.
Please describe your response in more detail. What is this "rating scale data"? (It sounds like it might be an ordinal scale.)
One thing you can do is use simulation to investigate the effect of this discrete scale on the inferences you wish to perform compared to having an actually normal error term.
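A toy version of that simulation check, with purely illustrative numbers:

```r
set.seed(1)
latent <- rnorm(300)
rating <- cut(latent, c(-Inf, -1, -0.3, 0.3, 1, Inf), labels = FALSE)   # 1..5 scale
group  <- gl(2, 150)
fit <- lm(rating ~ group)
qqnorm(resid(fit)); qqline(resid(fit))   # the discreteness shows up as steps
```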
51,987 | Do the residual plot and QQ plot look normal? | Likert data simply cannot be normal. Although in some cases it is safe enough to treat it as normal, it isn't actually ever normal and treating it as such is potentially dangerous.
In addition to the points @Glen_b has made, your residual plot doesn't look good. The residuals should be symmetrical (vertically) around the 0 line. Either there is something seriously wrong with your model or you have strong skew in opposite directions at the two ends (this seems more likely). This means that your model will be attenuated through much of its range (as you get closer to the ends, the predicted values will be closer to the grand mean than they should be), but then will overshoot the possible values if you were to move far enough out from the mean of X. So in addition to your interval estimates being distorted, the model isn't even picking out your means correctly.
You would do best to use a mixed ordinal logistic regression.
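For example, with the ordinal package (variable names are hypothetical, and the response must be an ordered factor):

```r
library(ordinal)
dat$rating <- factor(dat$rating, ordered = TRUE)
fit <- clmm(rating ~ condition + (1 | subject), data = dat)
summary(fit)
```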
51,988 | What are distribution assumptions in Ridge and Lasso regression models? | Your question is not entirely clear. This answer assumes that by "distribution of the features" you mean the conditional distributions of the response across explanatory variables.
Under the General Linear Model regression estimates are obtained by minimising the Residual Sum of Squares
$RSS = \sum\limits_{i=1}^{n} \Big(y_i - \beta_0 - \sum\limits_{j=1}^{p} \beta_j x_{ij} \Big)^2$.
In contrast, Ridge Regression aims to minimise
$RSS + \lambda \; \sum\limits_{j=1}^p \beta_j^2$,
and Lasso Regression aims to minimise
$RSS + \lambda \; \sum\limits_{j=1}^p |\beta_j|$,
where $\lambda$ is a tuning parameter.
So, Ridge Regression and Lasso Regression are special cases of the General Linear Model. They add penalty terms but otherwise all of the same conditions apply, including conditionally independent Gaussian residuals with zero mean and constant variance across the range of the explanatory variable(s).
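In practice both penalties are available from the same fitting routine, e.g. with glmnet (here x is a numeric predictor matrix and y the response):

```r
library(glmnet)
ridge <- cv.glmnet(x, y, alpha = 0)   # ridge: squared-coefficient penalty
lasso <- cv.glmnet(x, y, alpha = 1)   # lasso: absolute-coefficient penalty
coef(lasso, s = "lambda.min")
```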
51,989 | What are distribution assumptions in Ridge and Lasso regression models? | From a Bayesian standpoint, the assumptions are simply in the priors on the coefficients. Ridge regression is equivalent to using a Gaussian prior, whereas LASSO is equivalent to using a Laplace prior. As @whuber said, these models don't make assumptions on the distribution of the explanatory variables.
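To sketch why that equivalence holds (a standard result added here for completeness, not taken from the answer): with Gaussian errors, the negative log-posterior for the coefficients is, up to additive constants,
$$-\log p(\beta \mid y, X) = \frac{1}{2\sigma^2}\sum_{i=1}^{n}\Big(y_i - \beta_0 - \sum_{j=1}^{p}\beta_j x_{ij}\Big)^2 - \log p(\beta).$$
A Gaussian prior $\beta_j \sim N(0,\tau^2)$ contributes $-\log p(\beta) = \sum_j \beta_j^2/(2\tau^2) + \text{const}$, so the posterior mode solves the ridge problem with $\lambda = \sigma^2/\tau^2$; a Laplace prior with scale $b$ contributes $\sum_j |\beta_j|/b + \text{const}$, so the mode solves the lasso problem with $\lambda = 2\sigma^2/b$.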
51,990 | Does high number of values outside of 95% Confidence Interval imply non-normality? | It doesn't mean that there isn't a problem, but you are comparing apples with oranges. The confidence interval is for the mean -- not the population. With a huge amount of data, the confidence interval for the mean will be very narrow because you can estimate the mean very accurately -- but almost all the data values will be outside that confidence interval.
Put another way, the confidence limits are about $\pm 2\sigma/\sqrt{n}$, while the 95% limits for a normal population are about $\pm 2\sigma$, without dividing by $\sqrt{n}$.
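A quick simulation (added for illustration; the numbers are arbitrary) makes the same point: even with perfectly normal data, only a tiny fraction of observations fall inside the 95% confidence interval for the mean once $n$ is large.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(loc=0.0, scale=1.0, size=10_000)   # perfectly normal data

m, s, n = x.mean(), x.std(ddof=1), x.size
half_width = 1.96 * s / np.sqrt(n)                # half-width of the 95% CI for the mean
lo, hi = m - half_width, m + half_width

inside = np.mean((x > lo) & (x < hi))
print(f"CI for the mean: ({lo:.3f}, {hi:.3f}); fraction of data inside: {inside:.3f}")
# With n = 10,000 the CI half-width is about 0.02, so only roughly 1-2% of the
# (perfectly normal) data lies inside it; this says nothing about non-normality.
```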
51,991 | Does high number of values outside of 95% Confidence Interval imply non-normality? | Here is a picture that shows what @rvl said. With reasonably large sample sizes, a tiny fraction of the values are within the 95% confidence interval of the mean. Source.
51,992 | Any other non-parametric alternative to Kruskal-Wallis? | The generalization of the Kruskal-Wallis test is the proportional odds ordinal logistic model. Such a model can provide the multiple degree of freedom overall test as you get with K-W but also can provide general contrasts (on the log odds ratio scale) including pairwise comparisons.
51,993 | Any other non-parametric alternative to Kruskal-Wallis? | You can still run the Kruskal-Wallis; all you need to do is run subsequent pair-wise tests comparing each group to the other groups.
After running a Kruskal-Wallis test and determining that there is a significant difference, you could run additional post hoc tests, for example a Dunn's test, to compare each individual group and determine which are significantly different from each other.
For a reference to the rationale for a Dunn's test vs. Wilcoxon Rank-sum:
Post-hoc tests after Kruskal-Wallis: Dunn's test or Bonferroni corrected Mann-Whitney tests?
For the original paper describing the test:
Dunn OJ. Multiple comparisons using rank sums. Technometrics 1964; 6(3):241-52.
http://dx.doi.org/10.1080/00401706.1964.10490181
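For illustration only (my sketch, not part of the original answer), the two-step procedure might look like this in Python with SciPy; the groups are made up, and Bonferroni-corrected pairwise Mann-Whitney tests stand in for the post hoc step, whereas the answer and the linked thread recommend Dunn's test (for which a dedicated implementation, e.g. the scikit-posthocs package, would normally be used):

```python
from itertools import combinations
from scipy import stats

# Three made-up groups, purely for illustration
groups = {
    "g1": [2.9, 3.0, 2.5, 2.6, 3.2],
    "g2": [3.8, 2.7, 4.0, 2.4, 3.9],
    "g3": [2.8, 3.4, 3.7, 2.2, 2.0],
}

h, p = stats.kruskal(*groups.values())
print(f"Kruskal-Wallis: H = {h:.2f}, p = {p:.3f}")

# Pairwise post hoc comparisons with a Bonferroni correction.
# (Dunn's test, which uses the pooled ranks, is the recommended post hoc here.)
pairs = list(combinations(groups, 2))
for a, b in pairs:
    _, p_pair = stats.mannwhitneyu(groups[a], groups[b], alternative="two-sided")
    print(f"{a} vs {b}: adjusted p = {min(1.0, p_pair * len(pairs)):.3f}")
```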
51,994 | Revert minmax normalization to original value | Using the same formula as you used to standardize from 0 to 1, now use true min and max to standardize to the true range, most commonly: Xi = (Xi - Xmin)/(Xmax-Xmin)
51,995 | Revert minmax normalization to original value | Because your output is in [0, 1], I guess you used an output function intended for classification, such as a sigmoid. However, your loss function is not a classification loss. Thus, I suggest either using a classification loss (such as sigmoid cross-entropy) with [0, 1] output values, or using a regression loss (such as squared error) with a linear output function. Then you don't have to scale the training output values or the network output values.
Nevertheless, you can convert from [0, 1] back to [a, b] using the following equation:
output_ab = output_01 * (b - a) + a,
or generally from [x, y] to [a, b] using the following equation:
value_ab = ((value_xy - x) / (y - x)) * (b - a) + a.
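A small sketch of the two directions described in these answers (the function names are mine, not from the answers):

```python
import numpy as np

def minmax_scale(x, x_min, x_max):
    """Map values from [x_min, x_max] onto [0, 1]."""
    return (x - x_min) / (x_max - x_min)

def minmax_invert(z, a, b):
    """Map values from [0, 1] back to the original range [a, b]."""
    return z * (b - a) + a

x = np.array([10.0, 12.5, 20.0])
z = minmax_scale(x, x.min(), x.max())        # normalized to [0, 1]
x_back = minmax_invert(z, x.min(), x.max())  # round trip recovers the original values
print(z, x_back)
```

Composing the two functions gives the general $[x, y] \to [a, b]$ mapping in the last equation above.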
51,996 | If P(A)=0, is A a null event? | First of all, note that the term ''null event'' is ambiguous: some sources use it in the sense of ''an event that has zero probability'', while others understand it as ''the empty set (as an event)''. As the first interpretation makes the question a tautology (of course, if the definition of a null event is that its probability is zero, then a null event has zero probability and an event of zero probability is a null event), I'll concentrate on the second interpretation.
In the usual measure-theoretic formulation of probability, an ''event'' is a set of outcomes; an event is realized if the outcome of the experiment is within the set. The impossible event is the empty set $\emptyset$, i.e. under no outcome of the experiment can this event be realized.
The answer to your question is no. Let $X$ be a random variable with a uniform distribution on $\left[0,1\right]$ and $A$ be the event $X=0.5$ (or any other real number in $\left[0,1\right]$). This is obviously not a null event (such a random variate can take the value $0.5$), but it has probability zero (as the distribution is continuous).
Another example might be having an infinite number of heads when flipping a fair coin. (''Infinite number'' might be formalized, but I don't want to make the discussion too technical; consider it intuitively.) This can happen (that is, the event pertaining to it is not an empty set), yet its probability is zero.
See also this discussion.
51,997 | If P(A)=0, is A a null event? | A probability space is a triple $(\Omega,\Sigma,P)$ where $\Omega$ is the set of outcomes, $\Sigma$ is a sigma algebra on that set (i.e. a set of subsets of $\Omega$ with particular properties) and $P$ is a probability measure on $\Sigma$.
One of the properties of $\Sigma$ is that it contains the empty set $\emptyset$, and the measure of this set must be zero. So the empty set has measure zero $P(\emptyset)=0$.
However, there exist non-empty sets that also have zero measure. E.g., for a normal random variable the measure of a singleton $\{a\}$ (which is obviously not empty) is $\int_a^a \varphi(x)\,dx=0$.
Consequently, the measure of a union of singletons, being the sum of the measures of each singleton in the union, is also zero.
So the null event is the empty set that is an element of $\Sigma$, and $P(\emptyset)=0$; but $\Sigma$ can contain non-empty sets of probability zero.
This gives rise to concepts like ''almost everywhere''; a property holds almost everywhere if it holds everywhere except on a set that has probability zero.
51,998 | If P(A)=0, is A a null event? | The idea of a Null event is used to emulate the idea of a failed experiment.
Let's consider the simplistic analogy of flipping a coin. You have four possible outcomes.
First you have the probability that you did, in fact, flip a coin. This has a probability of 1, technically speaking.
The second is the Null event (usually denoted with a probability of 0). This null event means the experiment failed. Because you know you flipped a coin with a probability of 1 the only real null events for this experiment involve not being able to read the coin. Maybe you flipped it and it rolled into a crack in the floor, and you were unable to get conclusive results. The null event is not a number used in the math, but models the finite improbability that you cannot complete the experiment. You can think of it as 0+ (if you are familiar with limits/asymptotic expressions) because there aren't a lot of absolutes in statistical analysis.
Third and fourth are the probability that you flipped heads or tails. Both of these are considered to be .5 or 1/2 because there are only two real options.
The null event and the first are used to tell whether the experiment worked or not. Something with a probability of 0 is not necessarily a null event. Take for example the probability that you will draw a red marble from a bag of blue marbles. This has a probability of 0, but is not a null event because you did in fact draw a marble.
Hopefully that clarifies the Null event for you.
51,999 | How to calculate the scale parameter of a Cauchy random variable | Note that the mean of the Cauchy distribution doesn't exist (so we can't assume it to be 0). However, I assume you mean the center of symmetry of the Cauchy (which is both the mode and the median and various other measures of location).
Let's call the center $\mu$ and the scale parameter $\sigma\,$:
$$f(x; \mu,\sigma) = \frac{1}{\pi\sigma \left[1 + \left(\frac{x - \mu}{\sigma}\right)^2\right]} ,\quad \sigma>0,\;\;x,\mu\in \mathbb{R}$$
Quick estimate: A reasonable quick estimate for $\sigma$ can be obtained from half the interquartile range. This ignores that $\mu$ is known, of course. The median absolute value would be a corresponding quantity for $\mu=0$. If memory serves, I think the asymptotic relative efficiency (ARE) is about 80% for that median absolute value, but don't quote me on that.
Maximum likelihood: Let $X$ be $\sim\text{Cauchy}(\mu_0,\sigma)$ for known $\mu_0$.
The MLE for $\sigma$ is given by solving the following for $\hat\sigma\,$:
$$\sum_i \frac{\hat{\sigma}^2}{(x_i-\mu_0)^2+\hat{\sigma}^2}=\frac{n}{2}\,.$$
A solution exists for any $n>2$ and is unique (e.g. see Copas, 1975$^{[1]}$). This performs well, but must be iterated.
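As an illustration of that iteration (my sketch, not from the answer; the data are simulated with a known scale of 2), the estimating equation can be handed to a scalar root-finder:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import cauchy

mu0 = 0.0                                             # known centre
x = cauchy.rvs(loc=mu0, scale=2.0, size=500, random_state=2)

def score(sigma, x, mu0):
    # Left-hand side minus right-hand side of the estimating equation above
    return np.sum(sigma**2 / ((x - mu0)**2 + sigma**2)) - len(x) / 2

sigma_hat = brentq(score, 1e-8, 1e8, args=(x, mu0))   # the function changes sign on this bracket
print(sigma_hat)                                      # should be close to the true scale of 2
```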
Efficient one-step estimation: This$^{[2]}$ recent paper gives an efficient (ARE ~98%) and simple estimate based on the Hodges-Lehmann estimator, as well as some useful details on the ML estimator. In particular, section 3 gives details for the known-location case:
When the known location is zero, they show that half of the median of $n(n+1)/2$ logarithms of the absolute values of pairwise products of the Cauchy observations is ML for $\log(\sigma)$.
That is, $\log(\hat{\sigma}_\text{HLE}) =\frac{1}{2}\,\text{med}(\ln|X_iX_j|),\ 1\le i \le j\le n$.
This is unbiased for $\log(\sigma)$ and asymptotically normally distributed. As a result, exponentiating that is a suitable (though not unbiased) estimator for $\sigma$. Further details are in the paper, including the variance of the asymptotic normal distribution for the estimator of $\log(\sigma)$.
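A sketch of that pairwise-product estimator (again mine, and assuming the known centre is zero so the observations can be used directly):

```python
import numpy as np
from scipy.stats import cauchy

x = cauchy.rvs(loc=0.0, scale=2.0, size=300, random_state=3)

# All pairwise products X_i * X_j with i <= j  (n(n+1)/2 of them)
prods = np.outer(x, x)[np.triu_indices(len(x))]

log_sigma_hat = 0.5 * np.median(np.log(np.abs(prods)))
print(np.exp(log_sigma_hat))   # point estimate of the scale; true value is 2
```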
Rousseeuw and Croux (1993)$^{[3]}$ also give a robust scale estimator, $Q_n$ (for the unknown location case) which is also highly efficient at the Cauchy. It's based on scaling the first quartile of the pairwise absolute distances between observations. (It's available in the package robustbase in R, but for the Cauchy you need a different scale factor than the default scaling constant.)
References
$[1]$ Copas, J.B., (1975),
"On the unimodality of the likelihood for the Cauchy distribution,"
Biometrika, 62(3):701-04.
$[2]$ Kravchuk, O.Y. and Pollett, P.K. (2012),
"Hodges-Lehmann scale estimator for Cauchy distribution."
Communications in Statistics-Theory and Methods, Vol 41(20):3621-3632.
(See also this version of the paper given at the web page of the second author, here)
$[3]$ Rousseeuw, P. and Croux, C. (1993),
"Alternatives to the median absolute deviation,"
Journal of the American Statistical Association, 88(424):1273-1283.
52,000 | Nonlinear Statistics? | I will list some of the models/methods so that you can more easily locate information on them (whether here, on Wikipedia and other such sources, via google, in the titles of books and articles and so on) on your own. This is not a complete list but covers a number of the more common approaches. I'll also mention some references and a few online resources at the end.
simple methods for fitting curved relationships within a regression framework - e.g. polynomial regression, though trigonometric and other models may be used.
Polynomial regression example (from here - pink is the fitted values at each $x$):
Trigonometric regression (link above):
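For instance (an illustrative sketch, not from the original answer; the data are invented), a cubic polynomial fit is still linear in the coefficients and can be obtained by ordinary least squares:

```python
import numpy as np

rng = np.random.default_rng(4)
x = np.linspace(0, 10, 80)
y = 2 + 0.5 * x - 0.3 * x**2 + 0.02 * x**3 + rng.normal(scale=1.0, size=x.size)

coefs = np.polyfit(x, y, deg=3)   # least-squares cubic fit
y_hat = np.polyval(coefs, x)      # fitted values at each x
print(coefs)
```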
transformation of $\mathbf{x}$ and $y$ variables may be used; one common example is the use of Box-Cox family of power transformations. Where $\mathbf{x}$ is transformed alone, this is essentially a form of the previous case. When both sides are transformed, it can deal with non-normality, heteroskedasticity and curved relationships, though balancing up all three at once can sometimes be tricky. It's often the case that making relationships more homoskedastic can improve the normality (though usually not completely). When taking the interpretation back to the original scale, care needs to be taken.
Here's an example of the result of transforming a relationship that's non-linear and strongly heteroskedastic to one where linear regression is more suitable (from here):
nonlinear regression - $y = g(\mathbf{x})+\epsilon$
($\mathbf{x}$ is a vector of regressors) used for a nonlinear functional relationship ($g$) with either an assumption of Gaussian errors ($\epsilon$) or simply a square-error loss function. Widely used in the physical sciences (physics, chemistry and so on), though may be found in a variety of other contexts.
An example (from here black is data, pink is fitted model, $\text{E}(y) = \beta_0 + \beta_1 e^{-\beta_2 x}$):
A second example (from here):
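A minimal sketch of fitting the exponential-decay model in the caption above by nonlinear least squares (the data and starting values are invented):

```python
import numpy as np
from scipy.optimize import curve_fit

def model(x, b0, b1, b2):
    # E(y) = b0 + b1 * exp(-b2 * x), as in the example above
    return b0 + b1 * np.exp(-b2 * x)

rng = np.random.default_rng(5)
x = np.linspace(0, 5, 60)
y = model(x, 1.0, 3.0, 1.2) + rng.normal(scale=0.1, size=x.size)

params, cov = curve_fit(model, x, y, p0=[1.0, 1.0, 1.0])   # p0: rough starting values
print(params)
```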
Generalized Linear models (GLM)
Used for both Gaussian and non-Gaussian $Y$ (within the exponential family) for functions where $g[E(Y|\mathbf{X=x})]$ is linear in $\mathbf{x}$ and the variance is a function of the mean. Very widely used.
Example of a fit of a binomial GLM to mortality data ($E(d_x)=E_xq_x$, where $q_x = \frac{Ab^x}{1+Ab^x}$):
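As a hedged sketch of a binomial GLM in Python (a plain logistic fit on made-up data, not the mortality model of the caption above):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
x = rng.normal(size=200)
p = 1 / (1 + np.exp(-(-0.5 + 1.5 * x)))   # true success probabilities
y = rng.binomial(1, p)                     # binary response

X = sm.add_constant(x)                     # intercept + slope
fit = sm.GLM(y, X, family=sm.families.Binomial()).fit()
print(fit.params)                          # roughly recovers (-0.5, 1.5)
```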
There are more general models fitted via maximum likelihood estimation, method of moments (MoM or sometimes MME) or in other ways. In general some model is specified, some loss function is minimized, or some other fitting criterion specified (as in MoM), in order to produce estimates; a variety of optimization or root-finding methods are applied.
Smoothing techniques, including regression splines, smoothing splines (or penalized versions of either), kernel smoothing/local linear regression/local polynomial regression (including LOESS and other local regression methods), and so on. Also under the umbrella of nonparametric regression, though that includes some other techniques not of direct relevance to your question.
See also Generalized Additive Models.
Example (weighted natural cubic spline fit to log-transformed data):
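And a small smoothing sketch (illustrative only; the data are simulated) using a smoothing spline from SciPy; LOESS-type smoothers are available elsewhere, e.g. statsmodels' lowess:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(7)
x = np.sort(rng.uniform(0, 10, 120))
y = np.sin(x) + rng.normal(scale=0.3, size=x.size)

spline = UnivariateSpline(x, y, s=len(x) * 0.3**2)   # s controls the amount of smoothing
y_smooth = spline(x)
print(y_smooth[:5])
```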
There's a plethora of information on almost all of these topics here on CV.
The book "An R and S-PLUS Companion to Applied Regression" (Fox; or, more recently, "An R Companion to Applied Regression" by Fox & Weisberg) has a number of relevant chapters (including coverage of transformation and GLMs), and there are additional online chapters here (especially the first couple of chapters at that link, which cover nonlinear regression and nonparametric regression) that you may find useful.
Also see Fox, Applied Regression, Linear Models, and Related Methods.
Some of the models/methods are discussed in Elements of Statistical Learning (Hastie, Tibshirani and Friedman). The 10th printing of the second edition is available online, but it's not especially introductory (though parts are pretty straightforward) and its focus is different from what you're likely after. Nevertheless, you may find some of its chapters of value.