51,301
What is the VC dimension of a decision tree?
I'm not sure this is a question with a simple answer, nor do I believe it is a question that even needs to be asked about decision trees. Consult Aslan et al., Calculating the VC-Dimension of Trees (2009). They address this problem by doing an exhaustive search in small trees and then providing an approximate, recursive formula for estimating the VC dimension of larger trees. They then use this formula as part of a pruning algorithm. Had there been a closed-form answer to your question, I am sure they would have supplied it. They felt the need to iterate their way through even fairly small trees.

My two cents' worth: I'm not sure that it's meaningful to talk about the VC dimension for decision trees. Consider a $d$-dimensional response, where each item is a binary outcome. This is the situation considered by Aslan et al. There are $2^d$ possible outcomes in this sample space and $2^d$ possible response patterns. If I build a complete tree, with $d$ levels and $2^d$ leaves, then I can shatter any pattern of $2^d$ responses. But nobody fits complete trees. Typically, you overfit and then prune back using cross-validation. What you get at the end is a smaller and simpler tree, but your hypothesis set is still large.

Aslan et al. try to estimate the VC dimension of families of isomorphic trees. Each family is a hypothesis set with its own VC dimension. The previous picture illustrates a tree for a space with $d=3$ that shatters 4 points: $(1,0,0,1),(1,1,1,0),(0,1,0,1),(1,1,0,1)$. The fourth entry is the "response". Aslan et al. would regard a tree with the same shape, but using $x_1$ and $x_2$, say, to be isomorphic and part of the same hypothesis set. So, although there are only 3 leaves on each of these trees, the set of such trees can shatter 4 points and the VC dimension is 4 in this case. However, the same tree could occur in a space with 4 variables, in which case the VC dimension would be 5. So it's complicated.

Aslan's brute-force solution seems to work fairly well, but what they get isn't really the VC dimension of the algorithms people use, since these rely on pruning and cross-validation. It's hard to say what the hypothesis space actually is, since in principle we start with a shattering number of possible trees but then prune back to something more reasonable. Even if someone begins with an a priori choice not to go beyond two layers, say, there may still be a need to prune the tree. And we don't really need the VC dimension, since cross-validation goes after the out-of-sample error directly.

To be fair to Aslan et al., they don't use the VC dimension to characterize their hypothesis space. They calculate the VC dimension of branches and use that quantity to determine whether the branch should be cut. At each stage, they use the VC dimension of the specific configuration of the branch under consideration. They don't look at the VC dimension of the problem as a whole.

If your variables are continuous and the response depends on reaching a threshold, then a decision tree is basically creating a bunch of perceptrons, so the VC dimension would presumably be greater than that (since you have to estimate the cutoff point to make each split). If the response depends monotonically on a continuous predictor, CART will chop it up into a bunch of steps, trying to recreate a regression model. I would not use trees in that case -- possibly a GAM or regression.
51,302
What is the VC dimension of a decision tree?
I know this post is kind of old and already has an accepted answer, but as it is the first link to appear on Google when asking about the VC dimension of decision trees, I will allow myself to give some new information as a follow-up. In a recent paper, Decision trees as partitioning machines to characterize their generalization properties by Jean-Samuel Leboeuf, Frédéric LeBlanc and Mario Marchand, the authors consider the VC dimension of decision trees on examples with $\ell$ features (which is a generalization of your question, which concerns only 2 dimensions). There, they show that the VC dimension of the class of a single split (AKA decision stumps) is given by the largest integer $d$ which satisfies $2\ell \ge \binom{d}{\left\lfloor\frac{d}{2}\right\rfloor}$. The proof is quite complex and proceeds by reformulating the problem as a matching problem on graphs. Furthermore, while an exact expression is still out of reach, they are able to give an upper bound on the growth function of general decision trees in a recursive fashion, from which they show that the VC dimension is of order $\mathcal{O}(L_T \log (\ell L_T))$, with $L_T$ the number of leaves of the tree. They also develop a new pruning algorithm based on their results, which seems to perform better in practice than CART's cost-complexity pruning algorithm without the need for cross-validation, showing that the VC dimension of decision trees can be useful. Disclaimer: I am one of the authors of the paper.
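As a rough illustration (my own sketch in R, not code from the paper), the decision-stump bound quoted above can be evaluated numerically: find the largest $d$ with $2\ell \ge \binom{d}{\lfloor d/2 \rfloor}$.

    # VC dimension of single-split trees (decision stumps) on l real-valued features,
    # per the bound stated above.
    vc_stump <- function(l) {
      d <- 1
      while (choose(d + 1, floor((d + 1) / 2)) <= 2 * l) d <- d + 1
      d
    }
    sapply(c(1, 2, 5, 10, 100), vc_stump)   # 2 3 5 6 9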
51,303
Is there a t-test equivalent to Stouffer's z-test?
According to Becker's chapter on combining $p$-values in Cooper and Hedges' book, you can use $$ \frac{\sum_i t_{f_i}(p_i)}{\sqrt{\sum_i \frac{f_i}{f_i-2}}} > z(\alpha) $$ where $t_{f_i}(p_i)$ is the Student's $t$ value corresponding to p-value $p_i$ with $f_i$ degrees of freedom, and $\alpha$ is the desired significance level. She does not give a reference for the method, which she attributes to Winer. The resemblance to the formula for Stouffer's method is clear, though.

Cooper, H. and Hedges, L. V. (1994). The Handbook of Research Synthesis. Russell Sage, New York.
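A hedged sketch of this combination rule in R (my own implementation, not Becker's or Winer's code): convert each one-sided p-value to a t-score with its own degrees of freedom, then standardize the sum by the square root of the summed t variances, $f/(f-2)$.

    combine_t <- function(p, f) {
      stopifnot(length(p) == length(f), all(f > 2))
      t_i <- qt(p, df = f, lower.tail = FALSE)   # t-score for each one-sided p-value
      sum(t_i) / sqrt(sum(f / (f - 2)))          # compare to qnorm(1 - alpha)
    }
    combine_t(p = c(0.04, 0.10, 0.02), f = c(12, 20, 8))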
51,304
Tensorflow loss not changing and also computed gradients and applied batch norm but still loss is not changing? [closed]
I have just basic knowledge of TensorFlow, but I will try to guess the issue here. The problem seems to lie in these calls. For conv2 to conv5:

    conv = tf.nn.conv2d(outputs_fed_lstm, conv_weights, strides=[1,1,1,1], padding="VALID")

You want to access the outputs_fed_lstm declared within the scope of conv4 (so it should be something like conv4.outputs_fed_lstm, mapped to whatever the format in TensorFlow is); instead you just seem to be feeding the outputs_fed_lstm of the same convx back into its own input. For conv1 the call is:

    conv = tf.nn.conv2d(x, conv_weights, strides=[1,1,1,1], padding="VALID")

so there the gradients are valid and exist.
51,305
Row Correlation Heatmap Pandas
Answered in comments by Andy W: Sorting the correlation matrix may provide clusters of variables, see here for one description of how to sort them
51,306
R's lmer cheat sheet
What's the difference between (~1 + ...) and (1 | ...) and (0 | ...) etc.?

Say you have variable V1 predicted by categorical variable V2, which is treated as a random effect, and continuous variable V3, which is treated as a linear fixed effect. Using lmer syntax, the simplest model (M1) is:

    V1 ~ (1|V2) + V3

This model will estimate:

P1: A global intercept
P2: Random-effect intercepts for V2 (i.e. for each level of V2, that level's intercept's deviation from the global intercept)
P3: A single global estimate for the effect (slope) of V3

The next most complex model (M2) is:

    V1 ~ (1|V2) + V3 + (0+V3|V2)

This model estimates all the parameters from M1, but will additionally estimate:

P4: The effect of V3 within each level of V2 (more specifically, the degree to which the V3 effect within a given level deviates from the global effect of V3), while enforcing a zero correlation between the intercept deviations and V3 effect deviations across levels of V2.

This latter restriction is relaxed in a final, most complex model (M3):

    V1 ~ (1+V3|V2) + V3

in which all parameters from M2 are estimated while allowing correlation between the intercept deviations and V3 effect deviations within levels of V2. Thus, in M3, an additional parameter is estimated:

P5: The correlation between intercept deviations and V3 deviations across levels of V2

Usually model pairs like M2 and M3 are computed and then compared to evaluate the evidence for correlations between fixed effects (including the global intercept).

Now consider adding another fixed-effect predictor, V4. The model:

    V1 ~ (1+V3*V4|V2) + V3*V4

would estimate:

P1: A global intercept
P2: A single global estimate for the effect of V3
P3: A single global estimate for the effect of V4
P4: A single global estimate for the interaction between V3 and V4
P5: Deviations of the intercept from P1 in each level of V2
P6: Deviations of the V3 effect from P2 in each level of V2
P7: Deviations of the V4 effect from P3 in each level of V2
P8: Deviations of the V3-by-V4 interaction from P4 in each level of V2
P9: Correlation between P5 and P6 across levels of V2
P10: Correlation between P5 and P7 across levels of V2
P11: Correlation between P5 and P8 across levels of V2
P12: Correlation between P6 and P7 across levels of V2
P13: Correlation between P6 and P8 across levels of V2
P14: Correlation between P7 and P8 across levels of V2

Phew, that's a lot of parameters! And I didn't even bother to list the variance parameters estimated by the model. What's more, if you have a categorical variable with more than 2 levels that you want to model as a fixed effect, instead of a single effect for that variable you will always be estimating k-1 effects (where k is the number of levels), thereby exploding the number of parameters to be estimated by the model even further.
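A short sketch (assuming a hypothetical data frame `dat` with columns V1, V2, V3) of fitting and comparing the three models described above with lme4:

    library(lme4)
    m1 <- lmer(V1 ~ V3 + (1 | V2),                  data = dat, REML = FALSE)
    m2 <- lmer(V1 ~ V3 + (1 | V2) + (0 + V3 | V2),  data = dat, REML = FALSE)
    m3 <- lmer(V1 ~ V3 + (1 + V3 | V2),             data = dat, REML = FALSE)
    anova(m2, m3)   # likelihood-ratio test for the intercept-slope correlation (P5)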
51,307
R's lmer cheat sheet
The general trick, as mentioned in another answer, is that the formula follows the form dependent ~ independent | grouping. The grouping is generally a random factor; you can include fixed factors without any grouping, and you can have additional random factors without any fixed factor (an intercept-only model). A + between factors indicates no interaction; a * indicates interaction.

For random factors, you have three basic variants:

1. Intercepts only by random factor: (1 | random.factor)
2. Slopes only by random factor: (0 + fixed.factor | random.factor)
3. Intercepts and slopes by random factor: (1 + fixed.factor | random.factor)

Note that variant 3 has the slope and the intercept calculated in the same grouping, i.e. at the same time. If we want the slope and the intercept calculated independently, i.e. without any assumed correlation between the two, we need a fourth variant:

4. Intercept and slope, separately, by random factor: (1 | random.factor) + (0 + fixed.factor | random.factor). An alternative way to write this is using the double-bar notation fixed.factor + (fixed.factor || random.factor).

There's also a nice summary in another response to this question that you should look at.

If you're up to digging into the math a bit, Barr et al. (2013) summarize the lmer syntax quite nicely in their Table 1, adapted here to meet the constraints of tableless markdown. That paper dealt with psycholinguistic data, so the two random effects are Subject and Item. Models and equivalent lme4 formula syntax:

1. $Y_{si} = β_0 + β_{1}X_{i} + e_{si}$ : N/A (not a mixed-effects model)
2. $Y_{si} = β_0 + S_{0s} + β_{1}X_{i} + e_{si}$ : Y ∼ X + (1∣Subject)
3. $Y_{si} = β_0 + S_{0s} + (β_{1} + S_{1s})X_i + e_{si}$ : Y ∼ X + (1 + X∣Subject)
4. $Y_{si} = β_0 + S_{0s} + I_{0i} + (β_{1} + S_{1s})X_i + e_{si}$ : Y ∼ X + (1 + X∣Subject) + (1∣Item)
5. $Y_{si} = β_0 + S_{0s} + I_{0i} + β_{1}X_{i} + e_{si}$ : Y ∼ X + (1∣Subject) + (1∣Item)
6. As (4), but $S_{0s}$, $S_{1s}$ independent : Y ∼ X + (1∣Subject) + (0 + X∣Subject) + (1∣Item)
7. $Y_{si} = β_0 + I_{0i} + (β_{1} + S_{1s})X_i + e_{si}$ : Y ∼ X + (0 + X∣Subject) + (1∣Item)

References: Barr, D. J., Levy, R., Scheepers, C., and Tily, H. J. (2013). Random effects structure for confirmatory hypothesis testing: Keep it maximal. Journal of Memory and Language, 68:255–278.
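A small sketch (hypothetical data frame `dat` with numeric y and x and grouping factor g) checking that the double-bar form is the same model as the explicit two-term form in variant 4:

    library(lme4)
    f1 <- lmer(y ~ x + (1 | g) + (0 + x | g), data = dat)
    f2 <- lmer(y ~ x + (x || g),              data = dat)
    VarCorr(f1)
    VarCorr(f2)   # same variance components, no intercept-slope correlation
    # Note: the || expansion behaves this way for a numeric x; categorical
    # fixed factors are not fully decorrelated by || in the same manner.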
51,308
R's lmer cheat sheet
The | symbol indicates a grouping factor in mixed-effects models. As per Pinheiro & Bates: ...The formula also designates a response and, when available, a primary covariate. It is given as response ~ primary | grouping where response is an expression for the response, primary is an expression for the primary covariate, and grouping is an expression for the grouping factor. Depending on which method you use to perform the mixed-effects analysis in R, you may need to create a groupedData object to be able to use the grouping in the analysis (see the nlme package for details; lme4 doesn't seem to need this). I can't speak to the way you have specified your lmer model statements because I don't know your data. However, having multiple (1|foo) terms in the model line is unusual from what I have seen. What are you trying to model?
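A brief sketch of the two routes, using the Orthodont data that ships with nlme (my own example, not from the question): nlme can take the grouping from a groupedData object, while lme4 takes it directly in the formula.

    library(nlme)
    orth <- groupedData(distance ~ age | Subject, data = as.data.frame(Orthodont))
    fit_nlme <- lme(distance ~ age, random = ~ 1 | Subject, data = orth)

    library(lme4)
    fit_lme4 <- lmer(distance ~ age + (1 | Subject), data = as.data.frame(Orthodont))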
51,309
Why is multiple comparison a problem?
You've stated something that is a classic counter-argument to Bonferroni corrections. Shouldn't I adjust my alpha criterion based on every test I will ever make? This kind of ad absurdum implication is why some people do not believe in Bonferroni-style corrections at all. Sometimes the kind of data one deals with in their career is such that this is not an issue. For judges who make one, or very few, decisions on each new piece of evidence this is a very valid argument. But what about the judge with 20 defendants who is basing their judgment on a single large set of data (e.g. war tribunals)?

You're ignoring the kicks-at-the-can part of the argument. Generally scientists are looking for something: a p-value less than alpha. Every attempt to find one is another kick at the can. One will eventually find one if one takes enough shots at it. Therefore, they should be penalized for doing that.

The way you harmonize these two arguments is to realize they are both true. The simplest solution is to treat testing of differences within a single dataset as a kicks-at-the-can kind of problem, while accepting that expanding the scope of correction outside that would be a slippery slope.

This is a genuinely difficult problem in a number of fields, notably fMRI, where thousands of data points are being compared and some are bound to come up as significant by chance. Given that the field has been historically very exploratory, one has to do something to correct for the fact that hundreds of areas of the brain will look significant purely by chance. Therefore, many methods of adjusting the criterion have been developed in that field.

On the other hand, in some fields one might at most be looking at 3 to 5 levels of a variable and always just test every combination if a significant ANOVA occurs. This is known to have some problems (Type I errors), but it's not particularly terrible.

It depends on your point of view. The fMRI researcher recognizes a real need for a criterion shift. The person looking at a small ANOVA may feel that there's clearly something there from the test. The proper conservative point of view on multiple comparisons is to always do something about them, but only based on a single dataset. Any new data resets the criterion... unless you're a Bayesian...
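A quick back-of-the-envelope sketch in R of the kicks-at-the-can point: with m independent tests of true nulls at alpha = 0.05, the chance of at least one false positive is 1 - 0.95^m.

    m <- c(1, 5, 20, 100)
    round(1 - 0.95^m, 2)   # 0.05 0.23 0.64 0.99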
51,310
Why is multiple comparison a problem?
Well-respected statisticians have taken a wide variety of positions on multiple comparisons. It's a subtle subject. If someone thinks it's simple, I'd wonder how much they've thought about it. Here's an interesting Bayesian perspective on multiple testing from Andrew Gelman: Why we don't (usually) worry about multiple comparisons.
51,311
Why is multiple comparison a problem?
Related to the comment earlier, what the fMRI researcher should remember is that clinically important outcomes are what matter, not the density shift of a single pixel on an fMRI of the brain. If it doesn't result in a clinical improvement/detriment, it doesn't matter. That is one way of reducing the concern about multiple comparisons.

See also:

Bauer, P. (1991). Multiple testing in clinical trials. Stat Med, 10(6), 871-89; discussion 889-90.
Proschan, M. A. & Waclawiw, M. A. (2000). Practical guidelines for multiplicity adjustment in clinical trials. Control Clin Trials, 21(6), 527-39.
Rothman, K. J. (1990). No adjustments are needed for multiple comparisons. Epidemiology (Cambridge, Mass.), 1(1), 43-6.
Perneger, T. V. (1998). What's wrong with Bonferroni adjustments. BMJ (Clinical Research Ed.), 316(7139), 1236-8.
51,312
Why is multiple comparison a problem?
To fix ideas, I will take the case where you observe $n$ independent random variables $(X_i)_{i=1,\dots,n}$ such that for $i=1,\dots,n$, $X_i$ is drawn from $\mathcal{N}(\theta_i,1)$. I assume that you want to know which ones have non-zero mean; formally, you want to test $H_{0i} : \theta_i=0$ vs $H_{1i} : \theta_i\neq 0$.

Definition of a threshold: You have $n$ decisions to make, and you may have different aims. For a given test $i$ you are certainly going to choose a threshold $\tau_i$ and decide not to accept $H_{0i}$ if $|X_i|>\tau_i$.

Different options: You have to choose the thresholds $\tau_i$, and for that you have two options:

- choose the same threshold for every test, or
- choose a different threshold for each test (most often a data-dependent threshold, see below).

Different aims: These options can be driven by different aims, such as

- controlling the probability of wrongly rejecting $H_{0i}$ for one or more than one $i$, or
- controlling the expectation of the false alarm ratio (the False Discovery Rate).

Whatever your aim is in the end, it is a good idea to use a data-dependent threshold.

My answer to your question: your intuition is related to the main heuristic for choosing a data-dependent threshold. It is the following (it is at the origin of Holm's procedure, which is more powerful than Bonferroni): imagine you have already taken a decision for the $p$ lowest $|X_{i}|$ and the decision is to accept $H_{0i}$ for all of them. Then you only have to make $n-p$ comparisons, and you haven't taken any risk of rejecting $H_{0i}$ wrongly! Since you haven't used your budget, you may take a little more risk for the remaining tests and choose a larger threshold.

In the case of your judges: I assume (and I guess you should do the same) that both judges have the same lifetime budget of false accusations. The 60-year-old judge may be less conservative if, in the past, he did not accuse anyone! But if he has already made a lot of accusations, he will be more conservative, maybe even more so than the younger judge.
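A small sketch of this heuristic using base R's p.adjust: Holm spends the alpha budget sequentially and is uniformly at least as powerful as Bonferroni (the example p-values are made up).

    p <- c(0.001, 0.010, 0.030, 0.040, 0.200)
    cbind(bonferroni = p.adjust(p, "bonferroni"),
          holm       = p.adjust(p, "holm"))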
51,313
Why is multiple comparison a problem?
An illustrative (and funny) article (http://www.jsur.org/ar/jsur_ben102010.pdf) about the need to correct for multiple testing in practical studies involving many variables, e.g. functional MRI (fMRI). This short quotation contains most of the message: "[...] we completed an fMRI scanning session with a post-mortem Atlantic Salmon as the subject. The salmon was shown the same social perspective-taking task that was later administered to a group of human subjects." It is, in my experience, a terrific argument for encouraging users to apply multiple-testing corrections.
51,314
Why does sample variance have n-1 in the denominator? [duplicate]
To put it simply, $(n-1)$ is a smaller number than $(n)$. When you divide by a smaller number you get a larger number, so when you divide by $(n-1)$ the sample variance will work out to be a larger number. Let's think about what a larger vs. smaller sample variance means. If the sample variance is larger, then there is a greater chance that it captures the true population variance. That is why, when you divide by $(n-1)$, we call that an unbiased sample estimate, whereas dividing by $(n)$ is called a biased sample estimate. Because we are trying to reveal information about a population by calculating the variance from a sample set, we probably do not want to underestimate the variance. Basically, by just dividing by $(n)$ we are underestimating the true population variance; that is why it is called a biased estimate. It basically comes down to calculating a biased vs. unbiased sample variance estimate. Also, because you asked what an estimator is: there was a good post here on CV that will give you some good insight. Hope this helps!
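A simulation sketch in R of the bias being described: dividing by n systematically underestimates the population variance, while dividing by n-1 (as var() does) does not, on average.

    set.seed(1)
    n <- 5; sigma2 <- 4
    sims <- replicate(1e5, {
      x <- rnorm(n, sd = sqrt(sigma2))
      c(biased = sum((x - mean(x))^2) / n,  # divide by n
        unbiased = var(x))                  # divides by n - 1
    })
    rowMeans(sims)   # roughly 3.2 vs 4.0, i.e. sigma2*(n-1)/n vs sigma2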
51,315
Kruskal Wallis or MANOVA
For 8 input variables and 8 outcome variables, you need multivariate multiple regression or MANCOVA. MANOVA is used in the case of one input and multiple outcomes.
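A minimal sketch of multivariate multiple regression in base R, assuming a hypothetical data frame `dat` with outcomes Y1..Y8 and predictors X1..X8:

    fit <- lm(cbind(Y1, Y2, Y3, Y4, Y5, Y6, Y7, Y8) ~
                X1 + X2 + X3 + X4 + X5 + X6 + X7 + X8, data = dat)
    anova(fit)   # multivariate tests (Pillai's trace by default) per predictor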
51,316
Kruskal Wallis or MANOVA
Can I ask what your sample size is? Are you planning on running all those tests? Is there no other way of reorganizing your data? If not, why not use the following multinomial model:

    DV = IV1 + IV2 + IV3 + IV4 + IV5 + IV6 + IV7

where each of your nominal categories is a dummy variable (1,0) and IV8 is your baseline?
51,317
Comparing smoothing splines vs loess for smoothing?
Here is some R code/example that will let you compare the fits for a loess fit and a spline fit:

    library(TeachingDemos)
    library(splines)
    library(lattice)   # for the ethanol example data used below

    tmpfun <- function(x, y, span = .75, df = 3) {
      plot(x, y)
      fit1 <- lm(y ~ ns(x, df))
      xx <- seq(min(x), max(x), length.out = 250)
      yy <- predict(fit1, data.frame(x = xx))
      lines(xx, yy, col = 'blue')
      fit2 <- loess(y ~ x, span = span)
      yy <- predict(fit2, data.frame(x = xx))
      lines(xx, yy, col = 'green')
      invisible(NULL)
    }

    tmplst <- list(
      span = list('slider', from = 0.1, to = 1.5, resolution = 0.05, init = 0.75),
      df   = list('slider', from = 3, to = 25, resolution = 1, init = 3))

    tkexamp(tmpfun(ethanol$E, ethanol$NOx), tmplst)

You can try it with your data and change the code to try other types or options. You may also want to look at the loess.demo function in the TeachingDemos package for a better understanding of what the loess algorithm does. Note that what you see from loess is often a combination of loess with a second interpolation smoothing (sometimes itself a spline); the loess.demo function actually shows both the smoothed and the raw loess fit. Theoretically you can always find a spline that approximates another continuous function as closely as you want, but it is unlikely that there will be a simple choice of knots that will reliably give a close approximation to a loess fit for any data set.
51,318
Comparing smoothing splines vs loess for smoothing?
The actual results from a smoothing spline or loess are going to be pretty similar. They might look a little different at the edges of the support, but as long as you make sure it's a "natural" smoothing spline they will look really similar. If you are just using one to add a "smoother" to a scatterplot, there's no real reason to prefer one over the other. If instead you want to make predictions on new data, it's generally much easier to use a smoothing spline. This is because the smoothing spline is a direct basis expansion of the original data; if you used 100 knots to make it, that means you created ~100 new variables from the original variable. Loess instead just estimates the response at all the values experienced (or a stratified subset for large data). In general, there are established algorithms to optimize the penalty value for smoothing splines (mgcv in R probably does this the best). Loess isn't quite as clear-cut, but you'll generally still get reasonable output from any implementation. mgcv also reports equivalent degrees of freedom, which gives you a sense of how "non-linear" your data is. I find that when modeling on very large data, a simpler natural spline often provides similar results for minimal calculation compared to either a smoothing spline or loess.
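A sketch of the mgcv route mentioned above, assuming a hypothetical data frame `dat` with columns x and y: the smoothing penalty is chosen automatically and the summary reports the effective degrees of freedom of the smooth.

    library(mgcv)
    fit <- gam(y ~ s(x), data = dat, method = "REML")
    summary(fit)   # the "edf" column indicates how non-linear the fitted smooth is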
51,319
Linear Regression - Confidence interval for mean response vs prediction interval
I think one is conditional on x (at one value of x, your confidence statement is correct), and the other is simultaneous over the entire support of the regression line. The second should be larger due to "multiplicity".
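For reference, both kinds of interval named in the question can be requested at a single value of x from predict() in R (hypothetical data frame `dat` with columns x and y); the prediction interval for a new observation is the wider of the two.

    fit <- lm(y ~ x, data = dat)
    new <- data.frame(x = 10)
    predict(fit, new, interval = "confidence")   # interval for the mean response at x = 10
    predict(fit, new, interval = "prediction")   # interval for a new observation at x = 10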
51,320
How to "trick" R into producing treatment effects in a treatment * dummy model?
I have to struggle a bit with the mathematical notation. If I put in the following formula that you provided: $$y = \delta_0 + \sum_{j=1}^n\beta_j D_j + \sum_{i=1}^m \sum_{j=0}^n \gamma_{i,j}T_iD_j$$ isn't that essentially saying that there should be one coefficient per dummy variable (excluding the first), and then one coefficient per combination of T and D? In that case, you could simply write the formula as:

    y ~ group + treatment:group

But when including interaction terms, it is often recommended that you include all the main terms that appear in the interaction terms:

    y ~ group * treatment
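A quick sketch (hypothetical data frame `dat` with factors group and treatment and outcome y) of how the two parameterizations differ in the coefficients they report:

    fit_nested <- lm(y ~ group + group:treatment, data = dat)  # one treatment effect per group
    fit_inter  <- lm(y ~ group * treatment,       data = dat)  # main effects plus interaction contrasts
    coef(fit_nested)
    coef(fit_inter)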
How to "trick" R into producing treatment effects in a treatment * dummy model?
I have to struggle a bit with the mathematical notation, if I but in the following formula that you provided: $$y = \delta_0 + \sum_{j=1}^n\beta_j D_j + \sum_{i=1}^m \sum_{j=0}^n \gamma_{i,j}T_iD_j$$
How to "trick" R into producing treatment effects in a treatment * dummy model? I have to struggle a bit with the mathematical notation, if I but in the following formula that you provided: $$y = \delta_0 + \sum_{j=1}^n\beta_j D_j + \sum_{i=1}^m \sum_{j=0}^n \gamma_{i,j}T_iD_j$$ Isn't that essentially saying that there should be one coefficient per dummy variable (excluding the first), and then one coefficient per combination of T and D? In that case, you could simply write the formula as: y ~ group + treatment:group But when including interaction terms, it is often recommended that you should include all the main terms that are included in the interaction terms: y ~ group * treatment
How to "trick" R into producing treatment effects in a treatment * dummy model? I have to struggle a bit with the mathematical notation, if I but in the following formula that you provided: $$y = \delta_0 + \sum_{j=1}^n\beta_j D_j + \sum_{i=1}^m \sum_{j=0}^n \gamma_{i,j}T_iD_j$$
51,321
Structural Equation Modeling Two-Step Method
Do I get a Measurement Model set in stone, then proceed to estimate the entire model all together? Yes, their idea was to first fix measurement-model misspecifications, then to begin evaluating the fit of the structural model given a well-fitting measurement model. Anderson, J. C., & Gerbing, D. W. (1988). Structural equation modeling in practice: A review and recommended two-step approach. Psychological Bulletin, 103(3), 411–423. https://doi.org/10.1037/0033-2909.103.3.411 However, there are limitations with this method. A recent development improves on those limitations: the "Structural-After-Measurement" (SAM) approach to SEM. The PDF preprint of the Psychological Methods paper can be downloaded here: https://osf.io/pekbm/ When reporting this, do I report the measurement parts that I obtained from step 1, or from step 2? I would report the initial CFA's fit, discuss what was wrong with it and how you decided to fix it, then report the fit of the respecified CFA and its parameter estimates (e.g., loadings). The fit of the structural model is tested by comparing it to the CFA; if it fits as well as the CFA, then you can report its estimates (e.g., regressions). Posting an online supplementary appendix of all fitted models greatly improves transparency, facilitated by new repositories like the Open Science Framework.
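In lavaan, the two-step comparison might look something like the following sketch (the three-factor structure and the indicator names x1-x9 are purely hypothetical, as is mydata):

library(lavaan)
meas <- '
  F1 =~ x1 + x2 + x3
  F2 =~ x4 + x5 + x6
  F3 =~ x7 + x8 + x9
'
struct <- paste(meas, '
  F2 ~ F1
  F3 ~ F2    # no direct F1 -> F3 path, so this model is more restrictive than the CFA
')
fit_cfa <- cfa(meas, data = mydata)     # step 1: measurement model
fit_sem <- sem(struct, data = mydata)   # step 2: structural model
anova(fit_cfa, fit_sem)                 # does the structural model fit as well as the CFA?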
51,322
Measures of variable importance in random forests
The first one can be 'interpreted' as follows: if a predictor is important in your current model, then assigning other values to that predictor randomly but 'realistically' (i.e. permuting this predictor's values over your dataset) should have a negative influence on prediction; that is, using the same model to predict from data that are the same except for the one variable should give worse predictions. So, you take a predictive measure (MSE) on the original dataset and then on the 'permuted' dataset, and you compare them somehow. One way, particularly since we expect the original MSE to always be smaller, is to take the difference. Finally, to make the values comparable across variables, they are scaled. For the second one: at each split, you can calculate how much the split reduces node impurity (for regression trees, indeed, the difference between the RSS before and after the split). This is summed over all splits for that variable, over all trees. Note: a good read is Elements of Statistical Learning by Hastie, Tibshirani and Friedman.
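Both measures are reported by the randomForest package in R when you ask for importance; a minimal sketch with a built-in data set:

library(randomForest)
set.seed(1)
rf <- randomForest(mpg ~ ., data = mtcars, importance = TRUE, ntree = 500)
importance(rf)   # %IncMSE = permutation importance, IncNodePurity = total RSS reduction
varImpPlot(rf)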
51,323
Measures of variable importance in random forests
Random Forest importance metrics as implemented in the randomForest package in R have quirks in that correlated predictors get low importance values. http://bioinformatics.oxfordjournals.org/content/early/2010/04/12/bioinformatics.btq134.full.pdf I have a modified implementation of random forests out on CRAN which implements their approach of estimating empirical p values and false discovery rates, here http://cran.r-project.org/web/packages/pRF/index.html
51,324
GEE model returns GLM results
The parameter estimates of the intercept and slope are the same for GLM and GEE, which is expected. The standard errors, however, are different, which is also expected. GEE used the exchangeable covariance to adjust the standard errors in order to account for the correlation within ID that was induced in the $Y$ values by adding a random intercept for each ID.
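If you want to see this behaviour yourself, here is a rough simulation sketch using geepack (the data-generating numbers are arbitrary; point estimates come out essentially identical while the standard errors differ):

library(geepack)
set.seed(1)
n_id <- 50; n_per <- 5
dat <- data.frame(id = rep(1:n_id, each = n_per), x = rnorm(n_id * n_per))
dat$y <- 1 + 2 * dat$x + rep(rnorm(n_id, sd = 2), each = n_per) + rnorm(nrow(dat))
glm_fit <- glm(y ~ x, data = dat)
gee_fit <- geeglm(y ~ x, id = id, data = dat, corstr = "exchangeable")
cbind(coef(glm_fit), coef(gee_fit))     # nearly identical point estimates
summary(glm_fit)$coefficients           # but compare the standard errors ...
summary(gee_fit)$coefficients           # ... with the cluster-adjusted ones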
51,325
GEE model returns GLM results
I might know the answer. GEE won't work if your id variable is not 'sorted' in the data frame. Check to make sure it's 'clustered' in order. From the GEE documentation: "id: a vector which identifies the clusters. The length of id should be the same as the number of observations. Data are assumed to be sorted so that observations on a cluster are contiguous rows for all entities in the formula."
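So, before fitting, something like this (dat, y and x are placeholder names) makes sure the rows for each cluster are contiguous:

dat <- dat[order(dat$id), ]   # sort by cluster id first
fit <- gee::gee(y ~ x, id = id, data = dat, family = gaussian, corstr = "exchangeable")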
51,326
Do we have to tune the number of trees in a random forest?
It's common to find code snippets that treat $T$ as a hyper-parameter, and attempt to optimize over it in the same way as any other hyper-parameter. This is just wasting computational power: when all other hyper-parameters are fixed, the model’s loss stochastically decreases as the number of trees increases. Intuitive explanation Each tree in a random forest is identically distributed. The trees are identically distributed because each tree is grown using a randomization strategy that is repeated for each tree: boot-strap the training data, and then grow each tree by picking the best split for a feature from among the $m$ features selected for that node. The random forest procedure stands in contrast to boosting because the trees are grown on their own bootstrap subsample without regard to any of the other trees. (It is in this sense that the random forest algorithm is "embarrassingly parallel": you can parallelize tree construction because each tree is fit independently.) In the binary case, each random forest tree votes 1 for the positive class or 0 for the negative class for each sample. The average of all of these votes is taken as the classification score of the entire forest. (In the general $k$-nary case, we simply have a categorical distribution instead, but all of these arguments still apply.) The Weak Law of Large Numbers is applicable in these circumstances because the trees' decisions are identically-distributed r.v.s (in the sense that a random procedure determines whether the tree votes 1 or 0) and the variable of interest only takes values $\{0,1\}$ for each tree and therefore each experiment (tree decision) has finite variance (because all moments of countably finite r.v.s are finite). Applying WLLN in this case implies that, for each sample, the ensemble will tend toward a particular mean prediction value for that sample as the number of trees tends towards infinity. Additionally, for a given set of samples, a statistic of interest among those samples (such as the expected log-loss) will converge to a mean value as well, as the number of trees tends toward infinity. Elements of Statistical Learning Hastie et al. address this question very briefly in ESL (page 596). Another claim is that random forests “cannot overfit” the data. It is certainly true that increasing $\mathcal{B}$ [the number of trees in the ensemble] does not cause the random forest sequence to overfit... However, this limit can overfit the data; the average of fully grown trees can result in too rich a model, and incur unnecessary variance. Segal (2004) demonstrates small gains in performance by controlling the depths of the individual trees grown in random forests. Our experience is that using full-grown trees seldom costs much, and results in one less tuning parameter. Stated another way, for a fixed hyperparameter configuration, increasing the number of trees cannot overfit the data; however, the other hyperparameters might be a source of overfit. Mathematical explanation This section summarizes Philipp Probst & Anne-Laure Boulesteix "To tune or not to tune the number of trees in random forest?". The key results are The expected error rate and area under the ROC curve can be a non-monotonous function of the number of trees. a. The expected error rate (equiv. $\text{error rate} = 1 - \text{accuracy}$) as a function of $T$ the number of trees is given by $$ E(e_i(T)) = P\left(\sum_{t=1}^T e_{it} > 0.5\cdot T\right) $$ where $e_{it}$ is a binomial r.v. 
with expectation $E(e_{it}) = \epsilon_i$, the decision of a particular tree indexed by $t$. This function is increasing in $T$ for $\epsilon_{i} > 0.5$ and decreasing in $T$ for $\epsilon_{i} < 0.5$. The authors observe: We see that the convergence rate of the error rate curve is only dependent on the distribution of the $\epsilon_i$ of the observations. Hence, the convergence rate of the error rate curve is not directly dependent on the number of observations n or the number of features, but these characteristics could influence the empirical distribution of the $\epsilon_i$’s and hence possibly the convergence rate as outlined in Section 4.3.1 b. The authors note that ROC AUC (aka $c$-statistic) can be manipulated to have monotonous or non-monotonous curves as a function of $T$ depending on how the samples' expected scores align to their true classes. Probability-based measures, such as cross entropy and Brier score, are monotonic as a function of the number of trees. a. The Brier score has expectation $$ E(b_i(T)) = E(e_{it})^2 + \frac{\text{Var}(e_{it})}{T} $$ which is clearly a monotonously decreasing function of $T$. b. The log-loss (aka cross entropy loss) has an expectation which can be approximated by a Taylor expansion $$ E(l_i(T)) \approx -\log(1 - \epsilon_i + a) + \frac{\epsilon_i (1 - \epsilon_i) }{ 2 T (1 - \epsilon_i + a)^2} $$ which is likewise a decreasing function of $T$. (The constant $a$ is a small positive number that keeps the values inside the logarithm and denominator away from zero.) Experimental results considering 306 data sets support these findings. Experimental Demonstration This is a practical demonstration using the diamonds data that ships with ggplot2. I turned it into a classification task by binarizing the price into "high" and "low" categories, with the dividing line determined by the median price. From the perspective of cross-entropy, model improvements are very smooth. (However, the plot is not monotonic -- the divergence from the theoretical results presented above is because the theoretical results pertain to the expectation, rather than to the particular realizations of any one experiment.) On the other hand, error rate is deceptive in the sense that it can swing up or down, and sometimes stay there for a number of additional trees, before reverting. This is because it does not measure the degree of incorrectness of the classification decision. This can cause the error rate to have "blips" of improved performance w.r.t. the number of trees, by which I mean that some sample which is on the decision boundary will bounce back and forth between predicted classes. A very large number of trees can be required for this behavior to be more-or-less suppressed. Also, look at the behavior of error rate for a very small number of trees -- the results are wildly divergent! This implies that a method premised on choosing the number of trees this way is subject to a large amount of randomness. Moreover, repeating the same experiment with a different random seed could lead one to select a different number of trees purely on the basis of this randomness. In this sense, the behavior of the error rate for a small number of trees is entirely an artifact, both because we know that the LLN means that as the number of trees increases, this will tend towards its expectation, and because of the theoretical results in section 2. (Cross-validated hosts a number of questions comparing the merits of error rate/accuracy to other statistics.)
By contrast, the cross-entropy measurement is essentially stable after 200 trees, and virtually flat after 500. Finally, I repeated the exact same experiment for error rate with a different random seed. The results are strikingly different for small $T$. Code for this demonstration is available in this gist. "So how should I choose $T$ if I'm not tuning it?" Tuning the number of trees is unnecessary; instead, simply set the number of trees to a large, computationally feasible number, and let the asymptotic behavior of LLN do the rest. In the case that you have some kind of constraint (a cap on the total number of terminal nodes, a cap on the model estimation time, a limit to the size of the model on disk), this amounts to choosing the largest $T$ that satisfies your constraint. "Why do people tune over $T$ if it's wrong to do so?" This is purely speculation, but I think the belief that the number of trees in a random forest needs to be tuned persists because of two facts: Boosting algorithms like AdaBoost and XGBoost do require users to tune the number of trees in the ensemble, and some software users are not sophisticated enough to distinguish between boosting and bagging. (For a discussion of the distinction between boosting and bagging, see Is random forest a boosting algorithm?) Standard random forest implementations, like R's randomForest (which is, basically, the R interface to Breiman's FORTRAN code), only report error rate (or, equivalently, accuracy) as a function of trees. This is deceptive, because the accuracy is not a monotonic function of the number of trees, whereas continuous proper scoring rules such as Brier score and logloss are monotonic functions. Citation Philipp Probst & Anne-Laure Boulesteix "To tune or not to tune the number of trees in random forest?"
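As a rough stand-alone illustration of the same idea (this is not the code from the linked gist, just a quick sketch that subsamples diamonds to keep it fast), you can plot the out-of-bag error against the number of trees:

library(randomForest)
library(ggplot2)   # for the diamonds data

set.seed(42)
d <- as.data.frame(diamonds[sample(nrow(diamonds), 5000), ])
d$high_price <- factor(d$price > median(diamonds$price))

rf <- randomForest(high_price ~ carat + depth + table + x + y + z,
                   data = d, ntree = 1000)
# OOB error as a function of the number of trees: noisy for small ntree,
# settling down (though not monotonically) as ntree grows
plot(rf$err.rate[, "OOB"], type = "l",
     xlab = "number of trees", ylab = "OOB error rate")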
51,327
data are drawn from a probability distribution P?
A random variable is something that takes different values where there is some randomness to the value it can take. A probability distribution assigns a probability to each possible outcome of that random variable. In your case, you are observing data that could've been different. In other words, if you took another sample with the same sample size you'd likely observe something different. Hence what you observe is considered random. If your random variable is discrete, a probability distribution gives you a rule for the probability of each discrete value your random variable can take. If your random variable is continuous, it gives you a rule for the probability of any range of values your random variable can take.
51,328
data are drawn from a probability distribution P?
A probability distribution assigns likelihoods to the values in its domain. A good way to think about it is a six-sided dice roll. The die assigns probabilities to each of its sides: we have a 1 in 6 chance of seeing each side. However, in practice, if we roll the dice 6 times we are not likely to see all 6 sides. Instead, the sides that the dice rolls give us are sampled (or drawn) i.i.d. (independently and identically distributed). This means: each dice roll is independent of the next, and each dice roll has the same probability distribution. Thus, getting a '1' this time does not influence getting a '1' next time. Eventually, as the number of dice rolls gets increasingly large, the number of times you see each side will be roughly 1/6th of the number of times you rolled the dice. You can convince yourself with the following Python code:

import matplotlib.pyplot as plt
from numpy.random import randint

X = 10                     # number of dice rolls
rolls = randint(1, 7, X)   # integers 1..6 (the upper bound is exclusive)
plt.hist(rolls, bins=range(1, 8), align='left', rwidth=0.8)
plt.show()

Increase X and watch how the histogram changes. This analogy applies to the continuous domain as well. In the discrete domain, we say that each discrete item has a certain amount of probability mass. In the continuous domain, ranges of values (1.0-2.0, for example) have probability density. But the analogy is basically the same: the more i.i.d. samples you draw, the more the sample looks like the probability distribution.
51,329
data are drawn from a probability distribution P?
Typically, it means that you make a computer generate pseudorandom numbers between 0 and 1, which are then used as input to the inverse of the cumulative distribution function (CDF) of the distribution P. The image below shows the CDF for the normal distribution with mean = 0 and standard deviation = 1. The computer generates pseudorandom numbers between 0 and 1 and feeds them through the inverse of the CDF for the normal distribution. You can see how most of the values in the interval [0,1] on the Y-axis get mapped close to the mean, reflecting the characteristics of the normal distribution. E.g. the blue lines show that [~0.15, ~0.85] $\mapsto$ [-1,1], meaning most of the numbers in [0,1] on the Y-axis end up clustered around the mean. IMO, when a paper or a book says these "data are drawn from a probability distribution P", it means "we generated this data so it conforms to our theoretical notions about P". The alternative is to draw from some real population, i.e. a real sample. Then you don't know the distribution and have to infer it (fancy guessing).
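In R, this inverse-CDF (inverse transform) recipe for the normal case is just a couple of lines:

set.seed(1)
u <- runif(10000)                  # pseudorandom numbers between 0 and 1
x <- qnorm(u, mean = 0, sd = 1)    # inverse CDF (quantile function) of the target distribution
hist(x, breaks = 50)               # the draws follow N(0, 1)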
51,330
Which statistical methods could I use to determine if a price is good, based on a history of prices? [closed]
Build a good ARIMA model for your price history incorporating memory, events (e.g. Black Friday, etc.), day-of-the-week effects, seasonal dummies, level shifts and local time trends, and then run a program that incorporates robust ARIMA (not AIC- or BIC-based, as those procedures assume no outliers and restrict the model form to a list) and assess whether or not an intervention pulse has occurred at the last period. If it has, then an exception has been detected; if not, then there is nothing exceptional in the last data point. To do science is to search for repeated patterns. To detect anomalies is to identify values that do not follow repeated patterns. One learns the rules by observing when the current rules fail. The problem is that you can't catch an outlier without a model (at least a mild one) for your data; otherwise how would you know that a point violated that model? In fact, the process of growing understanding and finding and examining outliers must be iterative. This isn't a new thought. Bacon, writing in Novum Organum about 400 years ago, said: "errors of Nature, sports and monsters correct the understanding in regard to ordinary things, and reveal general forms. For whoever knows the ways of Nature will more easily notice her deviations; and, on the other hand, whoever knows her deviations will more accurately describe her ways." [II 29]
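As a very rough R sketch of the flag-the-last-point idea (price_ts is a hypothetical ts object of historical prices; note that auto.arima is information-criterion based, so this is only a crude stand-in for the robust intervention-detection procedure described above):

library(forecast)
fit <- auto.arima(price_ts)                 # simplified model of the price history
res <- residuals(fit)
# flag today's price as exceptional if its residual lies far outside the usual range
abs(tail(res, 1)) > 3 * sd(head(res, -1))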
51,331
Which statistical methods could I use to determine if a price is good, based on a history of prices? [closed]
Clearly the answer depends on the dynamics of the prices, so having a model for that would be required, as a previous answer indicates. It seems to me that if one cannot assume anything about prices, this would be related to the secretary (or princess...) problem. If all, say $r$, suitors of a princess are summoned sequentially to her presence and she is able to rank them, the strategy that maximizes her chance of choosing the optimal candidate is to reject the first $\approx r/e$ and then pick the first one who dominates all those previously seen, in case there is one (otherwise, put up with the last one): see Billingsley, Probability Theory, for instance. However, this is a strategy that maximizes her chance of getting the very best suitor; a more conservative princess might prefer a strategy that produces a good-enough partner. Seeing all prices in sequence and acting likewise might give you the maximum chance of getting the lowest one, but perhaps at the cost of often missing prices which are nearly as good. Just a thought, probably irrelevant, but it may give you some idea.
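A small, purely illustrative simulation of the classic observe-then-commit rule shows the roughly 1/e success probability:

set.seed(123)
secretary <- function(r) {
  vals <- sample(r)                    # random arrival order; higher = better
  k <- floor(r / exp(1))               # observe-and-reject the first ~r/e candidates
  threshold <- max(vals[1:k])
  later <- vals[(k + 1):r]
  pick <- later[later > threshold][1]  # first candidate beating everyone seen so far
  if (is.na(pick)) pick <- vals[r]     # otherwise settle for the last one
  pick == r                            # did we get the very best candidate?
}
mean(replicate(10000, secretary(20)))  # roughly 1/e, about 0.37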
51,332
Conditional vs. Unconditional Maximum Likelihood
The main reason for using conditional maximum likelihood is the resulting distribution. $Y|X \sim N(x'\beta, \sigma^2_\epsilon)$ holds because, conditional on $X$, the variation in $Y$ comes only from the (normal) variation of the error term. As you know, correctly specifying the underlying density is a crucial point in ML estimation, and with unconditional ML you would run into problems here. Think about what the unconditional (marginal) distribution of $Y$ would look like if $X$ itself were random, say a skewed or two-point variable: $Y$ would then be a mixture of normals (one component per value of $X$), which is in general not normal anymore. The density that you write for the unconditional likelihood (example 1) is therefore wrong and, moreover, the variance of $Y$ is definitely not $\sigma^2$: if you do not condition on $X$, the variation of $X$ also has to be accounted for in the variation of $Y$, so unconditional ML would require the distribution of $X$ as well.
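A quick simulation makes the point about the marginal distribution of Y (the numbers are arbitrary):

set.seed(1)
n <- 1e5
x <- sample(c(-3, 3), n, replace = TRUE)   # a non-degenerate, non-normal X
y <- 2 * x + rnorm(n)                      # Y | X is normal with mean 2x
hist(y, breaks = 100)                      # the marginal of Y is a bimodal mixture, not normal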
51,333
Bandits with mixed reward processes?
Wow, super super late coming back here. I just wanted to leave here that this question led to a published paper. Not having an answer was quite a good hint :) https://ieeexplore.ieee.org/document/7390272
51,334
Should I control for random effects of participant in an individual differences design?
You're correct that the second model reduces the power to detect associations between the individual difference measure and the trial-level outcome by including a random intercept for each participant (including a random intercept is "what you are conceptually doing"). If that is the intention, the second model is specified correctly (1 random intercept for Stim, 1 random intercept for Participant, 1 fixed effect of Personality). In theory, including the random intercept by participant is necessary to preserve the assumption of independence (each participant has multiple measurements). However, I think I'm missing something, because the variability captured by the random intercept seems essential to understanding the fixed effect (i.e., individual difference). I know this is old but I'm running into the same question.
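In lme4 syntax, that specification would look something like the following (Outcome, Personality, Participant and Stim are placeholder names for the trial-level DV, the person-level predictor, and the two grouping factors):

library(lme4)
m <- lmer(Outcome ~ Personality + (1 | Participant) + (1 | Stim), data = d)
summary(m)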
51,335
Should I control for random effects of participant in an individual differences design?
As far as I know, random effects generally only absorb variance that is not explained by the fixed effects. That is, conceptually, the model explains as much of the variance in the dependent variable as it can with the fixed effects and then accounts for the remaining variance through the random effects (and residual error).
51,336
Interpreting Granger causality test's results – i.e. $X = f(Y)$
No. The null hypothesis would be: X does not Granger-cause Y (or the other way around, depending on which direction you are testing). Also, you reject or fail to reject the null hypothesis depending on the significance level: if the p-value is less than the significance level, the null hypothesis is rejected; if the p-value is greater than the significance level, the null hypothesis cannot be rejected.
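For example, with the lmtest package in R (d, X and Y are placeholders), the test of "X does not Granger-cause Y" is:

library(lmtest)
grangertest(Y ~ X, order = 2, data = d)   # H0: lags of X add no predictive power for Y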
51,337
Interpreting Granger causality test's results – i.e. $X = f(Y)$
For your specific example, since you are using an F test, the rejection of the null hypothesis works as follows: compare the obtained F value to the critical F value. If the F value is greater than the critical value, you can reject the null hypothesis. Equivalently, check the p-value: only if the p-value is less than the significance level (5% is widely used) can you reject your null hypothesis.
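The two checks are equivalent, as a small R illustration with made-up numbers shows:

F_value <- 4.8; df1 <- 2; df2 <- 30; alpha <- 0.05
F_value > qf(1 - alpha, df1, df2)                    # compare F to the critical value
pf(F_value, df1, df2, lower.tail = FALSE) < alpha    # compare the p-value to alpha; same answer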
51,338
fit and cross-validate categorical sample data formed from observations [closed]
A possible algorithm would be to use a hash table keyed on combinations of t1 and t2. Something I use frequently (and which is relatively simple) is using "t1_t2" as a string key into a map of successes. For your example the map would look something like this:

Hash map: key -> value (successes)
  1_4 -> 4
  1_8 -> 8
  2_4 -> 4
  2_8 -> 8
  ... etc.

The map holds each observed combination of (t1, t2) exactly once, and the value is the total number of successes for that combination. All combinations not present in the map can be assumed to have zero successes. This can be very efficient: a single pass through the data generates the map (in any language that supports hash maps), and the map can subsequently be used for any other computation on the data.

UPDATE: To compute an empirical pdf from the data using the hash map, one can do something like the following (for illustration purposes only; dividing by the grand total of successes makes the values sum to 1):

total = sum of all values in map
for t1 in 1..4            # the categories of t1
  for t2 in 1..4          # the categories of t2
    key = "t1_t2"
    if key in map:
      pdf(t1, t2) = map.get(key) / total
    else:
      pdf(t1, t2) = 0
    end
  end
end

UPDATE 2: If you also want to store the number of trials in the map, it can be done like this (the value is now an array instead of a single number):

Hash map: key -> value ([successes, ntrials])
  1_4 -> [4, 1000]
  1_8 -> [8, 999]
  2_4 -> [4, 100]
  2_8 -> [8, 1000]
  ... etc.
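In R, for instance, the whole aggregation step collapses to a couple of lines (dat, t1, t2 and successes are hypothetical column names):

agg <- aggregate(successes ~ t1 + t2, data = dat, FUN = sum)
agg$prop <- agg$successes / sum(agg$successes)   # normalized over the observed combinations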
51,339
Testing for linear dependence among the columns of a matrix
You seem to ask a really provoking question: how to detect, given a singular correlation (or covariance, or sum-of-squares-and-cross-product) matrix, which column is linearly dependent on which. I tentatively suppose that the sweep operation could help. Here is my probe in SPSS (not R) to illustrate. Let's generate some data:

     v1       v2       v3       v4       v5
-1.64454   .35119  -.06384 -1.05188   .25192
-1.78520  -.21598  1.20315   .40267  1.14790
 1.36357  -.96107  -.46651   .92889 -1.38072
 -.31455  -.74937  1.17505  1.27623 -1.04640
 -.31795   .85860   .10061   .00145   .39644
 -.97010   .19129  2.43890  -.83642  -.13250
 -.66439   .29267  1.20405   .90068 -1.78066
  .87025  -.89018  -.99386 -1.80001   .42768
-1.96219  -.27535   .58754   .34556   .12587
-1.03638  -.24645  -.11083   .07013  -.84446

Let's create some linear dependency between V2, V4 and V5:

compute V4 = .4*V2+1.2*V5.
execute.

So, we modified our column V4.

matrix.
get X.                /*take the data*/
compute M = sscp(X).  /*SSCP matrix, X'X; it is singular*/
print rank(M).        /*with rank 5-1=4, because there's 1 group of interdependent columns*/
loop i= 1 to 5.       /*Start iterative sweep operation on M from column 1 to column 5*/
-compute M = sweep(M,i).
-print M.             /*That's printout we want to trace*/
end loop.
end matrix.

The printouts of M in 5 iterations:

M
  .06660028  -.12645565  -.54275426  -.19692972  -.12195621
  .12645565  3.20350385  -.08946808  2.84946215  1.30671718
  .54275426  -.08946808  7.38023317 -3.51467361 -2.89907198
  .19692972  2.84946215 -3.51467361 13.88671851 10.62244471
  .12195621  1.30671718 -2.89907198 10.62244471  8.41646486

M
  .07159201   .03947417  -.54628594  -.08444957  -.07037464
  .03947417   .31215820  -.02792819   .88948298   .40790248
  .54628594   .02792819  7.37773449 -3.43509328 -2.86257773
  .08444957  -.88948298 -3.43509328 11.35217042  9.46014202
  .07037464  -.40790248 -2.86257773  9.46014202  7.88345168

M
  .112041875  .041542117  .074045215 -.338801789 -.282334825
  .041542117  .312263922  .003785470  .876479537  .397066281
  .074045215  .003785470  .135542964 -.465602725 -.388002270
  .338801789 -.876479537  .465602725 9.752781632 8.127318027
  .282334825 -.397066281  .388002270 8.127318027 6.772765022

M
  .1238115070  .0110941027  .0902197842  .0347389906  .0000000000
  .0110941027  .3910328733 -.0380581058 -.0898696977 -.3333333333
  .0902197842 -.0380581058  .1577710733  .0477405054  .0000000000
  .0347389906 -.0898696977  .0477405054  .1025348498  .8333333333
  .0000000000  .3333333333  .0000000000 -.8333333333  .0000000000

M
  .1238115070  .0110941027  .0902197842  .0347389906  .0000000000
  .0110941027  .3910328733 -.0380581058 -.0898696977  .0000000000
  .0902197842 -.0380581058  .1577710733  .0477405054  .0000000000
  .0347389906 -.0898696977  .0477405054  .1025348498  .0000000000
  .0000000000  .0000000000  .0000000000  .0000000000  .0000000000

Notice that eventually column 5 got full of zeros. This means (as I understand it) that V5 is linearly tied with some of the preceding columns. Which columns? Look at the iteration where column 5 is last not full of zeros - iteration 4. We see there that V5 is tied with V2 and V4 with coefficients -.3333 and .8333: V5 = -.3333*V2+.8333*V4, which corresponds to what we've done with the data: V4 = .4*V2+1.2*V5. That's how we knew which column is linearly tied with which other. I didn't check how helpful the above approach is in a more general case with many groups of interdependencies in the data. In the above example it appeared helpful, though.
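A rough NumPy sketch of the same idea, for readers without SPSS. It implements one common convention of the sweep operator (sign conventions differ between implementations, so only the magnitudes of the reported coefficients should be compared), and uses freshly simulated data with the same planted dependence rather than the values printed above.

import numpy as np

def sweep(A, k):
    # Sweep a symmetric matrix A on pivot k (one common sign convention).
    A = A.copy()
    d = A[k, k]
    B = A - np.outer(A[:, k], A[k, :]) / d
    B[:, k] = A[:, k] / d
    B[k, :] = A[k, :] / d
    B[k, k] = -1.0 / d
    return B

rng = np.random.default_rng(0)
X = rng.normal(size=(10, 5))
X[:, 3] = 0.4 * X[:, 1] + 1.2 * X[:, 4]   # V4 = .4*V2 + 1.2*V5, as above

M = X.T @ X                               # SSCP matrix
for i in range(4):                        # sweep columns 1..4
    M = sweep(M, i)

# After sweeping the first four columns, rows 1..4 of the last column hold the
# coefficients expressing V5 through V1..V4 (here roughly -1/3 * V2 + 5/6 * V4).
print(np.round(M[:4, 4], 4))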
51,340
Testing for linear dependence among the columns of a matrix
Here's a straightforward approach: compute the rank of the matrix that results from removing each of the columns. The columns which, when removed, result in the highest rank are the linearly dependent ones (since removing those does not decrease rank, while removing a linearly independent column does). In R:

rankifremoved <- sapply(1:ncol(your.matrix), function (x) qr(your.matrix[,-x])$rank)
which(rankifremoved == max(rankifremoved))
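A rough NumPy translation of the R snippet above, for comparison: drop each column in turn, recompute the rank, and flag the columns whose removal leaves the rank unchanged. The matrix `A` is a hypothetical stand-in with one planted dependency.

import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(50, 5))
A[:, 3] = A[:, 1] - 2 * A[:, 2]          # introduce one exact dependency

rank_if_removed = np.array([
    np.linalg.matrix_rank(np.delete(A, j, axis=1)) for j in range(A.shape[1])
])
suspects = np.where(rank_if_removed == rank_if_removed.max())[0]
print(rank_if_removed, suspects)         # columns 1, 2 and 3 are implicated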
51,341
Testing for linear dependence among the columns of a matrix
The question asks about "identifying underlying [linear] relationships" among variables. The quick and easy way to detect relationships is to regress any other variable (use a constant, even) against those variables using your favorite software: any good regression procedure will detect and diagnose collinearity. (You will not even bother to look at the regression results: we're just relying on a useful side-effect of setting up and analyzing the regression matrix.) Assuming collinearity is detected, though, what next? Principal Components Analysis (PCA) is exactly what is needed: its smallest components correspond to near-linear relations. These relations can be read directly off the "loadings," which are linear combinations of the original variables. Small loadings (that is, those associated with small eigenvalues) correspond to near-collinearities. An eigenvalue of $0$ would correspond to a perfect linear relation. Slightly larger eigenvalues that are still much smaller than the largest would correspond to approximate linear relations. (There is an art and quite a lot of literature associated with identifying what a "small" loading is. For modeling a dependent variable, I would suggest including it within the independent variables in the PCA in order to identify the components--regardless of their sizes--in which the dependent variable plays an important role. From this point of view, "small" means much smaller than any such component.) Let's look at some examples. (These use R for the calculations and plotting.) Begin with a function to perform PCA, look for small components, plot them, and return the linear relations among them.

pca <- function(x, threshold, ...) {
  fit <- princomp(x)
  #
  # Compute the relations among "small" components.
  #
  if(missing(threshold)) threshold <- max(fit$sdev) / ncol(x)
  i <- which(fit$sdev < threshold)
  relations <- fit$loadings[, i, drop=FALSE]
  relations <- round(t(t(relations) / apply(relations, 2, max)), digits=2)
  #
  # Plot the loadings, highlighting those for the small components.
  #
  matplot(x, pch=1, cex=.8, col="Gray", xlab="Observation", ylab="Value", ...)
  suppressWarnings(matplot(x %*% relations, pch=19, col="#e0404080", add=TRUE))
  return(t(relations))
}

Let's apply this to some random data. These are built on four variables (the $B,C,D,$ and $E$ of the question). Here is a little function to compute $A$ as a given linear combination of the others. It then adds i.i.d. Normally-distributed values to all five variables (to see how well the procedure performs when multicollinearity is only approximate and not exact).

process <- function(z, beta, sd, ...) {
  x <- z %*% beta; colnames(x) <- "A"
  pca(cbind(x, z + rnorm(length(x), sd=sd)), ...)
}

We're all set to go: it remains only to generate $B, \ldots, E$ and apply these procedures. I use the two scenarios described in the question: $A=B+C+D+E$ (plus some error in each) and $A=B+(C+D)/2+E$ (plus some error in each). First, however, note that PCA is almost always applied to centered data, so these simulated data are centered (but not otherwise rescaled) using sweep.

n.obs <- 80   # Number of cases
n.vars <- 4   # Number of independent variables
set.seed(17)
z <- matrix(rnorm(n.obs*(n.vars)), ncol=n.vars)
z.mean <- apply(z, 2, mean)
z <- sweep(z, 2, z.mean)
colnames(z) <- c("B","C","D","E") # Optional; modify to match `n.vars` in length

Here we go with two scenarios and three levels of error applied to each.
The original variables $B, \ldots, E$ are retained throughout without change: only $A$ and the error terms vary. The output associated with the upper left panel was

        A  B  C  D  E
Comp.5  1 -1 -1 -1 -1

This says that the row of red dots--which is constantly at $0$, demonstrating a perfect multicollinearity--consists of the combination $0 \approx A -B-C-D-E$: exactly what was specified. The output for the upper middle panel was

        A     B     C     D     E
Comp.5  1 -0.95 -1.03 -0.98 -1.02

The coefficients are still close to what we expected, but they are not quite the same due to the error introduced. It thickened the four-dimensional hyperplane within the five-dimensional space implied by $(A,B,C,D,E)$ and that tilted the estimated direction just a little. With more error, the thickening becomes comparable to the original spread of the points, making the hyperplane almost impossible to estimate. Now (in the upper right panel) the coefficients are

        A     B     C     D     E
Comp.5  1 -1.33 -0.77 -0.74 -1.07

They have changed quite a bit but still reflect the basic underlying relationship $A' = B' + C' + D' + E'$ where the primes denote the values with the (unknown) error removed. The bottom row is interpreted the same way and its output similarly reflects the coefficients $1, 1/2, 1/2, 1$. In practice, it is often not the case that one variable is singled out as an obvious combination of the others: all coefficients may be of comparable sizes and of varying signs. Moreover, when there is more than one dimension of relations, there is no unique way to specify them: further analysis (such as row reduction) is needed to identify a useful basis for those relations. That's how the world works: all you can say is that these particular combinations that are output by PCA correspond to almost no variation in the data. To cope with this, some people use the largest ("principal") components directly as the independent variables in the regression or the subsequent analysis, whatever form it might take. If you do this, do not forget first to remove the dependent variable from the set of variables and redo the PCA! Here is the code to reproduce this figure:

par(mfrow=c(2,3))
beta <- c(1,1,1,1) # Also can be a matrix with `n.obs` rows: try it!
process(z, beta, sd=0, main="A=B+C+D+E; No error")
process(z, beta, sd=1/10, main="A=B+C+D+E; Small error")
process(z, beta, sd=1/3, threshold=2/3, main="A=B+C+D+E; Large error")
beta <- c(1,1/2,1/2,1)
process(z, beta, sd=0, main="A=B+(C+D)/2+E; No error")
process(z, beta, sd=1/10, main="A=B+(C+D)/2+E; Small error")
process(z, beta, sd=1/3, threshold=2/3, main="A=B+(C+D)/2+E; Large error")

(I had to fiddle with the threshold in the large-error cases in order to display just a single component: that's the reason for supplying this value as a parameter to process.) User ttnphns has kindly directed our attention to a closely related thread. One of its answers (by J.M.) suggests the approach described here.
51,342
Testing for linear dependence among the columns of a matrix
What I'd try to do here for diagnostic purposes is to take the $502\times 480$ matrix (that is, the transpose) and determine the singular values of the matrix (for diagnostic purposes, you don't need the full singular value decomposition... yet). Once you have the 480 singular values, check how many of those are "small" (a usual criterion is that a singular value is "small" if it is less than the largest singular value times the machine precision). If there are any "small" singular values, then yes, you have linear dependence.
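A small NumPy sketch of the check described above: compute the singular values and count how many are "small" relative to the largest. The matrix `A` here is a hypothetical stand-in for the 502 x 480 data, with one exact dependence planted.

import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(502, 480))
A[:, 7] = A[:, 3] + A[:, 5]            # plant an exact linear dependence

s = np.linalg.svd(A, compute_uv=False)
# The criterion above: "small" means below the largest singular value times machine
# precision; in practice a factor of max(m, n) is often included, as in
# numpy.linalg.matrix_rank's default tolerance.
tol = max(A.shape) * np.finfo(A.dtype).eps * s.max()
print(np.sum(s < tol))                 # > 0 means there is linear dependence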
51,343
Testing for linear dependence among the columns of a matrix
Not that the answer @Whuber gave really needs to be expanded on, but I thought I'd provide a brief description of the math. If $\mathbf{X'Xv}=\mathbf{0}$ for $\mathbf{v}\neq\mathbf{0}$, then $\mathbf{v}$ is an eigenvector of $\mathbf{X'X}$ associated with eigenvalue $\lambda=0$. The eigenvectors of $\mathbf{X'X}$ are the right singular vectors of $\mathbf{X}$ (and its eigenvalues are the squared singular values of $\mathbf{X}$), so the eigenvectors of $\mathbf{X'X}$ associated with eigenvalues near $\lambda=0$ represent the coefficients of approximate linear relationships among the regressors. Principal Component Analysis outputs the eigenvectors and eigenvalues of $\mathbf{X'X}$, so you can use the eigenvectors $\mathbf{v}$ associated with small $\lambda$ to determine if linear relationships exist among some of your regressors. One method of determining whether an eigenvalue is small enough to constitute collinearity is to use the Condition Indices: $$ \kappa_j=\frac{\lambda_{max}}{\lambda_j} $$ which measure how small each eigenvalue is relative to the largest. A general rule of thumb is that modest multicollinearity is associated with a condition index between 100 and 1,000 while severe multicollinearity is associated with a condition index above 1,000 (Montgomery, 2009). It's important to use an appropriate method for determining if an eigenvalue is small because it's not the absolute size of the eigenvalues but the relative size of the condition index that's important, as can be seen in an example. Consider the matrix $$ \mathbf{X'X}=\left[\begin{array}{rrr} 0.001 & 0 & 0 \\ 0 & 0.001 & 0 \\ 0 & 0 & 0.001 \\ \end{array} \right]. $$ The eigenvalues for this matrix are $\lambda_1=\lambda_2=\lambda_3=0.001$. Although these eigenvalues appear small, the condition index is $$ \kappa=\frac{\lambda_{max}}{\lambda_{min}}=1, $$ indicating absence of multicollinearity and, in fact, the columns of this matrix are linearly independent. Citations: Montgomery, D. (2012). Introduction to Linear Regression Analysis, 5th Edition. John Wiley & Sons Inc.
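A brief NumPy illustration of the condition indices described above, using a hypothetical design matrix with one near-dependency among its columns.

import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 4))
X[:, 3] = X[:, 0] + X[:, 1] + 0.001 * rng.normal(size=200)  # near-collinear column

eigvals, eigvecs = np.linalg.eigh(X.T @ X)
condition_indices = eigvals.max() / eigvals
print(np.round(condition_indices, 1))
# The eigenvector paired with the largest condition index gives the coefficients
# of the (approximate) linear relation among the columns.
print(np.round(eigvecs[:, np.argmin(eigvals)], 2))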
51,344
Testing for linear dependence among the columns of a matrix
I ran into this issue roughly two weeks ago and decided that I needed to revisit it because when dealing with massive data sets, it is impossible to do these things manually. I created a for() loop that calculates the rank of the matrix one column at a time. So for the first iteration, the rank will be 1. The second, 2. This occurs until the rank becomes LESS than the column number you are using. Very straightforward:

for (i in 1:47) {
  print(qr(data.frame[1:i])$rank)
  print(i)
  print(colnames(data.frame)[i])
  print("###")
}

for() loop breakdown:
- calculates the rank of the first i columns
- prints the iteration number
- prints the column name for reference
- divides the console with "###" so that you can easily scroll through

I am sure that you can add an if statement, I don't need it yet because I am only dealing with 50ish columns. Hope this helps!
51,345
Testing for linear dependence among the columns of a matrix
Rank, r of a matrix = number of linearly independent columns (or rows) of a matrix. For a n by n matrix A, rank(A) = n => all columns (or rows) are linearly independent.
51,346
Needle-in-a-haystack Regularized Regression
Ultimately, I ended up abandoning regularized approaches, as they are simply too biased for unbalanced categorical/binary features. I've actually grown quite skeptical of regularization in general, at least in my problem domain. Instead, I went with OLS using stepwise feature selection with k-fold cross-validation. I had to implement some nontrivial machinery to make this work at scale, as the full $n \times k$ matrix doesn't fit in memory.
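The answer does not show the machinery it mentions, so here is only a rough sketch (not the author's actual implementation) of one standard way to fit OLS when the full $n \times k$ design matrix does not fit in memory: accumulate X'X and X'y over chunks of rows, then solve the normal equations once. The `stream_chunks` argument is a hypothetical generator yielding (X_chunk, y_chunk) pairs read from disk.

import numpy as np

def fit_ols_out_of_core(stream_chunks, k):
    xtx = np.zeros((k, k))
    xty = np.zeros(k)
    for X_chunk, y_chunk in stream_chunks:
        xtx += X_chunk.T @ X_chunk       # k x k accumulator stays small
        xty += X_chunk.T @ y_chunk
    # lstsq on the accumulated normal equations tolerates rank deficiency.
    beta, *_ = np.linalg.lstsq(xtx, xty, rcond=None)
    return beta

# Example with simulated chunks standing in for data read from disk.
rng = np.random.default_rng(4)
chunks = [(rng.normal(size=(1000, 3)), rng.normal(size=1000)) for _ in range(5)]
print(fit_ols_out_of_core(iter(chunks), k=3))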
51,347
Confidence interval of RMSE
I might be able to give an answer to your question under certain conditions. Let $x_{i}$ be your true value for the $i^{th}$ data point and $\hat{x}_{i}$ the estimated value. If we assume that the differences between the estimated and true values have mean zero (i.e. the $\hat{x}_{i}$ are distributed around $x_{i}$), follow a Normal distribution, and all have the same standard deviation $\sigma$, in short: $$\hat{x}_{i}-x_{i} \sim \mathcal{N}\left(0,\sigma^{2}\right),$$ then you really want a confidence interval for $\sigma$. If the above assumptions hold true, $$\frac{n\mbox{RMSE}^{2}}{\sigma^{2}} = \frac{n\frac{1}{n}\sum_{i}\left(\hat{x_{i}}-x_{i}\right)^{2}}{\sigma^{2}}$$ follows a $\chi_{n}^{2}$ distribution with $n$ (not $n-1$) degrees of freedom. This means \begin{align} P\left(\chi_{\frac{\alpha}{2},n}^{2}\le\frac{n\mbox{RMSE}^{2}}{\sigma^{2}}\le\chi_{1-\frac{\alpha}{2},n}^{2}\right) = 1-\alpha\\ \Leftrightarrow P\left(\frac{n\mbox{RMSE}^{2}}{\chi_{1-\frac{\alpha}{2},n}^{2}}\le\sigma^{2}\le\frac{n\mbox{RMSE}^{2}}{\chi_{\frac{\alpha}{2},n}^{2}}\right) = 1-\alpha\\ \Leftrightarrow P\left(\sqrt{\frac{n}{\chi_{1-\frac{\alpha}{2},n}^{2}}}\mbox{RMSE}\le\sigma\le\sqrt{\frac{n}{\chi_{\frac{\alpha}{2},n}^{2}}}\mbox{RMSE}\right) = 1-\alpha. \end{align} Therefore, $$\left[\sqrt{\frac{n}{\chi_{1-\frac{\alpha}{2},n}^{2}}}\mbox{RMSE},\sqrt{\frac{n}{\chi_{\frac{\alpha}{2},n}^{2}}}\mbox{RMSE}\right]$$ is your confidence interval. Here is a Python program that simulates your situation:

from scipy import stats
from numpy import *

s = 3
n = 10
c1, c2 = stats.chi2.ppf([0.025, 1-0.025], n)
y = zeros(50000)
for i in range(len(y)):
    y[i] = sqrt(mean((random.randn(n)*s)**2))
print("1-alpha=%.2f" % mean((sqrt(n/c2)*y < s) & (sqrt(n/c1)*y > s)))

Hope that helps. If you are not sure whether the assumptions apply or if you want to compare what I wrote to a different method, you could always try bootstrapping.
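A small sketch of the bootstrap alternative mentioned at the end: resample the errors with replacement and take percentiles of the recomputed RMSE. The `errors` array is a hypothetical stand-in for real residuals.

import numpy as np

rng = np.random.default_rng(5)
errors = rng.normal(scale=3.0, size=100)          # stand-in for real residuals

boot_rmse = np.array([
    np.sqrt(np.mean(rng.choice(errors, size=errors.size, replace=True) ** 2))
    for _ in range(5000)
])
lo, hi = np.percentile(boot_rmse, [2.5, 97.5])
print(lo, hi)                                     # percentile bootstrap interval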
51,348
Confidence interval of RMSE
The reasoning in the answer by fabee seems correct if applied to the STDE (standard deviation of the error), not the RMSE. Using similar nomenclature, $i=1,\,\ldots,\,n$ is an index representing each record of data, $x_i$ is the true value and $\hat{x}_i$ is a measurement or prediction. The error $\epsilon_i$, BIAS, MSE (mean squared error) and RMSE are given by: $$ \epsilon_i = \hat{x}_i-x_i\,,\\ \text{BIAS} = \overline{\epsilon} = \frac{1}{n}\sum_{i=1}^{n}\epsilon_i\,,\\ \text{MSE} = \overline{\epsilon^2} = \frac{1}{n}\sum_{i=1}^{n}\epsilon_i^2\,,\\ \text{RMSE} = \sqrt{\text{MSE}}\,. $$ Agreeing on these definitions, the BIAS corresponds to the sample mean of $\epsilon$, but MSE is not the biased sample variance. Instead: $$ \text{STDE}^2 = \overline{(\epsilon-\overline{\epsilon})^2} = \frac{1}{n}\sum_{i=1}^{n}(\epsilon_i-\overline{\epsilon})^2\,, $$ or, if both BIAS and RMSE were computed, $$ \text{STDE}^2 = \overline{(\epsilon-\overline{\epsilon})^2}=\overline{\epsilon^2}-\overline{\epsilon}^2 = \text{RMSE}^2 - \text{BIAS}^2\,. $$ Note that the biased sample variance is being used instead of the unbiased, to keep consistency with the previous definitions given for the MSE and RMSE. Thus, in my opinion the confidence intervals established by fabee refer to the sample standard deviation of $\epsilon$, STDE. Similarly, confidence intervals may be established for the BIAS based on the z-score (or t-score if $n<30$) and $\left.\text{STDE}\middle/\sqrt{n}\right.$.
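A quick numerical check of the decomposition above, RMSE^2 = BIAS^2 + STDE^2 (with the biased, 1/n variance), using hypothetical errors.

import numpy as np

rng = np.random.default_rng(6)
eps = rng.normal(loc=0.5, scale=2.0, size=1000)   # errors with a nonzero bias

bias = eps.mean()
rmse = np.sqrt(np.mean(eps ** 2))
stde = eps.std(ddof=0)                            # biased sample standard deviation
print(np.isclose(rmse ** 2, bias ** 2 + stde ** 2))   # True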
51,349
Confidence interval of RMSE
Following Faaber 1999, the uncertainty of RMSE is given as $$\sigma (\hat{RMSE})/RMSE = \sqrt{\frac{1}{2n}}$$ where $n$ is the number of datapoints.
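A tiny sketch of one way this relative uncertainty is commonly turned into an approximate interval, assuming the RMSE estimate is roughly normally distributed; the RMSE value and n are hypothetical.

import numpy as np

rmse, n = 0.3, 1000
rel_sd = np.sqrt(1.0 / (2 * n))                  # sigma(RMSE) / RMSE
lo, hi = rmse * (1 - 1.96 * rel_sd), rmse * (1 + 1.96 * rel_sd)
print(lo, hi)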
51,350
Confidence interval of RMSE
Borrowing code from @Bryan Shalloway's link (https://gist.github.com/brshallo/7eed49c743ac165ced2294a70e73e65e, which is in the comment in the accepted answer), you can calculate this in R with the RMSE value and the degrees of freedom, which @fabee suggests is n (not n-1) in this case. The R function:

library(tibble)  # the helper returns a tibble

rmse_interval <- function(rmse, deg_free, p_lower = 0.025, p_upper = 0.975){
  tibble(.pred_lower = sqrt(deg_free / qchisq(p_upper, df = deg_free)) * rmse,
         .pred_upper = sqrt(deg_free / qchisq(p_lower, df = deg_free)) * rmse)
}

A practical example: if I had an RMSE value of 0.3 and 1000 samples were used to calculate that value, I can then do rmse_interval(0.3, 1000), which would return:

# A tibble: 1 x 2
  .pred_lower .pred_upper
        <dbl>       <dbl>
1       0.287       0.314
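A rough Python equivalent of the R helper above, using SciPy's chi-square quantiles with the same n (not n-1) degrees of freedom.

from scipy import stats
import numpy as np

def rmse_interval(rmse, deg_free, p_lower=0.025, p_upper=0.975):
    lower = np.sqrt(deg_free / stats.chi2.ppf(p_upper, df=deg_free)) * rmse
    upper = np.sqrt(deg_free / stats.chi2.ppf(p_lower, df=deg_free)) * rmse
    return lower, upper

print(rmse_interval(0.3, 1000))   # roughly (0.287, 0.314)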
51,351
Coding for an ordered covariate
You could check out Gertheiss & Tutz, Penalized Regression with Ordinal Predictors, & their R package ordPens. They say:– Rather than estimating the parameters by simple maximum likelihood methods we propose to penalize differences between coefficients of adjacent categories in the estimation procedure. The rationale behind is as follows: the response $y$ is assumed to change slowly between two adjacent categories of the independent variable. In other words, we try to avoid high jumps and prefer a smoother coefficient vector.
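A rough NumPy illustration of the idea quoted above, not the ordPens API: dummy-code an ordinal predictor and penalize squared differences between the coefficients of adjacent categories, so the fitted coefficient vector is smooth across the ordered levels. The data and penalty weight are hypothetical.

import numpy as np

rng = np.random.default_rng(7)
n, k, lam = 300, 6, 5.0
z = rng.integers(0, k, size=n)                   # ordinal predictor with k levels
true_alpha = np.array([0.0, 0.1, 0.2, 1.0, 1.1, 1.2])
y = true_alpha[z] + rng.normal(scale=0.5, size=n)

X = np.eye(k)[z]                                 # one dummy column per level
D = np.diff(np.eye(k), axis=0)                   # first-difference matrix (k-1 x k)

# Penalized least squares: minimize ||y - X a||^2 + lam * ||D a||^2
alpha_hat = np.linalg.solve(X.T @ X + lam * D.T @ D, X.T @ y)
print(np.round(alpha_hat, 2))                    # adjacent levels pulled together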
51,352
What does it imply when an estimate is not inside its 95% confidence interval? [closed]
Keep in mind that a 95% confidence interval will NOT contain the true parameter value 5% of the time if all of the assumptions are valid. Either your model is valid and you are experiencing the 5%, or your model is invalid and you need to check the assumptions.
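A tiny simulation of the point above: when the model assumptions hold, roughly 5% of nominal 95% confidence intervals (here, t intervals for a mean) miss the true value.

import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
n_sim, n, true_mean = 10000, 30, 10.0
misses = 0
for _ in range(n_sim):
    x = rng.normal(loc=true_mean, scale=2.0, size=n)
    half_width = stats.t.ppf(0.975, n - 1) * x.std(ddof=1) / np.sqrt(n)
    if abs(x.mean() - true_mean) > half_width:
        misses += 1
print(misses / n_sim)   # close to 0.05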
51,353
Question about classification with hidden Markov models using depmixS4 [closed]
You must use the forward function to calculate the likelihood: run the forward recursion from the first step to the last, and the likelihood is the sum of the forward probabilities over the states at the last step, i.e. sum(forward_prob(T)).
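A minimal NumPy sketch of the forward algorithm this refers to: the likelihood of an observation sequence is the sum of the forward probabilities at the last time step. All parameter values below are hypothetical toy values, not related to the original model.

import numpy as np

pi = np.array([0.6, 0.4])                    # initial state distribution
A = np.array([[0.7, 0.3],                    # state transition matrix
              [0.4, 0.6]])
B = np.array([[0.5, 0.4, 0.1],               # emission probabilities (state x symbol)
              [0.1, 0.3, 0.6]])
obs = [0, 2, 1, 1]                           # observed symbol indices

alpha = pi * B[:, obs[0]]                    # forward probabilities at t = 1
for o in obs[1:]:
    alpha = (alpha @ A) * B[:, o]            # recursion: alpha_t = (alpha_{t-1} A) * b(o_t)
likelihood = alpha.sum()                     # sum over states at the final step
print(likelihood)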
51,354
Principled way of collapsing categorical variables with many levels?
If I understood correctly, you imagine a linear model where one of the predictors is categorical (e.g. college major); and you expect that for some subgroups of its levels (subgroups of categories) the coefficients might be exactly the same. So perhaps the regression coefficients for Maths and Physics are the same, but different from those for Chemistry and Biology. In a simplest case, you would have a "one way ANOVA" linear model with a single categorical predictor: $$y_{ij} = \mu + \alpha_i + \epsilon_{ij},$$ where $i$ encodes the level of the categorical variable (the category). But you might prefer a solution that collapses some levels (categories) together, e.g. $$\begin{cases}\alpha_1=\alpha_2, \\ \alpha_3=\alpha_4=\alpha_5.\end{cases}$$ This suggests that one can try to use a regularization penalty that would penalize solutions with differing alphas. One penalty term that immediately comes to mind is $$L=\omega \sum_{i<j}|\alpha_i-\alpha_j|.$$ This resembles lasso and should enforce sparsity of the $\alpha_i-\alpha_j$ differences, which is exactly what you want: you want many of them to be zero. Regularization parameter $\omega$ should be selected with cross-validation. I have never dealt with models like that and the above is the first thing that came to my mind. Then I decided to see if there is something like that implemented. I made some google searches and soon realized that this is called fusion of categories; searching for lasso fusion categorical will give you a lot of references to read. Here are a few that I briefly looked at: Gerhard Tutz, Regression for Categorical Data, see pp. 175-175 in Google Books. Tutz mentions the following four papers: Land and Friedman, 1997, Variable fusion: a new adaptive signal regression method Bondell and Reich, 2009, Simultaneous factor selection and collapsing levels in ANOVA Gertheiss and Tutz, 2010, Sparse modeling of categorial explanatory variables Tibshirani et al. 2005, Sparsity and smoothness via the fused lasso is somewhat relevant even if not exactly the same (it is about ordinal variables) Gertheiss and Tutz 2010, published in the Annals of Applied Statistics, looks like a recent and very readable paper that contains other references. Here is its abstract: Shrinking methods in regression analysis are usually designed for metric predictors. In this article, however, shrinkage methods for categorial predictors are proposed. As an application we consider data from the Munich rent standard, where, for example, urban districts are treated as a categorial predictor. If independent variables are categorial, some modifications to usual shrinking procedures are necessary. Two $L_1$-penalty based methods for factor selection and clustering of categories are presented and investigated. The first approach is designed for nominal scale levels, the second one for ordinal predictors. Besides applying them to the Munich rent standard, methods are illustrated and compared in simulation studies. I like their Lasso-like solution paths that show how levels of two categorical variables get merged together when regularization strength increases:
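A rough CVXPY sketch of the all-pairs penalty proposed above, $L=\omega \sum_{i<j}|\alpha_i-\alpha_j|$, fitted on simulated data. This is only an illustration of the idea, not code from any of the cited packages; the data, the grouping of levels, and the value of omega are all hypothetical.

import numpy as np
import cvxpy as cp

rng = np.random.default_rng(9)
n, k, omega = 400, 5, 2.0
z = rng.integers(0, k, size=n)
true_alpha = np.array([0.0, 0.0, 1.0, 1.0, 1.0])       # levels 1-2 and 3-5 share effects
y = true_alpha[z] + rng.normal(scale=0.5, size=n)
X = np.eye(k)[z]                                       # dummy coding, no intercept

# Matrix whose rows give all pairwise differences a_i - a_j, i < j.
pairs = [(i, j) for i in range(k) for j in range(i + 1, k)]
D = np.zeros((len(pairs), k))
for row, (i, j) in enumerate(pairs):
    D[row, i], D[row, j] = 1.0, -1.0

a = cp.Variable(k)
objective = cp.Minimize(cp.sum_squares(y - X @ a) + omega * cp.norm1(D @ a))
cp.Problem(objective).solve()
print(np.round(a.value, 2))                            # fused groups of coefficients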
51,355
Principled way of collapsing categorical variables with many levels?
I've wrestled with this on a project I've been working on, and at this point I've decided there really isn't a good way to fuse categories and so I'm trying a hierarchical/mixed-effects model where my equivalent of your major is a random effect. Also, in situations like this there seem to actually be two fusing decisions to make: 1) how to fuse the categories you have when you fit the model, and 2) what fused category becomes "other" where you will by default include any new majors that someone dreams up after you fit your model. (A random effect can handle this second case automatically.) When the fusing has any judgement involved (as opposed to totally automated procedures), I'm skeptical of the "other" category which is often a grab bag of the categories with few things in them rather than any kind of principled grouping. A random effect handles a lot of levels, dynamically pools ("draws strength from") different levels, can predict previously-unseen levels, etc. One downside might be that the distribution of the levels is almost always assumed to be normal.
51,356
Principled way of collapsing categorical variables with many levels?
If you have an auxiliary independent variable that is logical to use as an anchor for the categorical predictor, consider the use of Fisher's optimum scoring algorithm, which is related to his linear discriminant analysis. Suppose that you wanted to map the college major into a single continuous metric, and suppose that a proper anchor is a pre-admission SAT quantitative test score. Compute the mean quantitative score for each major and replace the major with that mean. You can readily extend this to multiple anchors, creating more than one degree of freedom with which to summarize major. Note that unlike some of the earlier suggestions, optimum scoring represents an unsupervised learning approach, so the degrees of freedom (number of parameters estimated against Y) are few and well defined, resulting in proper statistical inference (if frequentist: accurate standard errors, confidence (compatibility) intervals, and p-values). I do very much like the penalization suggestion by @amoeba (https://stats.stackexchange.com/users/28666/amoeba).
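A hedged sketch of the group-mean replacement step described above, with invented column names (sat_quant as the anchor, major as the factor, y as the outcome, students as the data frame); it only illustrates the anchoring idea, not the full optimum-scoring machinery:

    # mean anchor score within each major, then substitute it for the factor
    anchor_means <- tapply(students$sat_quant, students$major, mean, na.rm = TRUE)
    students$major_score <- anchor_means[as.character(students$major)]
    fit <- lm(y ~ major_score, data = students)   # one degree of freedom for 'major'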
51,357
Principled way of collapsing categorical variables with many levels?
One way to handle this situation is to recode the categorical variable into a continuous one, using what is known as "target coding" (aka "impact coding") [1]. Let $Z$ be an input variable with categorical levels $\{z^1, \ldots, z^K\}$, and let $Y$ be the output/target/response variable. Replace $Z$ with $\operatorname{Impact}\left(Z\right)$, where $$ \operatorname{Impact}\left(z^k\right) = \operatorname{E}\left(Y\ |\ Z = z^k\right) - \operatorname{E}\left(Y\right) $$ for a continuous-valued $Y$. For binary-valued $Y$, use $\operatorname{logit} \circ \operatorname{E}$ instead of just $\operatorname{E}$. There is a Python implementation in the category_encoders library [2]. A variant called "impact coding" has been implemented in the R package vtreat [3][4]. The package (and impact coding itself) is described in a 2016 article by its authors [5], and in several blog posts [6]. Note that the current R implementation does not handle multinomial (categorical with more than 2 categories) or multivariate (vector-valued) responses.
[1] Daniele Micci-Barreca (2001). A Preprocessing Scheme for High-Cardinality Categorical Attributes in Classification and Prediction Problems. ACM SIGKDD Explorations Newsletter, Volume 3, Issue 1, July 2001, Pages 27-32. https://doi.org/10.1145/507533.507538
[2] Category Encoders. http://contrib.scikit-learn.org/categorical-encoding/index.html
[3] John Mount and Nina Zumel (2017). vtreat: A Statistically Sound 'data.frame' Processor/Conditioner. R package version 0.5.32. https://CRAN.R-project.org/package=vtreat
[4] Win-Vector (2017). vtreat. GitHub repository at https://github.com/WinVector/vtreat
[5] Zumel, Nina and Mount, John (2016). vtreat: a data.frame Processor for Predictive Modeling. arXiv:1611.09477v3. Available at https://arxiv.org/abs/1611.09477v3
[6] http://www.win-vector.com/blog/tag/vtreat/
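A bare-bones R version of the impact-coding formula above for a continuous Y, with hypothetical column names (y, z) and hypothetical train/test data frames; in practice the vtreat and category_encoders implementations cited above are preferable because they also handle smoothing and out-of-sample encoding to limit target leakage:

    global_mean <- mean(train$y)
    level_means <- tapply(train$y, train$z, mean)
    impact      <- level_means - global_mean           # E(Y | Z = z) - E(Y) per level
    train$z_impact <- impact[as.character(train$z)]
    test$z_impact  <- impact[as.character(test$z)]     # unseen levels become NA
    test$z_impact[is.na(test$z_impact)] <- 0           # i.e. fall back to the global mean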
51,358
Principled way of collapsing categorical variables with many levels?
There are multiple questions here, and some of them have been asked and answered earlier. If the problem is computation taking a long time: there are multiple methods to deal with that, see large scale regression with sparse feature matrix and the paper by Maechler and Bates. But it might well be that the problem is with the modeling. I am not so sure that the usual methods of treating categorical predictor variables really give sufficient guidance when you have categorical variables with very many levels; see this site for the tag [many-categories]. There are certainly many things one could try. One (whether it is a good idea for your example I cannot know, since you didn't tell us your specific application) is a kind of hierarchical categorical variable(s), inspired by the system used in biological classification, see https://en.wikipedia.org/wiki/Taxonomy_(biology). There, an individual (plant or animal) is classified first to Domain, then Kingdom, Phylum, Class, Order, Family, Genus and finally Species. So for each level in the classification you could create a factor variable. If your levels are, say, products sold in a supermarket, you could create a hierarchical classification starting with [foodstuff, kitchenware, other], then foodstuff could be classified as [meat, fish, vegetables, cereals, ...] and so on (a rough sketch of this idea follows below). This is just a possibility, and it gives a prior hierarchy, not one specifically related to the outcome. But you said: I care about producing higher-level categories that are coherent with respect to my regression outcome. Then you could try the fused lasso, see other answers in this thread, which could be seen as a way of collapsing the levels into larger groups based entirely on the data, not on a prior organization of the levels as implied by my proposal of a hierarchical organization.
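As an illustration of the hierarchical idea (everything here, including the lookup table and column names, is invented), you can carry the hierarchy as extra factor columns and compare models fitted at coarser versus finer levels:

    # map each low-level category to hand-made coarser levels
    lookup <- data.frame(product = c("apple", "cod", "saucepan"),
                         group   = c("fruit_veg", "fish", "kitchenware"),
                         top     = c("foodstuff", "foodstuff", "other"))
    dat <- merge(dat, lookup, by = "product", all.x = TRUE)
    fit_coarse <- lm(y ~ top,   data = dat)   # few parameters
    fit_fine   <- lm(y ~ group, data = dat)   # more parameters, nested within 'top'
    anova(fit_coarse, fit_fine)               # does the finer split add anything?

Because each group falls inside exactly one top category, the two models are nested and the F-test is a legitimate comparison.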
51,359
Principled way of collapsing categorical variables with many levels?
The paper "A preprocessing scheme for high-cardinality categorical attributes in classification and prediction problems" leverages hierarchical structure in the category attributes in a nested 'empirical Bayes' scheme at every pool/level to map the categorical variable into a posterior class probability, which can be used directly or as an input into other models.
51,360
Optimizing matching of players in a tournament round
I haven't found an algorithm that guarantees the highest total value for the pairing selections, but I have found one that guarantees the lowest total value, so we can modify the original matrix to fit that algorithm. Basically, we just subtract all of the match quality scores from 1.0 and assign a value of 2.0 along the diagonal (to ensure a player never gets paired with themself). When we do that, we get something like this:

              Player1   Player2   Player3   Player4
    Player1 |  2.000     0.179     0.766     0.445
    Player2 |  0.179     2.000     0.167     0.358
    Player3 |  0.766     0.167     2.000     0.257
    Player4 |  0.445     0.358     0.257     2.000

As mentioned, our goal now is to select the pairings with the LOWEST overall score, not the highest, and that problem can be solved using the Hungarian Algorithm. That algorithm guarantees finding an optimal set of lowest-value pairings, and because we subtracted each match quality score from 1.0, it gives us the pairings with the highest summed match quality scores, which was the original question. Note: when considering whether we're done at each step of the algorithm, we have to ignore the duplicate symmetrical values, which complicates applying the algorithm a bit.
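Not the Hungarian algorithm itself, but for small fields you can cross-check it by brute force in R: enumerate every way to split the players into pairs and keep the pairing with the highest summed quality. The quality matrix below is back-derived from the transformed matrix above (quality = 1 minus the shown off-diagonal values), and the sketch assumes an even number of players:

    best_pairing <- function(q) {
      best <- list(score = -Inf, pairs = NULL)
      recurse <- function(players, pairs, score) {
        if (length(players) == 0) {                  # everyone is paired: record if best so far
          if (score > best$score) best <<- list(score = score, pairs = pairs)
          return(invisible(NULL))
        }
        p1 <- players[1]
        for (p2 in players[-1]) {                    # try pairing p1 with each remaining player
          recurse(setdiff(players, c(p1, p2)),
                  rbind(pairs, c(p1, p2)), score + q[p1, p2])
        }
      }
      recurse(seq_len(nrow(q)), NULL, 0)
      best
    }
    q <- matrix(c(0.000, 0.821, 0.234, 0.555,
                  0.821, 0.000, 0.833, 0.642,
                  0.234, 0.833, 0.000, 0.743,
                  0.555, 0.642, 0.743, 0.000), nrow = 4, byrow = TRUE)
    best_pairing(q)   # highest-total-quality set of pairs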
51,361
Central limit theorem with unknown variance
Perhaps you can bound your variance. Suppose, for example, that you know your data must be in the range $[a,b]$. Then Popoviciu's inequality bounds your variance by $\sigma^2 \le (1/4)(b-a)^2$. Using the upper bound in the formulas you found will be a bit of overkill, but it should satisfy your requirements.
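A tiny numerical illustration in R, under the stated assumption that the data are known to lie in $[a,b]$ (the numbers here are invented): the bound gives a conservative, variance-free margin of error for the mean.

    a <- 0; b <- 10; n <- 400
    sigma_max  <- (b - a) / 2                       # from sigma^2 <= (b - a)^2 / 4
    half_width <- qnorm(0.975) * sigma_max / sqrt(n)
    half_width                                      # conservative 95% CI half-width for the mean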
51,362
Central limit theorem with unknown variance
The CLT is all about independent and identically distributed (i.i.d.) random variables with finite mean and variance. To be clear, you don't have to know the variance, but you do need it to be finite and the same across your runs. To estimate the mean when the variance is unknown, you can build a confidence interval using the Student t distribution: $$ \bar X \pm t_{n-1,\,1-\alpha/2} \, \frac{S}{\sqrt{n}}, $$ where $t_{n-1,\,1-\alpha/2}$ is the quantile of a Student t distribution with $n-1$ degrees of freedom, giving confidence level $1-\alpha$.
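The same interval in R, with simulated values standing in for your runs (the true mean and standard deviation here are invented):

    set.seed(1)
    x <- rnorm(30, mean = 5, sd = 2)                # stand-in for the observed runs
    n <- length(x); alpha <- 0.05
    mean(x) + c(-1, 1) * qt(1 - alpha / 2, df = n - 1) * sd(x) / sqrt(n)
    t.test(x, conf.level = 0.95)$conf.int           # built-in equivalent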
51,363
Quantile regression and heteroscedasticity/autocorrelation
Quantile regression does not assume that the error terms are normally distributed (nor does it assume any other shape for them). There are tons of references for this, e.g. Koenker 2005. However, while I haven't found anywhere a statement that quantile regression assumes independence, several papers, e.g. Geraci and Bottai, make that assumption implicitly. It also aligns with my intuition. I think this perhaps comes from our usual habit of lumping the distributional assumptions together as "i.i.d." - but the independence part is a separate assumption.
51,364
Quantile regression and heteroscedasticity/autocorrelation
The asymptotic results come from the general theory of extremum estimators, which allows for the complications you bring up. However, the criterion function in this case is not smooth, so there might be issues with estimating the derivatives, while the bootstrap remains reliable. Some modifications allow one to speed it up while preserving validity, which is relevant for huge data sets or when you are interested in a band for the whole quantile process.
51,365
R's lmer cheat sheet
What's the difference between (~1 +....) and (1 | ...) and (0 | ...) etc.? Say you have variable V1 predicted by categorical variable V2, which is treated as a random effect, and continuous variable V3, which is treated as a linear fixed effect. Using lmer syntax, the simplest model (M1) is:
V1 ~ (1|V2) + V3
This model will estimate:
P1: A global intercept
P2: Random effect intercepts for V2 (i.e. for each level of V2, that level's intercept's deviation from the global intercept)
P3: A single global estimate for the effect (slope) of V3
The next most complex model (M2) is:
V1 ~ (1|V2) + V3 + (0+V3|V2)
This model estimates all the parameters from M1, but will additionally estimate:
P4: The effect of V3 within each level of V2 (more specifically, the degree to which the V3 effect within a given level deviates from the global effect of V3), while enforcing a zero correlation between the intercept deviations and V3 effect deviations across levels of V2.
This latter restriction is relaxed in a final, most complex model (M3):
V1 ~ (1+V3|V2) + V3
in which all parameters from M2 are estimated while allowing correlation between the intercept deviations and V3 effect deviations within levels of V2. Thus, in M3, an additional parameter is estimated:
P5: The correlation between intercept deviations and V3 deviations across levels of V2
Usually model pairs like M2 and M3 are computed and then compared to evaluate the evidence for a correlation between the random intercept and slope deviations.
Now consider adding another fixed effect predictor, V4. The model:
V1 ~ (1+V3*V4|V2) + V3*V4
would estimate:
P1: A global intercept
P2: A single global estimate for the effect of V3
P3: A single global estimate for the effect of V4
P4: A single global estimate for the interaction between V3 and V4
P5: Deviations of the intercept from P1 in each level of V2
P6: Deviations of the V3 effect from P2 in each level of V2
P7: Deviations of the V4 effect from P3 in each level of V2
P8: Deviations of the V3-by-V4 interaction from P4 in each level of V2
P9: Correlation between P5 and P6 across levels of V2
P10: Correlation between P5 and P7 across levels of V2
P11: Correlation between P5 and P8 across levels of V2
P12: Correlation between P6 and P7 across levels of V2
P13: Correlation between P6 and P8 across levels of V2
P14: Correlation between P7 and P8 across levels of V2
Phew, that's a lot of parameters! And I didn't even bother to list the variance parameters estimated by the model. What's more, if you have a categorical variable with more than 2 levels that you want to model as a fixed effect, instead of a single effect for that variable you will always be estimating k-1 effects (where k is the number of levels), thereby exploding the number of parameters to be estimated by the model even further.
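As a sketch (assuming a hypothetical data frame d with columns V1, V2, V3), the M2-versus-M3 comparison described above is usually done with a likelihood-ratio test:

    library(lme4)
    m2 <- lmer(V1 ~ V3 + (1 | V2) + (0 + V3 | V2), data = d, REML = FALSE)
    m3 <- lmer(V1 ~ V3 + (1 + V3 | V2),            data = d, REML = FALSE)
    anova(m2, m3)   # one extra parameter in m3: the intercept-slope correlation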
51,366
R's lmer cheat sheet
The general trick, as mentioned in another answer, is that the formula follows the form dependent ~ independent | grouping. The grouping is generally a random factor; you can include fixed factors without any grouping, and you can have additional random factors without any fixed factor (an intercept-only model). A + between factors indicates no interaction, a * indicates interaction. For random factors, you have three basic variants:
1. Intercepts only by random factor: (1 | random.factor)
2. Slopes only by random factor: (0 + fixed.factor | random.factor)
3. Intercepts and slopes by random factor: (1 + fixed.factor | random.factor)
Note that variant 3 has the slope and the intercept calculated in the same grouping, i.e. at the same time. If we want the slope and the intercept calculated independently, i.e. without any assumed correlation between the two, we need a fourth variant:
4. Intercept and slope, separately, by random factor: (1 | random.factor) + (0 + fixed.factor | random.factor). An alternative way to write this is using the double-bar notation fixed.factor + (fixed.factor || random.factor).
There's also a nice summary in another response to this question that you should look at. If you're up to digging into the math a bit, Barr et al. (2013) summarize the lmer syntax quite nicely in their Table 1, adapted here to meet the constraints of tableless markdown. That paper dealt with psycholinguistic data, so the two random effects are Subject and Item. Models and equivalent lme4 formula syntax:
1. $Y_{si} = β_0 + β_{1}X_{i} + e_{si}$: N/A (not a mixed-effects model)
2. $Y_{si} = β_0 + S_{0s} + β_{1}X_{i} + e_{si}$: Y ∼ X + (1∣Subject)
3. $Y_{si} = β_0 + S_{0s} + (β_{1} + S_{1s})X_i + e_{si}$: Y ∼ X + (1 + X∣Subject)
4. $Y_{si} = β_0 + S_{0s} + I_{0i} + (β_{1} + S_{1s})X_i + e_{si}$: Y ∼ X + (1 + X∣Subject) + (1∣Item)
5. $Y_{si} = β_0 + S_{0s} + I_{0i} + β_{1}X_{i} + e_{si}$: Y ∼ X + (1∣Subject) + (1∣Item)
6. As (4), but with $S_{0s}$, $S_{1s}$ independent: Y ∼ X + (1∣Subject) + (0 + X∣Subject) + (1∣Item)
7. $Y_{si} = β_0 + I_{0i} + (β_{1} + S_{1s})X_i + e_{si}$: Y ∼ X + (0 + X∣Subject) + (1∣Item)
References: Barr, Dale J., R. Levy, C. Scheepers and H. J. Tily (2013). Random effects structure for confirmatory hypothesis testing: Keep it maximal. Journal of Memory and Language, 68:255-278.
51,367
R's lmer cheat sheet
The | symbol indicates a grouping factor in mixed models. As per Pinheiro & Bates: ...The formula also designates a response and, when available, a primary covariate. It is given as response ~ primary | grouping where response is an expression for the response, primary is an expression for the primary covariate, and grouping is an expression for the grouping factor. Depending on which method you use to perform mixed-model analysis in R, you may need to create a groupedData object to be able to use the grouping in the analysis (see the nlme package for details; lme4 doesn't seem to need this). I can't speak to the way you have specified your lmer model statements because I don't know your data. However, having multiple (1|foo) terms in the model line is unusual from what I have seen. What are you trying to model?
51,368
Is there an easy way to calculate significant difference between two largely overlapping correlations from same sample?
You could fit a regression model with all three measures as predictors, then fit a new regression model with 1 or 2 of them dropped and do a full-versus-reduced model test to see if there is a significant difference between the models. This answers the question "do the variables in the full, but not the reduced, model contribute significantly above and beyond those in the reduced model?". As noted already, given your sample size I doubt that you will see a difference, but this will give a p-value for those who feel the need for one. A bit more meaningful may be to fit 3 regression models, each using one of the measures and your dependent variable, then plot the 3 pairwise scatterplots of the fitted (predicted) values. This is best done with an aspect ratio of 1 in a square plot and with a $y=x$ reference line. This can show that the 3 measures give essentially the same predictions or, if not, where they differ.
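One way this could look in R, with invented column names (y for the dependent variable and m1, m2, m3 for the three overlapping measures, in a data frame dat):

    full    <- lm(y ~ m1 + m2 + m3, data = dat)
    reduced <- lm(y ~ m1, data = dat)
    anova(reduced, full)                 # do m2 and m3 add anything beyond m1?

    f1 <- fitted(lm(y ~ m1, data = dat))
    f2 <- fitted(lm(y ~ m2, data = dat))
    f3 <- fitted(lm(y ~ m3, data = dat))
    plot(f1, f2, asp = 1); abline(0, 1)  # repeat for (f1, f3) and (f2, f3)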
51,369
Is there an easy way to calculate significant difference between two largely overlapping correlations from same sample?
I thought you were talking about less than 100. 4000 may be barely enough, but a difference of 0.02 is not very meaningful.
51,370
Sequential clustering algorithm
Constrained clustering maintains data order. There is a package in R called 'rioja' that implements this in the function 'chclust'. The procedure isn't too complex though:
1. Calculate the inter-point distances.
2. Find the smallest distance between adjacent points.
3. Average the values of the two points to generate a single value.
4. Go back to step one with the reduced list, and repeat until you have a single point.
You need to maintain some sort of tree structure, but with some elementary programming experience you should be able to do it.
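A rough sketch with the rioja package mentioned above ('series' stands in for your ordered data; the choice of 4 segments is arbitrary):

    library(rioja)
    d   <- dist(series)                  # inter-point distances; data order is preserved
    clu <- chclust(d, method = "coniss") # constrained (order-respecting) clustering
    plot(clu)                            # dendrogram with samples kept in sequence
    cutree(clu, k = 4)                   # cut into 4 contiguous segments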
51,371
What do the terms "nearly-optimal rate", "near-minimax rate", "minimax optimal rate" and "minimax rate" mean in the context of posterior consistency?
Can't comment due to lack of rep, but I think I may be able to add some useful input if you haven't figured it out already. I believe what you are missing here is that the speed at which the posterior mass concentrates on the true value is an important factor, and that this is likely what is being referred to as the rate (it would be useful if you gave a reference to the literature you're talking about). Showing that updates to the posterior are a contraction about the true value of the parameter demonstrates consistency. Then placing bounds on that contraction rate (via minimax) allows you to guarantee some rate of convergence to the true value. So convergence and contraction are closely related concepts here. The terminology might be more usefully explained in texts on real analysis, dynamical systems, optimisation, etc. than in statistics texts.
51,372
Parameter identification v. causal identification
Let's take the simple linear model and discuss three different definitions of the parameter of interest in it, with very different identification settings. All three scenarios are very common in empirical work in economics, and it's typically clear only from context which of them characterises the analysis. Here's the linear model: $$y_i = x_i^T \beta + \epsilon_i$$ It's clear what $y_i$ and $x_i$ are. But what are $\beta$ and $\epsilon$ supposed to be? The answer determines how we should think about identification.

1.) Prediction

We could simply be interested in predicting the observable $y_i$ when we see the observable $x_i$, using the best linear function of $x_i$ to do so. Then the target parameter should be defined as $\beta := \arg \min_{b \in R^k} E[ (y_i - x_i^T b)^2]$ and the error as $\epsilon_i := y_i - x_i^T\beta$. The expectation $E[\cdot]$ is with respect to the distribution of observables. This distribution is identified if we observe $y_i$ and $x_i$. The solution for $\beta$ is the well known formula $E[x_ix_i^T]^{-1} E[x_iy_i]$. Note that $E[\epsilon_i x_i^T]=0$ holds by construction. If we find the best linear predictor, then the prediction error doesn't have a linear relationship with the predictor. If it did, we would have failed at finding the best linear predictor! What about $E[\epsilon_i |x_i]$? That will be zero only if $E[y_i|x_i] = x_i ^T \beta$, in which case $x_i ^T \beta$ isn't just the best linear predictor but the best predictor. The only failure of identification for this definition of $\beta$ would be that $E[x_ix_i^T]^{-1}$ might not exist, perhaps because some of the $x_i$ are linear functions of others. Then $\beta$ is not identified: there are different $\beta$ that all do the job of forming the best linear predictor. But that wouldn't worry us, because we just want to predict and they are all equally good at that.

2.) Prediction with the underlying, not the measured, covariate

Often, $x_i$ is a proxy for something we are interested in. If $x_i$ is the body mass index, we might be interested in how it predicts an outcome, but we only get to work with a self-reported estimate of it, $w_i$. If we are content with finding out the predictive relationship between that and the outcome, we are back in the first scenario. If not, we face an identification problem. We are interested in $\beta$ as defined in the previous scenario, but we do not observe $x_i$. We observe only $w_i = x_i + u_i$. Now $\beta$ is not identified. We have $y_i = (w_i - u_i)^T \beta + \epsilon_i$, or $y_i = w_i^T \beta + \epsilon_i - u_i^T \beta$. And now $E[(\epsilon_i - u_i^T \beta) w_i^T]$ is no longer $0$ by construction, and OLS won't estimate $\beta$. The most common identification strategy for this problem is to find an instrumental variable: something that correlates with $x_i$, but not with $u_i$, and not with $\epsilon_i$ either. Maybe a friend's anonymous estimate of that person's body mass index. Note how nothing so far has been about causality, just about prediction, but there is an identification problem anyway, because an important variable is unobserved. (A small simulation of this attenuation-and-instrument story is sketched after this answer.)

3.) Causal inference

Probably the most common scenario: we actually want to know how $x_i$ affects $y_i$, not just how it predicts it. How should we define that? We can use structural equations to do it, which express how a variable really is determined: $y = f(x, \epsilon)$. This unknown function tells us what the value of $y$ would be if we set $x$ to any possible value, just like proper scientific models do. It also depends on $\epsilon$, which is now the effect of other causes of $y$, and this may well be correlated with $x$ in the data we see. Let's suppose $f$ is linear: $y_i = x_i^T \beta + \epsilon_i$ as before. This $\beta$ is the one from the first scenario only if $\epsilon_i$ is uncorrelated with $x_i$. In most empirical analyses, one can think of reasons why $x_i$ and $\epsilon_i$ might be correlated, and so an identification strategy is needed. To clearly distinguish this scenario from the first one, different notation is sometimes used: the potential outcomes notation, or causal graphs, or structural models. The best way of thinking about causality in economics is an ongoing area of research and there is no consensus yet. As it stands, most researchers will simply use the standard regression equation, expected values, etc., and you need to infer from the context whether the goal is prediction, prediction with an imperfectly measured covariate, or causal inference. A very good textbook covering this material is Mostly Harmless Econometrics.
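Here is the small R simulation of scenario 2 referred to above; all names and numbers are made up, but it shows the attenuation from regressing on the noisy proxy and how a simple instrument recovers the target coefficient:

    set.seed(1)
    n <- 1e5
    x <- rnorm(n)                  # true covariate (unobserved in practice)
    y <- 2 * x + rnorm(n)          # true beta = 2
    w <- x + rnorm(n)              # what we actually observe
    z <- x + rnorm(n)              # instrument: related to x, unrelated to the errors
    coef(lm(y ~ w))["w"]           # attenuated, about 1 here
    cov(z, y) / cov(z, w)          # simple IV estimate, close to 2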
51,373
Parameter identification v. causal identification
I think the way you explained your two different takes on "identification" was clear, but I'm not familiar with your second take. My understanding (I wouldn't even call myself a novice in this; I'd call myself a super-novice) is that the notion you use in your second description, "identification strategy", is the part where you try to figure out which variables should be in the model and which should not. OTOH, in the identification of causal effects (again, this is just my experience), you try to figure out whether the variables you chose in the model are really CAUSING the response or are just correlated with it. In econometrics, Clive Granger came up with "Granger causality" as a method for addressing whether one variable is truly "causing" the other, but his definition is only one way of defining it. At the same time, Judea Pearl says that the causal-relationship work done in econometrics and statistics is mostly (if not completely?) junky. I recommend trying to read Judea Pearl's work. He's kind of world-renowned for his work on the notion of causality. Unfortunately, I tried to read one of his papers a long time ago and it seemed that it might take the rest of my life to possibly understand what it was saying, so I stopped. I hope this helped some.
51,374
Parameter identification v. causal identification
https://autobox.com/pdfs/PREFERRED.pdf presents single-equation, multivariable procedures to identify a useful model. http://www.autobox.com/pdfs/WHY-WE-FILTER.ppt also sheds some light on model identification. Parameter identification can lead to revising the tentative model identification via necessity and sufficiency checks, culminating in a SARMAX model (https://autobox.com/pdfs/SARMAX.pdf) incorporating the memory of Y and any needed structure from both known (user-suggested) X's and latent X's (the I series, i.e. detected interventions). If your problem is multi-equation, VARIMA methods are available, and if no known candidate X's are available then SARIMA is suggested (https://autobox.com/pdfs/ARIMA%20FLOW%20CHART.pdf).
51,375
What's the state of the art for time series forecasting in 2019? [duplicate]
How to predict the next number in a series while having additional series of data that might affect it? lays out the arguments for pursuing ARMAX models when you have one endogenous time series. For cases where you have more than one, consider following VECTOR ARIMA threads.
51,376
Modelling Time Series of Ratios
Some advice from the font of all time series knowledge (grin!):
1. When you convert two columns to one column you lose information, so do not do this.
2. Model y as a function of x; predict x to obtain a prediction of y, and then, if you are so inclined, compute the ratio of the predicted y to the predicted x to obtain a predicted ratio.
3. Encode the uncertainty in the prediction of x into the transfer function model between y and x, while detecting unusual activity, be it pulses, level shifts, seasonal pulses and/or local time trends.
4. Include the possibility of future pulses in the prediction limits around the expected value of y.
5. Detect possible changes in error variance that may have occurred within time segments, as is the case with your data.
6. Verify the constancy of model parameters over time, which may suggest data segmentation.
If you wish you can post your data and I will try to be of more help.
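A minimal base-R sketch of point 2 above (predict both series, then take the ratio of the predictions) follows; the data, model orders, and horizon are made up, and a real transfer-function fit would be more elaborate than this.

```r
# Forecast x, forecast y given x, then form the predicted ratio (made-up data).
set.seed(1)
x <- cumsum(rnorm(120, mean = 1))                              # denominator series
y <- 0.6 * x + as.numeric(arima.sim(list(ar = 0.5), n = 120))  # numerator series

fit_x <- arima(x, order = c(1, 1, 0))                    # model for x alone
fit_y <- arima(y, order = c(1, 0, 0), xreg = x)          # y with x as a regressor

h  <- 12
fx <- predict(fit_x, n.ahead = h)$pred                   # forecast of x
fy <- predict(fit_y, n.ahead = h, newxreg = fx)$pred     # forecast of y given forecast x

fy / fx                                                  # the predicted ratio
```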
51,377
How do I find multiple change points in an online dataset?
Please find the description of the algorithm called SaRa here. It can be modified and used as an "online" version of the Circular Binary Segmentation algorithm. An HMM can also be adapted for your purposes (3 states: normal, above and below; after a change point the state switches back to normal, with its location found from the online points in a robust way). But you should also understand that your method will have some "resolution" and will not be able to detect short events if your noise level is large enough. Again, a super-simple method is to look at the mean/median ratio within a sliding window. Also this paper can be considered as an answer.
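As a hypothetical illustration of the sliding-window idea mentioned above, here is a minimal base-R sketch that flags windows whose mean drifts away from the bulk of the signal; the window width, the threshold of 3, and the use of a robust global spread are all assumptions, not part of the SaRa or CBS algorithms.

```r
# Flag windows whose mean is unusually far from the global median (made-up signal).
set.seed(42)
x <- c(rnorm(200, 0), rnorm(100, 3), rnorm(200, 0))   # normal, shifted, normal again
w <- 25                                               # window width (an assumption)

roll_mean <- sapply(seq_len(length(x) - w + 1),
                    function(i) mean(x[i:(i + w - 1)]))

z <- (roll_mean - median(x)) / mad(x)                 # robust standardisation
change_windows <- which(abs(z) > 3)                   # threshold is an assumption
range(change_windows)                                 # roughly brackets the shifted segment
```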
51,378
How do I find multiple change points in an online dataset?
This seems to be a classical anomaly detection question. There are several tools for this kind of question using the Matrix Profile. The details about the Matrix Profile can be found here: https://www.cs.ucr.edu/~eamonn/MatrixProfile.html Tools that implement this method in Python can be found at: https://tslearn.readthedocs.io/en/stable/auto_examples/misc/plot_matrix_profile.html#sphx-glr-auto-examples-misc-plot-matrix-profile-py https://github.com/TDAmeritrade/stumpy https://github.com/target/matrixprofile-ts
51,379
R: statistical test to identify samples with too high variability
I just looked at this. My approach was:
1. Compute the mean, standard deviation, and count for each set of samples.
2. Compute the critical t-threshold given alpha, the sample size, and the nature of the fit (quadratic). I was using Excel, so I used "T.INV".
3. Transform the data by subtracting the mean, then dividing by the standard deviation, and compare the absolute value to the t-threshold. If it is above the threshold then it is classified as an outlier.
Note: alpha is a parameter. If you want to make your fit "wider" then use a smaller value. If you want more data to be classified as possible outliers then use a higher value. It is exceptionally good if you can take the time to understand what "alpha" means in the statistical sense of this threshold. I notice you have rows with 3 samples - that is dangerous: having two samples and computing the standard deviation is like having one sample and computing the mean. The math gives you a number, but it is as sample-sparse as mathematics can go and still give a value - it is on the edge of the cliff of oblivion and is not very informative. Get more samples. There are rules of thumb that say 5, 10, 30, 100 or 300 are sufficient. If you are going below 5 then you had best have a great defense for why the math isn't bad.
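Here is a hedged base-R sketch of the three steps above; the data frame, its column names, the per-group degrees of freedom (n - 1), and the choice of alpha are all assumptions for illustration, since the original poster's data and the "quadratic fit" adjustment are not shown.

```r
# Flag points whose standardised value exceeds the two-sided critical t value.
set.seed(7)
df <- data.frame(sample_id = rep(letters[1:4], each = 6),   # made-up groups
                 value     = rnorm(24))                      # made-up measurements
alpha <- 0.05                                                # tunable, as described above

flag_outliers <- function(v, alpha) {
  n      <- length(v)
  t_crit <- qt(1 - alpha / 2, df = n - 1)   # critical t for this sample size
  z      <- (v - mean(v)) / sd(v)           # subtract mean, divide by SD
  abs(z) > t_crit                           # TRUE where the threshold is exceeded
}

df$flag <- as.logical(ave(df$value, df$sample_id,
                          FUN = function(v) flag_outliers(v, alpha)))
df[df$flag, ]                               # rows classified as possible outliers
```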
51,380
R: statistical test to identify samples with too high variability
The "average variability" that you want to measure should translate to the standard deviation in statistics. It is pretty easy to compute the standard deviation in R, so look up the definition of standard deviation on Google to see if it matches what you want to find.
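For instance, a tiny base-R illustration (with made-up values and sample labels) of a per-sample standard deviation:

```r
values  <- c(5.1, 4.9, 5.3, 7.0, 7.4, 6.8)   # made-up measurements
samples <- c("A", "A", "A", "B", "B", "B")   # made-up sample labels
tapply(values, samples, sd)                  # one standard deviation per sample
```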
51,381
PCA on non-centered data [duplicate]
PCA is sensitive to the scaling of the variables. ... One way of making the PCA less arbitrary is to use variables scaled so as to have unit variance, by standardizing the data, and hence to use the autocorrelation matrix instead of the autocovariance matrix as a basis for PCA. However, this compresses the fluctuations in all dimensions of the signal space to unit variance. Mean subtraction (a.k.a. "mean centering") is necessary for performing PCA to ensure that the first principal component describes the direction of maximum variance. If mean subtraction is not performed, the first principal component might instead correspond more or less to the mean of the data. A mean of zero is needed for finding a basis that minimizes the mean square error of the approximation of the data. From PCA, Further considerations. This way, PCA on the covariance matrix of non-standardized data will be affected by the non-zero means present. Standardizing the data, or explicit use of the correlation matrix, will cancel this effect (though losing the relative scales of homogeneous variables may be considered harmful).
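A hedged illustration of the centering point, using base R's prcomp on made-up data (the means and scales here are arbitrary choices):

```r
# Compare PCA with and without centering/scaling.
set.seed(3)
X <- cbind(x1 = rnorm(100, mean = 10),        # large, non-zero mean
           x2 = rnorm(100, mean = 10) * 0.1)  # smaller scale, still non-zero mean

p_centered   <- prcomp(X, center = TRUE,  scale. = TRUE)   # PCA on the correlation matrix
p_uncentered <- prcomp(X, center = FALSE, scale. = FALSE)  # PCA on raw cross-products

p_centered$rotation[, 1]    # PC1 tracks the direction of maximum variance
p_uncentered$rotation[, 1]  # PC1 points roughly toward the non-zero mean instead
```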
51,382
How does cross validation work in R's gbm package?
If you want to estimate the error of the model and its corresponding variability when predicting new observations, do the cross-validation after step 3, i.e. after fitting all the trees. Here the model being validated is the whole ensemble of weak learners. But naturally you could tune the hyperparameters using CV too, for example the optimal number of boosted trees. In the package 'dismo' the function gbm.step does exactly this. Example of usage: brtTuning <- gbm.step(data = yourData, gbm.x = 1:18, gbm.y = 19, family = "gaussian", tree.complexity = 5, learning.rate = 0.005, bag.fraction = 0.5) If you want to tune and then validate, I believe you need to do nested cross-validation.
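For the gbm package itself, a hedged sketch of its built-in cross-validation is shown below; the data frame, formula, and tuning values are made up, and gbm.perf is used only to read off the CV-chosen number of trees.

```r
# Built-in K-fold CV in gbm, used here to pick the number of boosted trees (made-up data).
library(gbm)
set.seed(1)
d <- data.frame(y = rnorm(500), x1 = rnorm(500), x2 = rnorm(500))

fit <- gbm(y ~ x1 + x2, data = d, distribution = "gaussian",
           n.trees = 2000, interaction.depth = 3,
           shrinkage = 0.01, bag.fraction = 0.5,
           cv.folds = 5)                       # 5-fold CV run alongside the boosting

best_iter <- gbm.perf(fit, method = "cv")      # tree count minimising the CV error
best_iter
```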
51,383
How does cross validation work in R's gbm package?
Cross validation works by randomly (or by some other means) selecting rows into $K$ equally sized folds that are approximately balanced, training a classifier on $K-1$ folds, testing on the remaining fold and then calculating a predictive loss function. This is repeated so that each fold is used as the test set. If you are randomly sampling rows for the folds you can then resample as needed. There are a number of packages that can do this in R, and it is pretty easy to code it up yourself. Using some form of cross-validation with boosting is a bit more complicated (I'm not terribly familiar with boosting). This question seems to provide some insight into that though.
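Since the answer notes it is easy to code up yourself, here is a minimal base-R sketch of K-fold cross-validation; the data, the lm learner, and the squared-error loss are stand-ins for whatever model and loss you actually use.

```r
# Hand-rolled K-fold cross-validation with a generic fit/predict pair (made-up data).
set.seed(10)
d <- data.frame(y = rnorm(200), x = rnorm(200))
K <- 5
folds <- sample(rep(1:K, length.out = nrow(d)))     # random, roughly balanced fold labels

cv_loss <- sapply(1:K, function(k) {
  train <- d[folds != k, ]
  test  <- d[folds == k, ]
  fit   <- lm(y ~ x, data = train)                  # stand-in for any learner
  mean((test$y - predict(fit, newdata = test))^2)   # loss on the held-out fold
})
mean(cv_loss)                                       # cross-validated loss estimate
```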
51,384
How do I compare date-ranges from a time series? [closed]
Estimate an ARIMA model for both time sections, making sure that there are no Pulses/Level Shifts/Seasonal Pulses or Local Time Trends left unaccounted for. Estimate the model globally, then use the Chow test to test the hypothesis that the parameters are the same for the two periods.
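As a hedged, hand-rolled illustration of the Chow test's mechanics in base R: a simple linear trend stands in for the ARIMA model here, and the two made-up segments are generated with different intercepts so the test has something to find.

```r
# Chow test: compare pooled vs. segment-wise residual sums of squares.
set.seed(2)
n1 <- 60; n2 <- 60
t1 <- 1:n1; y1 <- 2 + 0.5 * t1 + rnorm(n1)   # segment 1
t2 <- 1:n2; y2 <- 4 + 0.5 * t2 + rnorm(n2)   # segment 2 (shifted intercept)

pooled <- lm(c(y1, y2) ~ c(t1, t2))
fit1   <- lm(y1 ~ t1)
fit2   <- lm(y2 ~ t2)

rss <- function(m) sum(resid(m)^2)
k   <- length(coef(pooled))                  # parameters per segment (intercept + slope)

Fstat <- ((rss(pooled) - (rss(fit1) + rss(fit2))) / k) /
         ((rss(fit1) + rss(fit2)) / (n1 + n2 - 2 * k))
c(F = Fstat, p = pf(Fstat, k, n1 + n2 - 2 * k, lower.tail = FALSE))
```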
51,385
Regression estimator where exponents are freely varying?
You will need to use non-linear optimization to solve this problem. Excel's Solver should be able to find the parameters easily. Simply set the objective to minimize the sum of the squared residuals between the actual values you observed and the estimated values from your model. Here is an example: http://www.csupomona.edu/~seskandari/documents/Curve_Fitting_William_Lee.pdf If you are looking for a programmatic way to do this, consider using NLopt: http://ab-initio.mit.edu/wiki/index.php/NLopt On a side note, taking the natural log of your dependent variable and doing least squares regression via the normal equations will create an exponential model. If we ignore the error term, then... $$\ln(y) = b_0 + b_1x_1 + b_2x_2$$ will yield $$y = e^{(b_0 + b_1x_1 + b_2x_2)}$$ which yields $$y = e^{(b_0)}e^{(b_1x_1)}e^{(b_2x_2)}$$ This information might be helpful in future endeavors. Let me know if you need any more help.
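If you happen to be working in R rather than Excel, a hedged sketch of the same idea (direct nonlinear least squares on the original scale) using base R's nls is shown below; the model form mirrors the side note above, and the data and starting values are made up.

```r
# Fit y = exp(b0 + b1*x1 + b2*x2) by minimising the sum of squared residuals.
set.seed(4)
x1 <- runif(100); x2 <- runif(100)
y  <- exp(0.5 + 1.2 * x1 - 0.8 * x2) * exp(rnorm(100, sd = 0.1))   # made-up data

fit <- nls(y ~ exp(b0 + b1 * x1 + b2 * x2),
           start = list(b0 = 0, b1 = 1, b2 = -1))   # starting values are an assumption
coef(fit)    # least-squares estimates of b0, b1, b2 on the original scale
```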
51,386
How to model time-varying correlation
It may be a little bit late for you, but for future readers: I think what you are looking for is some sort of cross-correlation. https://en.wikipedia.org/wiki/Cross-correlation
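In base R, the sample cross-correlation function is available as ccf; here is a tiny illustration on made-up series with a known 4-step delay.

```r
# Cross-correlogram between two made-up series.
set.seed(5)
x <- as.numeric(arima.sim(list(ar = 0.6), n = 300))
y <- c(rep(0, 4), head(x, -4)) + rnorm(300, sd = 0.5)   # y follows x with a 4-step delay

ccf(x, y, lag.max = 20)   # should peak near lag -4 for this construction
```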
51,387
What must someone know in statistics and machine learning? [closed]
The two worlds that you describe aren't really two different kinds of statistician, but rather: (1) "statistics on rails," to coin a phrase: an attempt to teach non-technical people enough to be able to use statistics in a few narrow contexts; and (2) statistics proper, as understood by mathematicians, statisticians, data scientists, etc. The deal is this. To understand statistics in even moderate depth, you need to know a considerable amount of mathematics. You need to be comfortable with set theory, outer product spaces, functions between high dimensional spaces, a bit of linear algebra, a bit of calculus, and a smidgen of measure theory. It's not as bad as it sounds: all this is usually covered adequately in the first 2-3 years of undergraduate for hard science majors. But for other majors... I can't even formally define a random variable or the normal distribution for someone who doesn't have those prerequisites. Yet, most people only need to know how to conduct a simple A/B test or the like. And the fact is, we can give someone without those prerequisites a set of formulas and look-up tables and tell them to plug-and-chug. Or today, more commonly, a user-friendly GUI program like SPSS. As long as they follow some reasonable rules of experiment design and follow a step-by-step procedure, they will be able to accomplish what they need to. The problem is that without a fairly in-depth understanding, they (1) are very likely to misuse statistics and (2) can't stray from the garden path. Issue one is so common it even gets its own Wikipedia article, and issue two can only really be addressed by going back to fundamentals and explaining where those tests came from in the first place. Or by continually exhorting people to stay within the lines, follow the checklist, and consult with a statistician if anything seems weird. The following poem comes to mind: A little learning is a dangerous thing; Drink deep, or taste not the Pierian spring: There shallow draughts intoxicate the brain, And drinking largely sobers us again. - Alexander Pope, A Little Learning I would liken the relationship between the "on rails" version of statistics that you see in AP stats or early undergraduate classes for non-majors and statistics proper to the difference between WebMD articles and going to med school. The information in the WebMD article is the most essential conclusion and summary of current medical recommendations. But it's not intended as a replacement for medical school, and I wouldn't call someone who had read a WebMD article "Doctor." What do you consider as must to know in statistics and machine learning? The Kolmogorov axioms, the definition of a random variable (including random vectors, matrices, etc.), the algebra of random variables, the concept of a distribution, and the various theorems that tie these together. You should know about moments. You should know the law of large numbers, the various inequality theorems such as Chebyshev's inequality, and the central limit theorems, although if you want to know how to prove them (optional) you will also need to learn about characteristic functions, which can occasionally be useful in their own right if you ever need to calculate exact closed-form distributions for, say, a ratio distribution. This stuff would usually be covered in the first (or maybe second?) semester of a class on mathematical statistics. There is also a reasonably good and completely free online textbook which I mainly use for reference but which does develop the topic starting from first principles.
There are a few crucial distributions everyone must know: Normal, Binomial, Beta, Chi-Squared, F, Student's t, Multivariate Normal. Possibly also Poisson and Exponential for Poisson processes, Multivariate/Dirichlet if you work with multi-class data a lot, and others as needed. Oh, and Uniform - can't forget Uniform! At this point, you're ready to learn the basic structure of a hypothesis test; which is to say, what a "sample" is, and about null hypotheses and critical values, etc. You will be able to use the algebra of random variables and integrals involving distributions to derive pretty much all of the statistical hypothesis tests you've seen in AP stats. But you're not really done; in fact we're just getting to the good part: fitting models to data. There are various procedures, but the first one to learn is MLE. For me personally, this is the only reason why we developed all the above machinery. The key thing to understand about fitting models is that we pose each one as an optimization problem where we (or rather, very powerful computers) find the "best" possible set of "parameters" for the model that "fit" a sample. The resulting model can be validated, examined and interpreted in various ways. The first two models to learn are linear regression and logistic regression, although if you've come through the hard way you might as well study the GLM (generalized linear model), which includes them both and more besides. A very good book on using logistic regression in practice is Hosmer et al. Understanding these models in detail is very demanding, and encompasses ANOVA, regularization and many other useful techniques. If you're going to go around calling yourself a statistician, you will definitely want to complement all that theoretical knowledge with a solid, thorough understanding of the design of experiments and power analysis. This is one of the most common things statisticians are asked to provide input on. Depending on how much model building you're doing, you may also need to know about cross validation, feature selection, model selection, etc. Although maybe I'm biased towards model building and you could get away without this stuff? In any case, a reasonably good book, especially if you're using R, is Applied Predictive Modeling by Max Kuhn. At this point you'll have the "must know" knowledge you asked about. But you'll also have learned that inventing a new model is as easy as adding a new term to a loss function, and consequently a huge number of models and approaches exist. No one can learn them all. Sometimes it seems as if which ones are in fashion in a given field is completely arbitrary, or an accident of history. Instead of trying to learn them all, rest assured that you can use the foundation you've built to understand any particular model you need in a few hours of study, and focus on those that are commonly used in your field or which seem promising to you. What tests/ methods would you put in your toolbox? All right, laundry list time! A lot of these come from The Elements of Statistical Learning, by Hastie, Tibshirani, and Friedman, which is a very good book by three highly respected authors. Another good resource is scikit-learn, which covers most of the most mature and popular models. Ditto for R's caret package, although it's really focused on predictive modeling. Others are just models I've seen mentioned and/or used frequently.
In roughly descending order of popularity:
- Ridge, Lasso, and ElasticNet Regression
- Local Regression (LOESS)
- Kernel Density Estimates
- PCA
- Factor Analysis
- K-means
- GMM (and other mixture models)
- Decision Trees, Random Forests, and XGBoost
- Time Series Analysis: ARIMA, possibly exponential smoothing
- SVM (Support Vector Machines)
- Hidden Markov Models
- GAM (Generalized Additive Models)
- Bayes Networks and Structural Equation Modeling
- Robust Regression
- Imputation
- Neural Nets, CNNs (for images), RNNs (for sequences); see the Deep Learning Book by Goodfellow, Bengio, and Courville
- Bayesian Inference with MCMC a la Stan
- Survival Analysis (Cox PH, Kaplan-Meier estimator, etc.)
- Extreme value theory
- Vapnik–Chervonenkis theory
- Causality
- Pairwise/Preference modeling, e.g. Bradley-Terry
- IRT (item response theory, used for surveys and tests)
- Martingales
- Copulas
This is a pretty idiosyncratic list. Certainly I don't know everything on it, and even where I do, my knowledge level varies from superficial to long experience. That's going to be true for everyone. Everyone is going to have their own additions to this list, and above all their own priorities. Some people will tell you to dive right in to neural nets and ignore the rest. Some people (actuaries) spend their entire career focusing on survival analysis and extreme value theory. I can't give you any real guidance except to study techniques that are used in your field and apply to your problems.
51,388
What must someone know in statistics and machine learning? [closed]
Speaking from a professional perspective (not an academic one), and based on having interviewed several candidates and having been interviewed myself many times as well, I would argue that deep or wide knowledge in stats is not considered a "must know", but having a very solid grasp of the basics (linear regression, hypothesis testing, probability 101, etc.) is essential, as well as some basic knowledge of algorithms (merging/joining tables, dynamic programming, search methods, etc.). I would rather have someone who understands very well how to apply Bayes' rule and who knows how to unit test a Python function, than someone who can give me a fancy explanation of how Bayesian optimization works and has experience with TensorFlow, but doesn't seem to grasp the concept of conditional probability or how to sort an array. Beyond the basics, most good companies or teams will quiz you on what you claim you know, not what they think you should know. If you put SVM on your resume, make sure you truly understand SVM and have some experience using it. Also, good companies or teams will test your hands-on experience more so than the depth of your theoretical knowledge.
51,389
What must someone know in statistics and machine learning? [closed]
What a person needs to know is going to depend on a lot of things. I can only answer from my perspective. I've worked as a data analyst for 20 years, working with researchers in the social, behavioral and medical sciences. I say "data analyst" to make clear that I view my job as a practical one: I help people figure out what their data means. (In an ideal situation, I also help them figure out what data they need, but ... the world is not ideal). What my clients need to know is to consult me (or someone else) early and often. I find it fascinating but rather odd that scientists with advanced degrees and a lot of experience in their fields will simultaneously (1) say that statistics is hard, (2) admit that they have little training or expertise in it, and (3) do it on their own anyway. No. This is the wrong way to proceed. And if this question is viewed as an attempt to figure out what a researcher needs to know, then I think the question is rather wrong-headed. It's like asking how much medicine you need to know in order to visit the doctor. What I need to know is: (1) when I am out of my depth - no one knows all this stuff, certainly I don't; (2) a whole lot about models, methods and such - when each can be applied, what each does, how it goes wrong, alternatives, etc. - and also how to run these models in some statistical package, read the results, detect bugs, etc. (I use SAS and R, but other choices are fine); (3) how to ask questions - a good data analyst asks a lot of questions; and (4) enough matrix algebra and calculus to at least read articles - but that's not all that much. Others will say that this is inadequate and that I should really have a full grasp of (some list of advanced math here). All I can say is that I have not felt the lack, nor have my clients. True, I cannot invent new methods, but 1) I have rarely felt the need - there are a huge variety of existing methods - and 2) most of my clients have a hard enough time recognizing that you can't always use OLS regression; trying to get them to accept a totally new method would be nearly impossible and, if they did accept it, their PHBs would not. (PHB = pointy haired boss, a la Dilbert, and could be a committee chair, a journal editor, a colleague or an actual boss).
51,390
What must someone know in statistics and machine learning? [closed]
For a person doing work in statistics, or work associated with statistics, there is not really much clear must-know knowledge. Obviously, people should be able to do simple and ordinary things, e.g. simple arithmetic. But beyond that, statistics and machine learning are enormously broad and multidisciplinary. You might have a person doing only work writing SQL and managing databases, or a person collecting data for the state, e.g. stuff like Eurostat (statistics is etymologically derived from 'state'). Should those people know the Kolmogorov axioms, or should they know all types of Pearson distributions? It is a bit like asking what tools a construction worker must be able to work with. An electrician is not like a carpenter, and a plumber is not like a plasterer. There is very little that they all must know, and it will only be a fraction of their abilities.
51,391
What is the probability that a person will die on their birthday?
Sorry, a bit new here so please excuse me if this doesn't help too much. The US Social Security Administration keeps records of births and deaths and has their information available for purchase (apparently for a hefty price): Here However I found a source that claims to have bought it and is offering it for free (as well as offering the data sorted by date on the site): Here I'm assuming you can just use that as your sample, go through all the data with a script, and find how many people actually die on their birthday. I would do that myself but I have 20 min left to download (they're about 1.5GB) so I'll try to get back to you on the statistics myself if I find the time to write up a script. Of course the United States can't represent the entire world's population, but it is a good start. I'm assuming you will see a higher rate of deaths on birthdays because of "first world problems", since we're using the United States, and I think the effect would be less visible across the world... Update - Numbers :D I've run through the Social Security Death Master File from the free source, so there's no way of knowing if the information is valid. However, given that the files are ~3 gigabytes each and that there's no reason for anyone to spoof these kinds of files... I'll assume they are valid. You can see the code that I used to run through it here: http://pastebin.com/9wUFuvpN It's written in C#; it reads through the lines of the death index one by one and then parses the dates using regex. I assumed that the file was basically this format: `(Social Security Number)(First Name) (LastName) (Middle Name) (Some Letter)(MM-DD-YYYY of Death)(MM-DD-YYYY Of Birth)` I had regex just pick out the last part for the dates of birth/death, check if any of the fields are just 0 (which I'm assuming means that Social Security couldn't get a valid month/date for the record), and discard the 0's. Then it checks if the day and month of birth match the day and month of death and adds that to the died-on-birthday count. It adds all records that aren't 0's to the death count. It outputs the results in this format: Deaths On Birthday/Total Deaths Lines Looked Through - People With a 0 in any of their record It'd be great if someone could double check that code, as I've found quite a few errors I've made before and could only tell because my results made no statistical sense. Here is the console output: Doing some math... File 1 had 44665 Deaths on a Birthday out of 14879058 Deaths in Total File 2 had 47060 Deaths on a Birthday out of 15278724 Deaths in Total File 3 had 49289 Deaths on a Birthday out of 15374049 Deaths in Total In total we have 141014 Deaths on a Birthday out of 45531831. So we have a ~0.3097% chance of dying on a birthday, while 1/365 would lead us to believe there is only a ~0.27397% chance of dying on a birthday. That is indeed a 13% increase in the chance of death on a birthday over 1/365. Of course this sample is only for Americans and only has 45 million records; I'm sure the organizations who originally published their papers had access to much more reliable and larger death indexes. However, I think that it is indeed valid that death on a birthday is more likely than death on any other day. Here's a Time article citing jumps in reasons for death on birthdays: Article Edit 2: @cbeleites pointed out that I forgot to account for same-day deaths, which would be a huge factor in increasing deaths on birthdays.
Strictly speaking my data is still valid, but I did not throw out records where a person died on the same day they were born. It's interesting that my results were not affected too heavily by this error, so it seems that these records don't include death on the first day. I'll look into it later. I'm thinking there would be very interesting statistics I could look for, such as deaths by day of the month, and make a heatmap of some sort. I'll probably try to do that sometime...
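For readers who would rather do the same tabulation in R than C#, here is a hypothetical sketch of the core comparison; the dates are simulated stand-ins for the parsed birth and death fields, and 1/365.25 is only the naive benchmark that ignores leap days and seasonality.

```r
# Share of deaths whose month-day matches the month-day of birth (made-up dates).
set.seed(8)
dob <- as.Date("1940-01-01") + sample(0:20000, 1e5, replace = TRUE)   # birth dates
dod <- dob + sample(10000:30000, 1e5, replace = TRUE)                 # death dates

same_md <- format(dob, "%m-%d") == format(dod, "%m-%d")   # died on a birthday?
mean(same_md)     # observed share
1 / 365.25        # naive benchmark
```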
51,392
What is the probability that a person will die on their birthday?
We can be even more precise than @Mike Shi's data: the most dangerous of all birthdays is the very first one, the day of birth itself. The first-day mortality rates reported there are around 0.2 % for industrialized countries and 0.8 % on average across all countries. This means that the risk of dying on the day of birth is at least as high as the risk of dying on any single one of the following birthdays*. * I think it is a safe assumption that first-day deaths do not appear in @Mike Shi's file, as US first-day mortality is reported to be 0.3 % (another source: 0.26 %), which is already almost the total birthday death rate in the Social Security file. So either babies who die on the day of birth do not get a Social Security number, or dying on a birthday beyond the first year of life is extremely improbable. Side note: there are other days, such as Christmas and New Year's Eve, which are known to have higher-than-average mortality rates as well.
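To make the footnote's arithmetic explicit, here is a tiny R sketch (my addition; it only restates the figures quoted in this answer and in @Mike Shi's, rather than re-deriving anything from raw data):

# Figures quoted above
first_day_us      <- 0.003                 # reported US first-day mortality (~0.3 %)
birthday_rate_ssa <- 141014 / 45531831     # total birthday-death rate in the SSA file

round(c(first_day_us = first_day_us, birthday_rate_ssa = birthday_rate_ssa), 5)
# The two numbers are nearly equal, which is why the footnote concludes that either
# first-day deaths are missing from the Social Security file, or deaths on later
# birthdays are far rarer than the file's overall birthday rate would suggest.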
51,393
What is the probability that a person will die on their birthday?
Here's an argument for why the probability of death on one's birthday may be higher than on other days: birthdays are emotionally charged days, and people tend to celebrate them in some way. So there is an excess of factors, relative to the person's usual lifestyle, that increase biological stress: excess emotion, excess drinking, excess eating, excess dancing, excess bungee jumping, and so on. Statistically speaking, this increases the chance of dying on a birthday, since it aggravates any health issues the person may have and exposes the person to situations and risks with which they are inexperienced.
51,394
What is the probability that a person will die on their birthday?
The probability that a newborn dies within a year can be found in life tables. For example, you can check the period life tables in the Human Mortality Database and look at the column $q_x$ for $x=0$. This is not exactly what you want, but it will give you an idea.
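As an illustration, here is a minimal R sketch (my addition; it assumes you have downloaded a 1x1 period life table from the Human Mortality Database as a plain-text file, and the file name, header lines, and column names below are assumptions to check against your actual download):

# HMD period life tables typically have a title line and a blank line before the
# column header (Year, Age, mx, qx, ax, lx, dx, Lx, Tx, ex)
lt <- read.table("lifetable.txt", header = TRUE, skip = 2,
                 stringsAsFactors = FALSE)

# q_x is the probability of dying between exact ages x and x+1, so the rows with
# Age == "0" give, for each calendar year, the probability a newborn dies in its first year
q0 <- lt$qx[lt$Age == "0"]
head(data.frame(Year = lt$Year[lt$Age == "0"], q0 = q0))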
51,395
What is the probability that a person will die on their birthday?
In addition to the other excellent answers, there is a point none of them discussed: birthdays are not uniformly distributed over the year, and neither are death days. Together, those two facts mean that the "statistical" probability is not exactly 1/365.

To get an idea of this effect, let us first assume both are almost uniform, with only 29 February having a probability 1/4 that of the other days. That gives
$$ 365 p + \frac14 p = 1 $$
so $p = 0.002737851$. That leads to a probability of birth and death on the same day equal to $365\cdot p^2 + (p/4)^2 = 0.002736445 > 0.00273224 = \frac1{366}$, which is the minimum possible value (with 366 days).

With a bit more generality, let $p_i, i=1, \dotsc, n$ be the birthday probabilities and $q_i, i=1,\dotsc,n$ the death-day probabilities, for a year with $n$ days. Then, if birthday and death day for a person are statistically independent, we find that
$$ \DeclareMathOperator{\P}{\mathbb{P}} \P(\text{Birth and death on same day}) = \sum_{i=1}^n p_i q_i $$
so if $p_i=q_i$ then that is $\sum_i p_i^2$. That is a quantity known (in biology) as Simpson's index of (bio)diversity. Its inverse can then be taken as an "effective number of days (in a year)". The minimum value of $\sum_i p_i^2$ is $1/n$; to see that, use convexity.

But assuming $p_i=q_i$ is quite a stretch, so let us first look at some data: birthday probabilities for Norway calculated from data from ssb.no. They are clearly not uniform; the high outlier is 1 July, which is not real but is caused by immigrants without a documented birthday being registered on that date. There is one maximum in spring, around the beginning of April, and another in autumn, in September. The Simpson index calculated from this is $0.002750224$, and its inverse is $363.6067$, so the "effective number of birthdays" is about 363 and a half, rather close to 366. So the non-uniformity of birthdays may not be too important.

It is more difficult to find data on death days, but I found a paper (in Norwegian, in the official journal of the Norwegian Medical Association) reporting a death rate around 12 % higher in winter than in summer. They also report a slightly increased risk of death on Mondays! In fact, international comparisons reported in that paper show that winter excess mortality is lowest in Scandinavia; in countries like Ireland or England it is about twice as large. That might be surprising; it may have to do with us Scandinavians having warmer and better insulated houses.

From that we can reconstruct a death-day distribution. Taking the winter half-year as November through April, we can calculate
$$ p_w = 1.12 p_s \\ (182 \cdot 1.12 + 184) p_s = 1 $$
leading to $p_s=0.002578383$, $p_w= 0.002887789$, and finally $\sum_i p_i q_i = 0.00273151$, whose inverse, the "effective number of days", is 366.1, pretty close to 366! The slight anticorrelation ($\rho(p_i,q_i)=-0.06$) seems to offset the non-uniformity in such a way that we could just as well assume uniformity (and the same distribution for birthdays and death days). That is quite interesting.

EDIT: Here is a published paper on nonuniformity in the birthday problem.
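Here is a small R sketch of those calculations (my addition; it uses only the stylized assumptions stated above, a quarter-weight 29 February on the birthday side and the 12 % winter/summer split on the death side, since the actual ssb.no birthday data are not reproduced here):

# Leap-day-only non-uniformity: 365 ordinary days plus 29 February at 1/4 weight
p <- 1 / 365.25                          # solves 365*p + p/4 = 1
same_day <- 365 * p^2 + (p / 4)^2
c(p = p, same_day = same_day, minimum = 1 / 366)

# Death-day distribution: 182 winter days (Nov-Apr) with a 12% higher daily rate
p_s <- 1 / (182 * 1.12 + 184)   # summer daily death probability
p_w <- 1.12 * p_s               # winter daily death probability
q   <- c(rep(p_w, 182), rep(p_s, 184))

# Uniform placeholder for the birthday distribution on the same 366-day grid
# (the answer's 366.1 comes from using the real, non-uniform Norwegian birthday data)
p_birth <- rep(1 / 366, 366)
p_match <- sum(p_birth * q)
c(p_match = p_match, effective_days = 1 / p_match)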
51,396
What is the probability that a person will die on their birthday?
1 out of 365 would be the correct odds, because you are guaranteed to die on exactly one day out of a 365-day year; assuming each day of the year is equally likely, the chance is therefore 1 out of 365.
51,397
Are all continuous random variables normally distributed?
No. Many real-life variables have distributions that are better described by other families. t-distributions (heavier tails) are common, as are various skewed distributions; for example, many real measurements must be non-negative, i.e. greater than or equal to zero, but can have a long tail of high values. Quite a lot of real-world data consists of counts, or similar integer data, which is often better described by a Poisson distribution. In my personal experience, in epidemiology, biomedicine, and sociology, genuinely 'normal' distributions, that is, real data best described by a normal distribution, are uncommon, but it does depend on the field you work in and exactly what data you are looking at.
51,398
Are all continuous random variables normally distributed?
No. Continuous distributions are only one subset of all probability distributions, and the normal is only one of many continuous distributions; there are whole books containing nothing but such things. Some of the non-normal continuous distributions introduced to new students of statistics include the continuous uniform distribution, Student's t-distribution, and the exponential distribution. The normal/Gaussian distribution is important because of the Central Limit Theorem (CLT), which shows that in very many situations the sum of independent random variables will tend to have a normal distribution, regardless of the constituent variables' original distributions. This can be useful for performing certain kinds of commonly used statistical inference, which probably contributes to the frequency with which one encounters the normal/Gaussian distribution. Student's t-distribution, mentioned above, gives some formalism to the "tend" in the CLT's "will tend to have a normal distribution", and is therefore also useful in these commonly used forms of statistical inference.
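A minimal simulation sketch of the CLT claim in R (my addition; exponential summands and the particular sample sizes are arbitrary choices):

set.seed(1)
# Sampling distribution of the mean of n exponential(1) draws; as n grows,
# it looks increasingly like the normal curve overlaid on each histogram
clt_means <- function(n, reps = 5000) replicate(reps, mean(rexp(n, rate = 1)))

op <- par(mfrow = c(1, 3))
for (n in c(2, 10, 50)) {
  hist(clt_means(n), breaks = 50, freq = FALSE,
       main = paste("n =", n), xlab = "sample mean")
  curve(dnorm(x, mean = 1, sd = 1 / sqrt(n)), add = TRUE, lwd = 2)
}
par(op)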
51,399
Are all continuous random variables normally distributed?
Not necessarily. The shape of the distribution is determined by the continuous random variable's PDF, which is not necessarily Gaussian. Some counterexamples include Student's t-distribution and the Laplace distribution.
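As a quick illustration of the Laplace counterexample in R (my sketch; the Laplace density is written out by hand since base R does not provide one):

# Laplace (double-exponential) density
dlaplace <- function(x, mu = 0, b = 1) exp(-abs(x - mu) / b) / (2 * b)

# Both curves below have mean 0 and variance 1 (Laplace variance is 2*b^2,
# so b = 1/sqrt(2)), yet only one of them is the normal density
curve(dnorm(x), from = -4, to = 4, lwd = 2, ylab = "density")
curve(dlaplace(x, b = 1 / sqrt(2)), add = TRUE, lty = 2, lwd = 2)
legend("topright", c("normal", "Laplace"), lty = 1:2, lwd = 2)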
51,400
Is R output reliable (specifically the IRT package ltm)? [duplicate]
A good way to evaluate the quality of software is to perform simulations with known population parameters and observe how well those values can be recovered. Better yet, comparing parameter recovery against other known software is also a good idea, since then you have a general sense of what is happening if peculiarities appear in the results.

In my experience, ltm seems to work pretty well with dichotomous datasets but doesn't always behave nicely with polytomous models. For instance, consider these two randomly simulated datasets, estimated with both ltm (version 1.0-0) and mirt (version 1.17.1):

library(ltm)
library(mirt)

# this seed agrees just fine...
set.seed(1)
a <- matrix(rlnorm(20, .2, .3))        # population slopes for 20 items
d <- matrix(rnorm(20*4, 0, 2), 20)     # 4 ordered intercepts per item (5 categories)
d <- t(apply(d, 1, sort, decreasing=TRUE))
dat <- simdata(a, d, 500, itemtype = 'graded')

ltmmod <- grm(dat, IRT.param=FALSE)
logLik(ltmmod)
'log Lik.' -11827.25 (df=100)

mirtmod <- mirt(dat, 1)
Iteration: 17, Log-Lik: -11826.036, Max-Change: 0.00006
mirtmod@logLik
[1] -11826.04

The first seed seems to be estimated well enough by both packages, which rest at similar maximum-likelihood locations. However, for other random seeds:

# this one is way different
set.seed(1234)
a <- matrix(rlnorm(20, .2, .3))
d <- matrix(rnorm(20*4, 0, 2), 20)
d <- t(apply(d, 1, sort, decreasing=TRUE))
dat <- simdata(a, d, 500, itemtype = 'graded')

ltmmod <- grm(dat, IRT.param=FALSE)
logLik(ltmmod)
'log Lik.' -13462.7 (df=100)

mirtmod <- mirt(dat, 1)
Iteration: 18, Log-Lik: -11913.724, Max-Change: 0.00002
mirtmod@logLik
[1] -11913.72

Clearly, ltm is getting hung up somewhere when estimating these models, perhaps stuck in local optima. How often this occurs, and what it means for inferring population parameters, can be checked with a quick simulation:

library(SimDesign)
# SimFunctions(summarise=FALSE)

mbias <- function(sample, pop) mean(sample - pop)
mRMSE <- function(sample, pop) sqrt(mean((sample - pop)^2))

Design <- data.frame(N=500)

#-------------------------------------------------------------------
Generate <- function(condition, fixed_objects = NULL) {
    # keep generating until every item shows all 5 response categories
    while(TRUE){
        a <- matrix(rlnorm(20, .2, .3))
        d <- matrix(rnorm(20*4, 0, 2), 20)
        d <- t(apply(d, 1, sort, decreasing=TRUE))
        dat <- simdata(a, d, condition$N, itemtype = 'graded')
        ncats <- apply(dat, 2, function(x) length(unique(x)))
        if(all(ncats == 5)) break
    }
    ret <- list(data=dat, a=a, d=d)
    ret
}

Analyse <- function(condition, dat, fixed_objects = NULL) {
    Attach(dat)
    ltmmod <- grm(data, IRT.param=FALSE)
    mirtmod <- mirt(data, 1, verbose = FALSE)
    if(ltmmod$convergence != 0) stop('ltm failed to converge')
    if(!extract.mirt(mirtmod, 'converged')) stop('mirt failed to converge')

    # collect slope and intercept estimates from both packages
    cfs <- coef(ltmmod)
    ltm_as <- cfs[,'beta', drop=FALSE]
    ltm_ds <- -cfs[,1:4]
    mv <- mod2values(mirtmod)
    mirt_as <- mv$value[mv$name == 'a1']
    mirt_ds <- matrix(mv$value[mv$name %in% c('d1', 'd2', 'd3', 'd4')], 20, byrow=TRUE)

    ret <- data.frame(ltm_logLik = logLik(ltmmod), mirt_logLik = logLik(mirtmod),
                      ltm_bias_a = mbias(ltm_as, a), mirt_bias_a = mbias(mirt_as, a),
                      ltm_bias_d = mbias(ltm_ds, d), mirt_bias_d = mbias(mirt_ds, d),
                      ltm_RMSE_a = mRMSE(ltm_as, a), mirt_RMSE_a = mRMSE(mirt_as, a),
                      ltm_RMSE_d = mRMSE(ltm_ds, d), mirt_RMSE_d = mRMSE(mirt_ds, d))
    ret
}

#-------------------------------------------------------------------
results <- runSimulation(design=Design, replications=100, generate=Generate,
                         analyse=Analyse, packages=c('mirt', 'ltm'), parallel=TRUE)

The results object is a data.frame containing all the replications. Summarising the results gives

# post analysis
round(apply(results[,3:10], 2, function(x) c(mean=mean(x), sd=sd(x), max=max(x))), 3)

     ltm_bias_a mirt_bias_a ltm_bias_d mirt_bias_d
mean      0.051       0.010     -0.018       0.000
sd        0.182       0.044      0.168       0.063
max       0.923       0.138      0.442       0.185
     ltm_RMSE_a mirt_RMSE_a ltm_RMSE_d mirt_RMSE_d
mean      0.217       0.128      0.262       0.173
sd        0.228       0.027      0.234       0.033
max       1.118       0.202      1.261       0.286

From this very small simulation we can see that ltm can potentially be very far off from the population parameters (this is especially apparent in the max statistics above). The same shows up in the log-likelihoods of the two fits, which should in theory be very close:

plot(results$mirt_logLik, results$ltm_logLik,
     ylab = 'ltm log-lik', xlab = 'mirt log-lik')

For the most part the log-likelihoods are comparable, though ltm occasionally settled on substantially lower log-likelihoods (worse optima) than mirt. This problem has been discussed elsewhere as well (https://groups.google.com/forum/#!topic/mirt-package/uK3W4XAMQ9Q). Because this appears to happen so often with the ltm::grm() function, I would highly recommend re-estimating every model you fit with different starting values to see whether the ML estimates are consistent, or choosing alternative IRT software that does not have this property.
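As a sketch of that last recommendation (my addition, not the answer's code; it assumes grm()'s start.val argument accepts the character string 'random', so check ?grm for the version of ltm you have installed):

# Refit the same graded response model several times from random starting values
# and keep the solution with the best (highest) log-likelihood
refit_grm <- function(data, nfits = 5) {
    fits <- vector("list", nfits)
    fits[[1]] <- grm(data, IRT.param = FALSE)   # default starting values
    for (i in 2:nfits)
        fits[[i]] <- grm(data, IRT.param = FALSE, start.val = "random")  # assumed option
    logliks <- sapply(fits, logLik)
    print(round(logliks, 2))   # widely varying values are a sign of local optima
    fits[[which.max(logliks)]]
}

# e.g. best <- refit_grm(dat)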