idx | question | answer
---|---|---
55,701 | Derivation of the distribution of $\hat{\phi}=[\hat{\phi}_1, \cdots, \hat{\phi}_p]$ in AR(p) models | It doesn't have to be thru Yule-Walker. You can use the OLS approach, namely,
$$X_t = \pmb X_{t-1}^T \pmb \phi + n_t , \qquad t = p+1,p+2,\ldots$$
where $\pmb X_{t-1} = [X_{t-1} \ldots X_{t-p}]^T$ and $\pmb \phi = [\phi_1 \ldots \phi_p]^T$.
Stacking all such equations, we get
$$\pmb y = \pmb X \pmb \phi + \pmb n$$
where $\pmb y = [X_{p+1} \ldots X_N]^T$ and $$\pmb X = \begin{bmatrix}
\pmb X_{p}^T\\
\pmb X_{p+1}^T\\
\vdots \\
\pmb X_{N-1}^T
\end{bmatrix} $$
Using OLS, we have
$$\hat{\pmb{\phi}} = (\pmb X^T \pmb X)^{-1} \pmb X^T \pmb y = (\pmb X^T \pmb X)^{-1} \pmb X^T (\pmb X \pmb \phi + \pmb n) = \pmb \phi + (\pmb X^T \pmb X)^{-1} \pmb X^T \pmb n $$
Notice that from the above equation
$$\mathsf{E}( \hat{\pmb{\phi}} ) = \pmb \phi +(\pmb X^T \pmb X)^{-1} \underbrace{\mathsf{E}( \pmb X^T \pmb n)}_{0} = \pmb \phi$$
and
$$\mathsf{var}(\hat{\pmb{\phi}}) = \mathsf{E}( (\hat{\pmb{\phi}} - \pmb\phi)(\hat{\pmb{\phi}} - \pmb\phi)^T ) =(\pmb X^T \pmb X)^{-1} \pmb X^T \mathsf{E} (\pmb n \pmb n^T) \pmb X (\pmb X^T \pmb X)^{-1} $$
Assuming white noise with the same variance $\sigma_\epsilon^2$ across time, we get
$$\mathsf{var}(\hat{\pmb{\phi}}) = \mathsf{E}( (\hat{\pmb{\phi}} - \pmb\phi)(\hat{\pmb{\phi}} - \pmb\phi)^T ) =(\pmb X^T \pmb X)^{-1} \pmb X^T \sigma_\epsilon^2 \pmb I \pmb X (\pmb X^T \pmb X)^{-1} = \sigma_\epsilon^2 (\pmb X^T \pmb X)^{-1}(\pmb X^T \pmb X)(\pmb X^T \pmb X)^{-1}=\sigma_\epsilon^2 (\pmb X^T \pmb X)^{-1}$$
Now assuming your AR(p) is strictly stationary and ergodic and that $\mathsf{E}(X_t^4)$ exists, then the central limit theorem applies
$$\sqrt{n}( \hat{\pmb{\phi}} - \pmb{\phi}) \longrightarrow N(0,\mathsf{var}(\hat{\pmb{\phi}}))$$
Note that $(\pmb X^T \pmb X)^{-1} = \Gamma^{-1}$
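For concreteness, here is a minimal R sketch of the OLS route above; the AR(2) coefficients, sample size and seed are made up for illustration:

set.seed(42)
phi   <- c(0.5, -0.3)                                  # illustrative AR(2) coefficients
n_obs <- 500
x     <- as.numeric(arima.sim(list(ar = phi), n = n_obs))

p <- 2
X <- cbind(x[p:(n_obs - 1)], x[(p - 1):(n_obs - 2)])   # columns: X_{t-1}, X_{t-2}
y <- x[(p + 1):n_obs]

fit <- lm(y ~ X - 1)     # no intercept, matching the stacked equations
coef(fit)                # OLS estimate of phi
sqrt(diag(vcov(fit)))    # standard errors, i.e. sqrt of diag of sigma_eps^2 (X'X)^{-1}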
55,702 | How to interpret the basics of a logistic regression calibration plot please? | The ticks across the x-axis represent the frequency distribution (may be called a rug plot) of the predicted probabilities. This is a way to see where there is sparsity in your predictions and where there is a relative abundance of predictions in a given area of predicted probabilities.
The "Apparent" line is essentially the in-sample calibration.
The "Ideal" line represents perfect prediction as the predicted probabilities equal the observed probabilities.
The "Bias Corrected" line is derived via a resampling procedure to help add "uncertainty" to the calibration plot to get an idea of how this might perform "out-of-sample" and adjusts for "optimistic" (better than actual) calibration that is really an artifact of fitting a model to the data at hand. This is the line we want to look at to get an idea about generalization (until we have new data to try the model on).
When either of the two lines is above the "Ideal" line, this tells us the model underpredicts in that range of predicted probabilities. When either line is below the "Ideal" line, the model overpredicts in that range of predicted probabilities.
Applying this to your specific plot, it appears most of the predicted probabilities are in the higher end (per the rug plot). The model overall appears to be reasonably well calibrated based on the Bias-Corrected line closely following the Ideal line; there is some underprediction at lower predicted probabilities, because the Bias-Corrected line is above the Ideal line below roughly 0.3 predicted probability.
The mean absolute error is the "average" absolute difference (disregarding whether the error is positive or negative) between predicted probability and actual probability. Ideally, we want this to be small (0 would be perfect, indicating no error). This seems small in your plot, but how small is small enough may be situation dependent. The other measure that Frank Harrell's program returns is the 90th percentile absolute error (90% of the errors are smaller than this number); this should be looked at as well.
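For readers who want to reproduce such a plot, here is a hedged R sketch using Frank Harrell's rms package; the outcome y and predictors x1, x2 are simulated placeholders:

library(rms)
set.seed(1)
d <- data.frame(x1 = rnorm(300), x2 = rnorm(300))
d$y <- rbinom(300, 1, plogis(d$x1))
f   <- lrm(y ~ x1 + x2, data = d, x = TRUE, y = TRUE)   # x=TRUE, y=TRUE are needed by calibrate()
cal <- calibrate(f, method = "boot", B = 200)           # bootstrap resampling gives the bias-corrected curve
plot(cal)   # draws the Apparent, Bias-corrected and Ideal lines plus the rug, and prints the
            # mean absolute error and the 0.9 quantile of absolute error discussed above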
55,703 | Using Random Forest variable importance for feature selection | You are entirely correct!
A couple of months ago I was in the exact same position when justifying a different feature selection approach in front of my supervisors. I will cite the sentence I used in my thesis, although it has not been published yet.
Since the ordering of the variables depends on all samples, the
selection step is performed using information of all samples and
thus, the OOB error of the subsequent model no longer has the
properties of an independent test set as it is not independent from
the previous selection step.
- Marc H.
For references, see section 4.1 of 'A new variable selection approach using Random Forests' by Hapfelmeier and Ulm, or 'Application of Breiman’s Random Forest to Modeling Structure-Activity Relationships of Pharmaceutical Molecules' by Svetnik et al., who address this issue in the context of forward-/backward feature selection.
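A small R simulation makes the bias concrete: with pure-noise predictors, selecting variables by importance computed on all samples and then trusting the refitted model's OOB error gives an optimistic picture (randomForest is assumed; the sizes are arbitrary):

library(randomForest)
set.seed(1)
n <- 100; p <- 200
x <- matrix(rnorm(n * p), n, p)        # pure noise: no predictor is related to the label
y <- factor(rep(c(0, 1), each = n / 2))

rf_all  <- randomForest(x, y)
oob_all <- rf_all$err.rate[rf_all$ntree, "OOB"]     # close to 0.5, as expected for noise

top     <- order(importance(rf_all)[, 1], decreasing = TRUE)[1:10]   # selection uses ALL samples
rf_top  <- randomForest(x[, top, drop = FALSE], y)
oob_top <- rf_top$err.rate[rf_top$ntree, "OOB"]     # typically well below 0.5 despite pure noise
c(all_vars = oob_all, selected_vars = oob_top)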
55,704 | IPTW for multiple treatments | You'll want to check out McCaffrey et al. (2013) for advice on this, not Austin & Stuart (2015), which is for binary treatments only. It's not clear to me which causal estimand you want, so I'll explain how to get weights for both.
The ATE for any pair of treatments is the effect of moving everyone from one treatment to another. In your example, one ATE would be the effect of moving the entire population from A to B, while another might be the effect of moving the entire population from B to D.
To estimate ATE weights, you take the inverse of the estimated probability of being in the group actually assigned. So, for an individual in group A, their weight would be $w_{ATE,i}=\frac{1}{e_{A,i}}$. More generally, the weights are
$$w_{ATE,i} = \sum_{j=1}^p{\frac{I(Z_i=j)}{e_{j,i}}}$$
where $j$ indexes treatment group, $I(Z_i=j)=1$ if $Z_i=j$ and $0$ otherwise, and $e_{j,i}=P(Z_i=j|X_i)$.
The ATT involves choosing one group to be the "treated" or focal group. Each ATT is a comparison between another treatment group and this focal group for members of the focal groups. If we let group B be the focal group, one ATT is the effect of moving from A to B for those in group B. Another ATT is the effect of moving from D to B for those in group B.
The weights for the focal group are equal to 1, and the weights for the non-focal groups are equal to the probability of being in the focal group divided by the probability of being in the group actually assigned. So,
$$w_{ATT(f),i} = I(Z_i=f)+e_{f,i}\sum_{j \ne f}^p{\frac{I(Z_i=j)}{e_{j,i}}}= e_{f,i} w_{ATE,i}$$
where $f$ is the focal group. So, just as in the binary ATT case, the ATT weights are formed by multiplying the ATE weights by the propensity score for the focal group (i.e., the probability of being in the "treated" group). In the binary ATT case, the focal group is group 1, so the probability of being in the focal group is just the propensity score.
Note that all of these formulas also apply to the binary treatment case.
Using WeightIt in R, you would specify
w.out <- weightit(Treatment ~ X1 + X2 + X3, data = data, estimand = "ATT", focal = "B")
to estimate the ATT weights for B as the focal group using multinomial logistic regression. After checking balance (e.g., using cobalt), you can estimate the outcome model as
fit <- glm(Y ~ relevel(Treatment, "B"), data = data, weights = w.out$weights)
You need to make sure the focal group is the reference level of the treatment variable for the coefficients to be valid ATT estimates.
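To connect the formulas to code, here is a hedged R sketch that computes the weights by hand from multinomial-logit propensity scores; data, Treatment, X1 and X2 are placeholder names matching the WeightIt call above:

library(nnet)
ps_fit <- multinom(Treatment ~ X1 + X2, data = data, trace = FALSE)
ps     <- fitted(ps_fit)          # n x 4 matrix of e_{j,i}, columns in treatment-level order

e_obs  <- ps[cbind(seq_len(nrow(ps)), as.integer(data$Treatment))]  # probability of the treatment actually received
w_ate  <- 1 / e_obs               # ATE weights
w_attB <- ps[, "B"] / e_obs       # ATT weights with B as the focal group; this reduces to 1 for members of B

This is just the formulas above written out; in practice WeightIt handles the estimation, balance checking, and edge cases for you.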
55,705 | Why do anova(type='marginal') and anova(type='III') yield different results on lmer() models? | This is admittedly confusing, but there are a bunch of differences/limitations between anova() and car::Anova() and between lme and lmer fits. tl;dr probably best to use car::Anova() for consistent results across model types.
anova on lme: allows type="sequential" or type="marginal" (only). type="marginal" should be closest to type-3. Returns F-statistics with denominator degrees of freedom (ddf) calculated by "inner-outer" method (group/parameter counting).
Anova on lme: allows type="II" or type="III". Returns chi-square statistics (i.e. infinite denominator df/no finite-size correction) only.
anova on lmer: returns sequential F statistics, with no p-values, unless you have the lmerTest package loaded, in which case you get a choice of type II vs III and a choice of ddf calculations
Anova on lmer: if you fit with REML=TRUE you can specify test="F" and get a choice of ddf calculations.
So reviewing the tests you did:
anova(model_lme, type='marginal'): type-III/marginal F tests, inner-outer ddf
Anova(model_lme,type='III'): similar, but returns chi-square statistics/p-values instead, so the p-values are slightly anti-conservative (e.g. p-value for time effect is 0.0012 for F(42) and $1.49 \times 10^{-5}$ for chi-square)
anova(model_lmer, type='III'): the type argument is ignored, so you get sequential/type-I F values (similar, but for unclear reasons not identical, to anova(model_lme, type='sequential'))
anova(model_lmer, type='marginal'): ditto
Anova(model_lmer,type='III'): type-3, but chi-square: identical to Anova(model_lme, type='III')
If you refit both models with REML, Anova(model_lmer,type="III",test="F") and anova(model_lme,type="marginal") give similar results.
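A minimal reproducible comparison, using the Orthodont data from nlme as a stand-in for the original model (the packages and the formula are assumptions for illustration):

library(nlme); library(lme4); library(lmerTest); library(car)
model_lme  <- lme(distance ~ age * Sex, random = ~ 1 | Subject,
                  data = Orthodont, method = "REML")
model_lmer <- lmer(distance ~ age * Sex + (1 | Subject), data = Orthodont, REML = TRUE)

anova(model_lme, type = "marginal")          # type-III/marginal F tests, inner-outer ddf
Anova(model_lme, type = "III")               # chi-square versions of the same tests
Anova(model_lmer, type = "III", test = "F")  # F tests for the lmer fit, with finite-df correction
anova(model_lmer, type = "III")              # with lmerTest loaded: type-III F, Satterthwaite ddf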
55,706 | Does online data augmentation make sense? | From an optimization standpoint, repetition is nice (we want to optimize the same function). From a modeling standpoint, repetition can risk memorizing the training data without learning anything generalizable. For image data, online augmentation is motivated by observing that we can translate or add noise or otherwise distort an image but retain its key semantic content (so a human can still recognize it). The hypothesis of online augmentation is that the model probably won't see the exact same image twice, so memorization is unlikely, so the model will generalize well.
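A small R sketch of the offline-versus-online distinction, with additive noise standing in for real image distortions (purely illustrative):

set.seed(0)
x_train <- matrix(rnorm(100 * 10), 100, 10)              # stand-in for real inputs
augment <- function(x) x + rnorm(length(x), sd = 0.1)    # stand-in for flips/crops/noise

# Offline: a fixed, enlarged data set, so the same distorted copies recur every epoch
x_static <- rbind(x_train, augment(x_train))

# Online: a fresh distortion is drawn each time a sample is used, so essentially
# no input is ever seen twice
next_batch <- function(idx) augment(x_train[idx, , drop = FALSE])
batch_epoch1 <- next_batch(1:32)
batch_epoch2 <- next_batch(1:32)                         # same indices, different inputs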
55,707 | Is stochastic gradient descent pseudo-stochastic? | Exhausting all $N$ samples before being able to repeat a sample means that the process is not independent. However, the process is still stochastic.
Consider a shuffled deck of cards. You look at the top card and see $\mathsf{A}\spadesuit$ (Ace of Spades), and set it aside. You'll never see another $\mathsf{A}\spadesuit$ in the whole deck. However, you don't know anything about the ordering of the remaining 51 cards, because the deck is shuffled. In this sense, the remainder of the deck still has a random order. The next card could be a $\mathsf{2}\color{red}{\heartsuit}$ or $\mathsf{J}\clubsuit$. You don't know for sure; all you do know is that the next card isn't the Ace of Spades, because you've put the only $\mathsf{A}\spadesuit$ face-up somewhere else.
In the scenario you outline, you're suggesting looking at the top card and then shuffling it into the deck again. This implies that the probability of seeing the $\mathsf{A}\spadesuit$ is independent of the previously-observed cards. Independence of events is an important attribute in probability theory, but it is not required to define a random process.
You might wonder why a person would want to construct mini-batches using the non-independent strategy. That question is answered here: Why do neural network researchers care about epochs?
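In code, the two sampling schemes the card analogy contrasts look like this (a base-R sketch with an arbitrary data-set size):

set.seed(1)
n <- 8
epoch_order <- sample(n)                      # without replacement: each index appears exactly once per epoch
iid_draws   <- sample(n, n, replace = TRUE)   # with replacement: indices can repeat before others appear
epoch_order
iid_draws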
55,708 | How to predict the next number in a series while having additional series of data that might affect it? | Great Question!
The general approach is called an ARMAX model.
The reason for the generality of the approach is that it is important to consider the following possible states of nature, which provide not only complications BUT also opportunities:
The Big Mac price might be predicted better using previous Big Mac prices in conjunction with activity in the two causals.
There might be discernible trends in Big Mac prices due to historical pricing strategy.
The Big Mac price may be related to Burger King prices OR changes in Burger King prices or the history/trends of Burger King prices.
The Big Mac price may be related to inflation, changes in inflation or trends in inflation.
There may be unusual values in the history of Big Mac prices or Burger King prices or inflation that should be adjusted for in order to generate good coefficients. Sometimes unusual values are recording errors.
There may be omitted variables (stochastic in nature) that may be important, such as the price of a Wendy's burger.
There may have been one or more variance changes suggesting the need for some sort of down-weighting to normalize volatile data.
The final model can be expressed as a Polynomial Distributed Lag (PDL) model, otherwise known as an ADL (Autoregressive Distributed Lag) model.
55,709 | How to predict the next number in a series while having additional series of data that might affect it? | One of the possible solutions: Support Vector Regression, or SVR. Using a machine-learning library, the solution will look something like this (pseudocode):
var samples = [[2.5, 0], [2.5, 1], [1.6, 2]];
var targets = [2.2, 2.1, 1.5];
regression.train(samples, targets);
var result = regression.predict([1.8, 3]);
return result;
In this case the result would be 1.41879.
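For an actually runnable version of the pseudocode, here is an R sketch with the e1071 package on the same toy numbers; the exact prediction depends on the kernel and hyperparameters, so it need not equal 1.41879:

library(e1071)
x_train <- matrix(c(2.5, 0,
                    2.5, 1,
                    1.6, 2), ncol = 2, byrow = TRUE)
y_train <- c(2.2, 2.1, 1.5)

fit <- svm(x_train, y_train)                 # eps-regression by default for a numeric response
predict(fit, matrix(c(1.8, 3), ncol = 2))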
55,710 | accuracy and precision in regression vs classification | As you've pointed out, they are not the same, and sometimes refer to wildly different things (i.e., precision is a property of the model in classification, and refers to a measure of variance in regression). Unfortunately, in statistics and, I'm sure, other disciplines, we tend to abuse notation and use the same word to denote different things. You've pointed out a great example.
Precision and Accuracy
Precision in the context of regression, more specifically linear regression and the normal distribution, refers to the precision matrix, where $X$ is a multivariate, normally distributed variable:
$X \sim MVN(\mu, \Sigma), \hspace{4mm} \text{where } \Sigma^{-1} = \text{Precision Matrix} $.
In the context of classification, Precision is also known as PPV (or positive predictive value), and that refers to how "good" your model is at identifying true cases among the predictions.
$PPV = \text{Precision} = \frac{TP}{TP + FP}$
where TP/FP = True and False positives, respectively. Some communities use PPV, and some communities use precision. They mean the same.
Similarly, with Recall and Sensitivity, you measure how good you are at "catching" all the positive cases.
I think the best thing is to follow Bane's lead above, and create a cheat-sheet or notecards with these terms to not confuse them since they can easily get mixed up and be referred to constantly in ambiguous settings.
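As a tiny base-R illustration of the regression/multivariate-normal sense of precision (the covariance values are made up):

Sigma <- matrix(c(1.0, 0.3,
                  0.3, 2.0), 2, 2)   # a covariance matrix
precision_matrix <- solve(Sigma)     # Sigma^{-1}, the precision matrix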
55,711 | accuracy and precision in regression vs classification | Accuracy is the overall accuracy of the model.
It is the ratio of
(total number of correct predictions) / (total population)
Precision (Positive Predictive Value), on the other hand, is the ratio of
(correctly predicted as positive) / (total number of positive predictions)
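Both ratios are easy to compute from a confusion matrix; a base-R sketch with made-up counts:

conf <- matrix(c(40, 10,    # predicted positive: 40 true positives, 10 false positives
                  5, 45),   # predicted negative:  5 false negatives, 45 true negatives
               nrow = 2, byrow = TRUE,
               dimnames = list(predicted = c("pos", "neg"), actual = c("pos", "neg")))

accuracy  <- sum(diag(conf)) / sum(conf)              # (TP + TN) / total = 85/100
precision <- conf["pos", "pos"] / sum(conf["pos", ])  # TP / (TP + FP)   = 40/50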
55,712 | Interpreting GLM with logged variable | Your model implies that the odds of re-offending are given by
$$\frac{p}{1-p} = {\rm precon}^{\beta_1}\times {\rm age}^{\beta_2}$$
where $\beta_1$ and $\beta_2$ are the regression coefficients and $p$ is the probability of re-offending.
BTW, I notice that you removed the intercept from the linear predictor by adding ~ -1 to the formula.
I don't agree with the idea of removing the intercept from a model like this just because it happens to be close to zero.
IMO the intercept is a fundamental component of a regression model like this one and should not be removed even if it is not significantly different from zero. The value of the intercept depends on arbitrary things such as how you code your variables. For example, suppose you decided to measure age in months rather than in years.
If the model included an intercept, then the results would remain unchanged.
The regression coefficients for precon and age would be unchanged, as would their significance, but the intercept would change to absorb the difference between months and years as measurement scales.
In other words, inference is invariant to the measurement scale with the intercept in the model.
With no intercept in the model, the two regression models, one with age in years and the other with age in months, become incomparable, and the model with age in months would likely fit very poorly indeed.
In other words, the inference would no longer be invariant with regard to the measurement scale.
Forcing the intercept to be zero makes the model too special and of very limited application, and just basically not believable.
With an intercept, the model would become
$$\frac{p}{1-p} = \alpha \times {\rm precon}^{\beta_1}\times {\rm age}^{\beta_2}$$
where $\alpha$ is $\exp$ of the intercept.
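A hedged R sketch of fitting the model and reading off the pieces of that equation; the data are simulated and df, reoffend, precon and age are placeholder names, with the intercept kept in as recommended above:

set.seed(2)
df <- data.frame(precon = rpois(500, 3) + 1, age = runif(500, 18, 60))
df$reoffend <- rbinom(500, 1, plogis(-1 + 0.4 * log(df$precon) - 0.3 * log(df$age)))

fit <- glm(reoffend ~ log(precon) + log(age), family = binomial, data = df)

b     <- coef(fit)
alpha <- exp(b["(Intercept)"])   # the alpha in  odds = alpha * precon^beta1 * age^beta2
beta1 <- b["log(precon)"]        # exponent on precon
beta2 <- b["log(age)"]           # exponent on age
2^beta2                          # factor by which the odds change when age doubles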
55,713 | Creating thinned models in during Dropout process | Assume we have n neurons and each neuron has some probability of being disabled.
Situation 0: zero neurons remain, n neurons are disabled, C(n,0)
Situation 1: only one neuron remains, n-1 neurons are disabled, C(n,1)
Situation 2: only two neurons remain, n-2 neurons are disabled, C(n,2)
...
Situation n: n neurons remain, 0 neurons are disabled, C(n,n)
so C(n,0)+C(n,1)+C(n,2)+...+ C(n,n)=2^n
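A quick base-R check of the counting argument, enumerating every keep/drop pattern for a small n:

n <- 4
masks <- expand.grid(rep(list(c(0, 1)), n))   # every possible keep/drop pattern over n neurons
nrow(masks)                                    # 2^4 = 16
sum(choose(n, 0:n))                            # the same count, summing C(n,0) + ... + C(n,n)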
55,714 | Creating thinned models in during Dropout process | The statement is a bit of an oversimplification, but the idea is that, assuming we have $n$ nodes and each of these nodes might be "dropped", we have $2^n$ possible thinned neural networks. Obviously dropping out an entire layer would alter the whole structure of the network, but the idea is straightforward: we ignore the activation/information from certain randomly selected neurons and thus encourage redundancy learning and discourage over-fitting on very specific features.
The same idea has also been employed in Gradient Boosting Machines where instead of "ignoring neurons" we "ignore trees" at random (see Rashmi & Gilad-Bachrach (2015) DART: Dropouts meet Multiple Additive Regression Trees on that matter).
Minor edit: I just saw Djib2011's answer. (+1) He/she specifically shows why the statement is somewhat over-simplifying. If we assume that we can drop any (or all, or none) of the neurons, we have $2^n$ possible networks.
55,715 | Creating thinned models in during Dropout process | I too haven't understood their reasoning, I always assumed it was a typo or something...
The way I see it, if we have $n$ hidden units in a neural network with a single hidden layer and we apply dropout keeping $r$ of those, we'll have:
$$
\frac{n!}{r! \cdot (n-r)!}
$$
possible combinations (not $2^n$ as the authors state).
Example:
Assume a simple fully connected neural network with a single hidden layer with 4 neurons. This means the hidden layer will have 4 outputs $h_1, h_2, h_3, h_4$.
Now, you want to apply dropout to this layer with a 0.5 probability (i.e. half of the outputs will be dropped).
Since 2 out of the 4 outputs will be dropped, at each training iteration we'll have one of the following possibilities:
$h_1, h_2$
$h_1, h_3$
$h_1, h_4$
$h_2, h_3$
$h_2, h_4$
$h_3, h_4$
or by applying the formula:
$$
\frac{4!}{2! \cdot (4-2)!} = \frac{24}{2 \cdot 2} = 6
$$
55,716 | Creating thinned models in during Dropout process | This is quite simple. It can be thought of as the task of getting the number of subsets of one set.
55,717 | Is group lasso equivalent to ridge regression when there is 1 group | To see that group LASSO gives a similar solution to ridge regression up to a square in the penalty function, you need to look at the subgradient conditions that characterize the solution of the group LASSO estimator. The best reference for this purpose, I think, is http://statweb.stanford.edu/~tibs/ftp/sparse-grlasso.pdf
These conditions are equation (4) and (5) in the paper, which are
$$ \|X^{\top}_l (y - \sum_{k\neq l} X_k \hat\beta_k)\| < \lambda \qquad \text{ (1)} \qquad \text{gives you condition when } \hat\beta_l=0 $$
and
$$ \hat\beta_l = \left(X^{\top}_l X_l + \frac{\lambda}{\|\hat\beta_l \|} I \right)^{-1} X^{\top}_l r \qquad (2) \qquad \text{gives solution if } \hat\beta_l\neq0$$
where
$$ r = y - \sum_{k\neq l} X_k \hat\beta_k$$
Here $l$ stands for a group index, and $X$ can be partitioned into non-overlapping groups. If you have a single group, these conditions boil down to
$$ \|X^{\top} (y - X \hat\beta)\| < \lambda \qquad \text{when } \hat\beta=0$$
and
$$ \hat\beta = \left(X^{\top} X + \frac{\lambda}{\|\hat\beta \|} I \right)^{-1} X^{\top} y \qquad \text{otherwise }$$
We can now clearly see why group LASSO with a single group is, in fact, ridge regression with the weighted penalty term. The easiest way to solve group LASSO with a single group would be to use efficient implementations of group LASSO in whatever software you use setting the group index accordingly. Else, you could iterate the last equation a few times until the coefficient vector does not change too much. Hope this helps.
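The "iterate the last equation" suggestion can be written directly in base R; this sketch uses simulated data and an arbitrary lambda:

set.seed(3)
n <- 200; p <- 5
X <- scale(matrix(rnorm(n * p), n, p))
y <- X %*% c(1, -2, 0.5, 0, 1) + rnorm(n)
lambda <- 10

beta <- rep(0.1, p)                      # any nonzero starting value
for (it in 1:100) {
  beta_new <- solve(crossprod(X) + (lambda / sqrt(sum(beta^2))) * diag(p), crossprod(X, y))
  if (max(abs(beta_new - beta)) < 1e-8) break
  beta <- beta_new
}
drop(beta)   # a ridge-like fit whose penalty weight lambda/||beta|| adapts to the current coefficients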
55,718 | What does it mean when I add a new variable to my linear model and the R^2 stays the same? | Seeing little to no change in $R^2$ when you add a variable to a linear model means that the variable has little to no additional explanatory power to the response over what is already in your model. As you note, this can be either because it tells you almost nothing about the response or it explains the same variation in the response as the variables already in the model.
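A short R demonstration of the "explains the same variation" case, using a second predictor that is nearly a copy of the first:

set.seed(1)
n  <- 200
x1 <- rnorm(n)
x2 <- x1 + rnorm(n, sd = 0.01)       # almost perfectly collinear with x1
y  <- 2 * x1 + rnorm(n)
summary(lm(y ~ x1))$r.squared
summary(lm(y ~ x1 + x2))$r.squared   # essentially unchanged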
55,719 | What does it mean when I add a new variable to my linear model and the R^2 stays the same? | As others have alluded, seeing no change in $R^2$ when you add a variable to your regression is unusual. In finite samples, this should only happen when your new variable is a linear combination of variables already present. In this case, most standard regression routines simply exclude that variable from the regression, and your $R^2$ will remain unchanged because the model was effectively unchanged.
As you notice, this does not mean the variable is unimportant, but rather that you are unable to distinguish its effect from that of the other variables in your model.
More broadly, however, I (and many here at Cross Validated) would caution against using $R^2$ for model selection and interpretation. What I've discussed above is how the $R^2$ could not change and the variable still be important. Worse yet, the $R^2$ could change somewhat (or even dramatically) when you include an irrelevant variable. Broadly, using $R^2$ for model selection fell out of favor in the 70s, when it was dropped in favor of AIC (and its contemporaries). Today, a typical statistician would recommend using cross validation (see the site name) for your model selection.
In general, adding a variable increases $R^2$, so using $R^2$ to determine a variable's importance is a bit of a wild goose chase. Even when trying to understand simple situations, you will end up with a completely absurd collection of variables.
55,720 | how to understand random factor and fixed factor interaction? | This interaction between a fixed and a random factor allows for differences in behavior of the fixed factor among the random factors. Let's run that code on the data set, available in the MASS package in R. (I kept the short variable names provided in that copy of the data.)
BVmodel <- lmer(Y ~ V + (1|B/V), data=oats)
> summary(BVmodel)
Linear mixed model fit by REML ['lmerMod']
Formula: Y ~ V + (1 | B/V)
Data: oats
REML criterion at convergence: 647.8
Scaled residuals:
Min 1Q Median 3Q Max
-1.66511 -0.67545 -0.00126 0.74643 2.11366
Random effects:
Groups Name Variance Std.Dev.
V:B (Intercept) 19.26 4.389
B (Intercept) 214.48 14.645
Residual 524.28 22.897
Number of obs: 72, groups: V:B, 18; B, 6
Fixed effects:
Estimate Std. Error t value
(Intercept) 104.500 7.798 13.402
VMarvellous 5.292 7.079 0.748
VVictory -6.875 7.079 -0.971
Correlation of Fixed Effects:
(Intr) VMrvll
VMarvellous -0.454
VVictory -0.454 0.500
This gives fixed effects for two Varieties (expressed relative to the Intercept that represents the Yield of the "Golden rain" Variety) and sets of random effects for Blocks and for the Variety:Block interaction.
Now let's look at the random effects themselves; for this purpose the ranef() function provides the clearest display, as it shows the random effects with respect to the immediately higher level in the hierarchy. (I omit some of the interaction effects, as they aren't needed to make the point.)
> ranef(BVmodel)
$`V:B`
(Intercept)
Golden.rain:I 0.4264964
Golden.rain:II 0.7807406
Golden.rain:III -1.4377120
Golden.rain:IV 1.0514971
Golden.rain:V 0.2028329
Golden.rain:VI -1.0238550
Marvellous:I -0.7000427
Marvellous:II 1.1277787
...
$B
(Intercept)
I 25.421563
II 2.656992
III -6.529897
IV -4.706029
V -10.582936
VI -6.259694
Notice that the 6 Block random effects ($B) all add up to 0. These represent how Yield differs (randomly) among the Blocks. The 6 random effects representing the interaction between Block and the "Golden rain" Variety also sum to 0, as do those for the 6 interactions of each of the other 2 Varieties with Block (not shown).
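A quick numerical check of that observation (a sketch added here, not part of the original answer; it assumes the BVmodel object fitted above):
re <- ranef(BVmodel)
sum(re$B[["(Intercept)"]])                              # Block effects sum to ~0
tapply(re$`V:B`[["(Intercept)"]],
       sub(":.*", "", rownames(re$`V:B`)), sum)         # per-Variety interaction effects also sum to ~0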
The interaction between each Variety and Block allows the Yield of a Variety to differ (randomly) among Blocks, around the overall Yield for the Variety and the overall random effect for the Block. It doesn't alter the fixed effect associated with each Variety, whether shown as the Intercept for "Golden rain" or as differences from "Golden rain" for the other 2 varieties.
Different yields of the same Variety among different Blocks, even after accounting for between-Block random effects, might be expected in practice and might need to be accounted for. So rather than thinking about this as "remov[ing] variance related to differences between varieties," think about this as allowing for a source of variance within each variety that is related to potentially different behaviors among Blocks. | how to understand random factor and fixed factor interaction? | This interaction between a fixed and a random factor allows for differences in behavior of the fixed factor among the random factors. Let's run that code on the data set, available in the MASS package | how to understand random factor and fixed factor interaction?
This interaction between a fixed and a random factor allows for differences in behavior of the fixed factor among the random factors. Let's run that code on the data set, available in the MASS package in R. (I kept the short variable names provided in that copy of the data.)
BVmodel <- lmer(Y ~ V + (1|B/V), data=oats)  # (1|B/V) expands to (1|B) + (1|V:B): Block, plus Variety-within-Block
> summary(BVmodel)
Linear mixed model fit by REML ['lmerMod']
Formula: Y ~ V + (1 | B/V)
Data: oats
REML criterion at convergence: 647.8
Scaled residuals:
Min 1Q Median 3Q Max
-1.66511 -0.67545 -0.00126 0.74643 2.11366
Random effects:
Groups Name Variance Std.Dev.
V:B (Intercept) 19.26 4.389
B (Intercept) 214.48 14.645
Residual 524.28 22.897
Number of obs: 72, groups: V:B, 18; B, 6
Fixed effects:
Estimate Std. Error t value
(Intercept) 104.500 7.798 13.402
VMarvellous 5.292 7.079 0.748
VVictory -6.875 7.079 -0.971
Correlation of Fixed Effects:
(Intr) VMrvll
VMarvellous -0.454
VVictory -0.454 0.500
This gives fixed effects for two Varieties (expressed relative to the Intercept that represents the Yield of the "Golden rain" Variety) and sets of random effects for Blocks and for the Variety:Block interaction.
Now let's look at the random effects themselves; for this purpose the ranef() function provides the clearest display, as it shows the random effects with respect to the immediately higher level in the hierarchy. (I omit some of the interaction effects, as they aren't needed to make the point.)
> ranef(BVmodel)
$`V:B`
(Intercept)
Golden.rain:I 0.4264964
Golden.rain:II 0.7807406
Golden.rain:III -1.4377120
Golden.rain:IV 1.0514971
Golden.rain:V 0.2028329
Golden.rain:VI -1.0238550
Marvellous:I -0.7000427
Marvellous:II 1.1277787
...
$B
(Intercept)
I 25.421563
II 2.656992
III -6.529897
IV -4.706029
V -10.582936
VI -6.259694
Notice that the 6 Block random effects ($B) all add up to 0. These represent how Yield differs (randomly) among the Blocks. The 6 random effects representing the interaction between Block and the "Golden rain" Variety also sum to 0, as do those for the 6 interactions of each of the other 2 Varieties with Block (not shown).
The interaction between each Variety and Block allows the Yield of a Variety to differ (randomly) among Blocks, around the overall Yield for the Variety and the overall random effect for the Block. It doesn't alter the fixed effect associated with each Variety, whether shown as the Intercept for "Golden rain" or as differences from "Golden rain" for the other 2 varieties.
Different yields of the same Variety among different Blocks, even after accounting for between-Block random effects, might be expected in practice and might need to be accounted for. So rather than thinking about this as "remov[ing] variance related to differences between varieties," think about this as allowing for a source of variance within each variety that is related to potentially different behaviors among Blocks. | how to understand random factor and fixed factor interaction?
This interaction between a fixed and a random factor allows for differences in behavior of the fixed factor among the random factors. Let's run that code on the data set, available in the MASS package |
55,721 | Likelihood modification in Metropolis Hastings ratio for transformed parameter | You should notice that what you denote $p(y|f(\theta))$ is actually the same as $p(y|\theta)$ [if you overlook the terrible abuse of notations]. As you mention, changing the parameterisation does not modify the density of the random variable at the observed value $y$ and there is no Jacobian associated with that part.
With proper notations, if
\begin{align*}
\theta &\sim \pi(\theta)\qquad\qquad&\text{prior}\\
y|\theta &\sim f(y|\theta)\qquad\qquad&\text{sampling}\\
\xi &= h(\theta) \qquad\qquad&\text{reparameterisation}\\
\dfrac{\text{d}\theta}{\text{d}\xi}(\xi) &= J(\xi)\qquad\qquad&\text{Jacobian}\\
y|\xi &\sim g(y|\xi)\qquad\qquad&\text{reparameterised density}\\
\xi^{(t+1)}|\xi^{(t)} &\sim q(\xi^{(t+1)}|\xi^{(t)}) \qquad\qquad&\text{proposal}
\end{align*}
the Metropolis-Hastings ratio associated with the proposal $\xi'\sim q(\xi'|\xi)$ in the $\xi$ parameterisation is
$$
\underbrace{\dfrac{\pi(\theta(\xi'))J(\xi')}{\pi(\theta(\xi))J(\xi)}}_\text{ratio of priors}\times
\underbrace{\dfrac{f(y|\theta(\xi'))}{f(y|\theta(\xi))}}_\text{likelihood ratio}\times
\underbrace{\dfrac{q(\xi|\xi')}{q(\xi'|\xi)}}_\text{proposal ratio}
$$
which also writes as
$$\dfrac{\pi(h^{-1}(\xi'))J(\xi')}{\pi(h^{-1}(\xi))J(\xi)}\times
\dfrac{g(y|\xi')}{g(y|\xi)}\times
\dfrac{q(\xi|\xi')}{q(\xi'|\xi)}
$$ | Likelihood modification in Metropolis Hastings ratio for transformed parameter | You should notice that what you denote $p(y|f(\theta))$ is actually the same as $p(y|\theta)$ [if you overlook the terrible abuse of notations]. As you mention, changing the parameterisation does not | Likelihood modification in Metropolis Hastings ratio for transformed parameter
You should notice that what you denote $p(y|f(\theta))$ is actually the same as $p(y|\theta)$ [if you overlook the terrible abuse of notations]. As you mention, changing the parameterisation does not modify the density of the random variable at the observed value $y$ and there is no Jacobian associated with that part.
With proper notations, if
\begin{align*}
\theta &\sim \pi(\theta)\qquad\qquad&\text{prior}\\
y|\theta &\sim f(y|\theta)\qquad\qquad&\text{sampling}\\
\xi &= h(\theta) \qquad\qquad&\text{reparameterisation}\\
\dfrac{\text{d}\theta}{\text{d}\xi}(\xi) &= J(\xi)\qquad\qquad&\text{Jacobian}\\
y|\xi &\sim g(y|\xi)\qquad\qquad&\text{reparameterised density}\\
\xi^{(t+1)}|\xi^{(t)} &\sim q(\xi^{(t+1)}|\xi^{(t)}) \qquad\qquad&\text{proposal}
\end{align*}
the Metropolis-Hastings ratio associated with the proposal $\xi'\sim q(\xi'|\xi)$ in the $\xi$ parameterisation is
$$
\underbrace{\dfrac{\pi(\theta(\xi'))J(\xi')}{\pi(\theta(\xi))J(\xi)}}_\text{ratio of priors}\times
\underbrace{\dfrac{f(y|\theta(\xi'))}{f(y|\theta(\xi))}}_\text{likelihood ratio}\times
\underbrace{\dfrac{q(\xi|\xi')}{q(\xi'|\xi)}}_\text{proposal ratio}
$$
which also writes as
$$\dfrac{\pi(h^{-1}(\xi'))J(\xi')}{\pi(h^{-1}(\xi))J(\xi)}\times
\dfrac{g(y|\xi')}{g(y|\xi)}\times
\dfrac{q(\xi|\xi')}{q(\xi'|\xi)}
$$ | Likelihood modification in Metropolis Hastings ratio for transformed parameter
You should notice that what you denote $p(y|f(\theta))$ is actually the same as $p(y|\theta)$ [if you overlook the terrible abuse of notations]. As you mention, changing the parameterisation does not |
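To make the ratio in the answer above concrete, here is a minimal numerical sketch in R (added here, not part of the original answer): a random-walk Metropolis sampler on $\xi=\log\theta$ for a Poisson likelihood with a Gamma(2, 1) prior; all names and values are illustrative.
set.seed(42)
y <- rpois(20, lambda = 3)
log_post_theta <- function(theta)       # log prior + log likelihood in the theta parameterisation
  dgamma(theta, 2, 1, log = TRUE) + sum(dpois(y, theta, log = TRUE))
n_iter <- 5000
xi <- numeric(n_iter)                   # xi = log(theta), started at 0
for (t in 2:n_iter) {
  xi_prop <- xi[t - 1] + rnorm(1, 0, 0.3)        # symmetric proposal, so the q-ratio cancels
  # log acceptance ratio: target evaluated at theta(xi), plus the log Jacobian log|d theta/d xi| = xi
  log_alpha <- (log_post_theta(exp(xi_prop)) + xi_prop) -
               (log_post_theta(exp(xi[t - 1])) + xi[t - 1])
  xi[t] <- if (log(runif(1)) < log_alpha) xi_prop else xi[t - 1]
}
mean(exp(xi[-(1:1000)]))   # close to the exact Gamma posterior mean, (2 + sum(y)) / (1 + 20)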
55,722 | Generating vector image from a hand drawn picture. Machine Learning | It depends a little on the exact problem.
If you are interested only in text, then two fields come to mind:
Optical character recognition (OCR), which is specifically about text recognition from images. However, these methods tend to focus on "nice" documents, and may not generalize to harder cases. For instance, one could first determine what each character is, and then search through different fonts to get the best match.
Text detection in natural images. If you have harder images that standard OCR struggles with, you can attempt to first detect and extract the text using ML-based computer vision algorithms. Some starting points:
Ye et al, Text Detection and Recognition in Imagery: A Survey
Cheng et al, Focusing Attention: Towards Accurate Text Recognition in Natural Images
Again in this case one would simply detect the text, classify it, and then replace it with a vector version (discarding the rest of the image, or e.g. blending it).
Things become tougher when you have general vocabularies of discrete objects. For instance, I noticed the box in your example becomes a nice straight box. How should this be done? Should it detect there is a box, and then figure out what size it should be? Or should it detect four lines, and separately compute their lengths? This is a non-trivial problem, but there are numerous ways to approach it (some rather effective):
There is some work on directly generating vector images from raster ones: see Sbai et al, Vector Image Generation by Learning Parametric Layer Decomposition. This is not "object-centered" if you will, however.
An approach based on generative modelling could conceivably be used (see Lee et al, Context-Aware Synthesis and Placement of Object Instances). The idea would be to adapt the method from the aforementioned paper to "replace" everything in the input by placing objects around the image such that it reconstructs the image. How to define and parametrize the vocabulary would still be hard though.
The most general approach is using a discrete vocabulary of primitives. One paper doing pretty much exactly what you want is Ellis et al, Learning to Infer Graphics Programs from Hand-Drawn Images. This approach is somewhat complicated, but extremely general.
Overall, due to the requirement for differentiability in deep learning, handling discreteness is challenging. One can use techniques from reinforcement learning to circumvent this, since the likelihood ratio (i.e., the REINFORCE estimator) can compute gradient estimations in very general scenarios. In other words, you can set up your problem as a deep RL problem, where the agent gets a reward for reproducing your target image using choices from a vector vocabulary. Papers like Tucker et al, REBAR: Low-variance, unbiased gradient estimates for discrete latent variable models (as well as those papers cited by/citing it) might be a good place to start learning about that area.
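For reference (added here, not part of the original answer), the likelihood-ratio/REINFORCE identity alluded to is $$\nabla_{\phi}\,\mathbb{E}_{a\sim p_{\phi}}\big[R(a)\big] = \mathbb{E}_{a\sim p_{\phi}}\big[R(a)\,\nabla_{\phi}\log p_{\phi}(a)\big],$$ which only requires being able to sample the discrete choices $a$ and differentiate their log-probabilities, not to differentiate the reward $R$ itself.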
Hopefully that's a useful starting point :) | Generating vector image from a hand drawn picture. Machine Learning | It depends a little on the exact problem.
If you are interested only in text, then two fields come to mind:
Optical character recognition (OCR), which is specifically about text recognition from ima | Generating vector image from a hand drawn picture. Machine Learning
It depends a little on the exact problem.
If you are interested only in text, then two fields come to mind:
Optical character recognition (OCR), which is specifically about text recognition from images. However, these methods tend to focus on "nice" documents, and may not generalize to harder cases. For instance, one could first determine what each character is, and then search through different fonts to get the best match.
Text detection in natural images. If you have harder images that standard OCR struggles with, you can attempt to first detect and extract the text using ML-based computer vision algorithms. Some starting points:
Ye et al, Text Detection and Recognition in Imagery: A Survey
Cheng et al, Focusing Attention: Towards Accurate Text Recognition in Natural Images
Again in this case one would simply detect the text, classify it, and then replace it with a vector version (discarding the rest of the image, or e.g. blending it).
Things become tougher when you have general vocabularies of discrete objects. For instance, I noticed the box in your example becomes a nice straight box. How should this be done? Should it detect there is a box, and then figure out what size it should be? Or should it detect four lines, and separately compute their lengths? This is a non-trivial problem, but there are numerous ways to approach it (some rather effective):
There is some work on directly generating vector images from raster ones: see Sbai et al, Vector Image Generation by Learning Parametric Layer Decomposition. This is not "object-centered" if you will, however.
An approach based on generative modelling could conceivably be used (see Lee et al, Context-Aware Synthesis and Placement of Object Instances). The idea would be to adapt the method from the aforementioned paper to "replace" everything in the input by placing objects around the image such that it reconstructs the image. How to define and parametrize the vocabulary would still be hard though.
The most general approach is using a discrete vocabulary of primitives. One paper doing pretty much exactly what you want is Ellis et al, Learning to Infer Graphics Programs from Hand-Drawn Images. This approach is somewhat complicated, but extremely general.
Overall, due to the requirement for differentiability in deep learning, handling discreteness is challenging. One can use techniques from reinforcement learning to circumvent this, since the likelihood ratio (i.e., the REINFORCE estimator) can compute gradient estimations in very general scenarios. In other words, you can set up your problem as a deep RL problem, where the agent gets a reward for reproducing your target image using choices from a vector vocabulary. Papers like Tucker et al, REBAR: Low-variance, unbiased gradient estimates for discrete latent variable models (as well as those papers cited by/citing it) might be a good place to start learning about that area.
Hopefully that's a useful starting point :) | Generating vector image from a hand drawn picture. Machine Learning
It depends a little on the exact problem.
If you are interested only in text, then two fields come to mind:
Optical character recognition (OCR), which is specifically about text recognition from ima |
55,723 | Generating vector image from a hand drawn picture. Machine Learning | In addition to user3658307's comprehensive answer I'd like to mention Generative Adversarial Networks (image to image translation GANs specifically) - they should come in handy if you can give them enough training data. The advantage over OCR-based methods would be simplicity - you don't need to build any pipeline, the method learns the transformation end-to-end.
Image to Image translation aims at recovering transform that maps images from one domain to second one - this looks exactly like your problem - in fact, I think it's not a difficult one (other examples cover generating images of cats/items from sketches, which seems harder). I encourage you to visit the method authors website. It contains many examples as well as links to implementations. | Generating vector image from a hand drawn picture. Machine Learning | In addition to user3658307's comprehensive answer I'd like to mention Generative Adversarial Networks (image to image translation GANs specifically) - they should come in handy if you can give them en | Generating vector image from a hand drawn picture. Machine Learning
In addition to user3658307's comprehensive answer I'd like to mention Generative Adversarial Networks (image to image translation GANs specifically) - they should come in handy if you can give them enough training data. The advantage over OCR-based methods would be simplicity - you don't need to build any pipeline, the method learns the transformation end-to-end.
Image to Image translation aims at recovering transform that maps images from one domain to second one - this looks exactly like your problem - in fact, I think it's not a difficult one (other examples cover generating images of cats/items from sketches, which seems harder). I encourage you to visit the method authors website. It contains many examples as well as links to implementations. | Generating vector image from a hand drawn picture. Machine Learning
In addition to user3658307's comprehensive answer I'd like to mention Generative Adversarial Networks (image to image translation GANs specifically) - they should come in handy if you can give them en |
55,724 | What is the correct terminology for repeating groups of coin flips multiple times in a simulation? | In your simulation you want to tally up the number of successes (heads) in a series of 5 independent Bernoulli trials, for 3 realizations (or draws, or observations).
In R this can be done using the rbinom(n, size, prob) function, where n is the number of observations, size is the number of trials and prob the probability of success in each trial. See also here for the function reference (I am sure this is similar in other programming languages as well).
The tallied number of successes for 5 trials can then be calculated as (assuming a fair coin):
> set.seed(123)
> rbinom(3, 5, .5)
[1] 2 3 2
In this case, we have 2 successes (first observation), 3 successes (second observation), 2 successes (third observation).
So Group 1, 2, 3 are observations with 5 trials in each observation.
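Equivalently (a sketch added here, not part of the original answer), each observation can be built up explicitly from 5 individual Bernoulli trials:
set.seed(123)
flips <- matrix(rbinom(5 * 3, size = 1, prob = 0.5), nrow = 5)   # 5 trials x 3 observations
colSums(flips)                                                    # heads tallied per observation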
Edit: added illustration
Here's an illustration of a Galton Board that will make it much clearer what the differences between trials and observations are:
Each ball's path represents one observation (red arrows) passing through 6 left/right decisions or six trials (blue triangles) eventually ending up in one of the bins. The exact probability ($Pr$) of landing in one of the bins can be calculated using the binomial probability mass function $$Pr(X = k) = {n \choose k}p^k(1 - p)^{n-k},$$ where $X$ represents a random variable, $k$ the number of successes (e.g. bounce left), $n$ the number trials, and $p$ the probability of the outcome of each trial.
In R you can use the dbinom() function to calculate the exact probabilities of falling in each of the bins (from left to right and assuming $p=0.5$):
dbinom(6, 6, .5)
#[1] 0.015625
dbinom(5, 6, .5)
#[1] 0.09375
dbinom(4, 6, .5)
#[1] 0.234375
dbinom(3, 6, .5)
#[1] 0.3125
dbinom(2, 6, .5)
#[1] 0.234375
dbinom(1, 6, .5)
#[1] 0.09375
dbinom(0, 6, .5)
#[1] 0.015625
Or through simulation (in this case 100,000 observations and 6 trials):
bounces <- rbinom(100000, 6, .5)
mean(bounces == 6)
#[1] 0.01554
mean(bounces == 5)
#[1] 0.09325
mean(bounces == 4)
#[1] 0.23468
mean(bounces == 3)
#[1] 0.31349
mean(bounces == 2)
#[1] 0.23486
mean(bounces == 1)
#[1] 0.09287
mean(bounces == 0)
#[1] 0.01531 | What is the correct terminology for repeating groups of coin flips multiple times in a simulation? | In your simulation you want to tally up the number of successes (heads) for a series of 5 independent Bernoulli trials from 3 realization (or draws, or observations).
In R this can be done using the r | What is the correct terminology for repeating groups of coin flips multiple times in a simulation?
In your simulation you want to tally up the number of successes (heads) in a series of 5 independent Bernoulli trials, for 3 realizations (or draws, or observations).
In R this can be done using the rbinom(n, size, prob) function, where n is the number of observations, size is the number of trials and prob the probability of success in each trial. See also here for the function reference (I am sure this is similar in other programming languages as well).
The tallied number of successes for 5 trials can then be calculated as (assuming a fair coin):
> set.seed(123)
> rbinom(3, 5, .5)
[1] 2 3 2
In this case, we have 2 successes (first observation), 3 successes (second observation), 2 successes (third observation).
So Group 1, 2, 3 are observations with 5 trials in each observation.
Edit: added illustration
Here's an illustration of a Galton Board that will make it much clearer what the differences between trials and observations are:
Each ball's path represents one observation (red arrows) passing through 6 left/right decisions or six trials (blue triangles) eventually ending up in one of the bins. The exact probability ($Pr$) of landing in one of the bins can be calculated using the binomial probability mass function $$Pr(X = k) = {n \choose k}p^k(1 - p)^{n-k},$$ where $X$ represents a random variable, $k$ the number of successes (e.g. bounce left), $n$ the number trials, and $p$ the probability of the outcome of each trial.
In R you can use the dbinom() function to calculate the exact probabilities of falling in each of the bins (from left to right and assuming $p=0.5$):
dbinom(6, 6, .5)
#[1] 0.015625
dbinom(5, 6, .5)
#[1] 0.09375
dbinom(4, 6, .5)
#[1] 0.234375
dbinom(3, 6, .5)
#[1] 0.3125
dbinom(2, 6, .5)
#[1] 0.234375
dbinom(1, 6, .5)
#[1] 0.09375
dbinom(0, 6, .5)
#[1] 0.015625
Or through simulation (in this case 100,000 observations and 6 trials):
bounces <- rbinom(100000, 6, .5)
mean(bounces == 6)
#[1] 0.01554
mean(bounces == 5)
#[1] 0.09325
mean(bounces == 4)
#[1] 0.23468
mean(bounces == 3)
#[1] 0.31349
mean(bounces == 2)
#[1] 0.23486
mean(bounces == 1)
#[1] 0.09287
mean(bounces == 0)
#[1] 0.01531 | What is the correct terminology for repeating groups of coin flips multiple times in a simulation?
In your simulation you want to tally up the number of successes (heads) for a series of 5 independent Bernoulli trials from 3 realization (or draws, or observations).
In R this can be done using the r |
55,725 | What is the correct terminology for repeating groups of coin flips multiple times in a simulation? | I understand your confusion. Things become more clear once you think in terms of random variables, outcome sets, and realizations/observations of your random process/variable.
Flipping a coin
A random variable is a function mapping outcomes of a random experiment to numbers. In the case of flipping a coin, we can define a random variable X to take value 1 when the outcome is H, and value 0 when the outcome is T.
In this case there are only 2 possible outcomes: H or T. The set of all possible outcomes is referred to as the outcome set or sample space. In the case of flipping a coin, the outcome set consists of only 2 outcomes {H,T}.
Now that we have a random variable and the outcome set defined, we can repeat the random experiment of flipping one coin. Let's do it 3 times and observe the realizations of the random process. You may have obtained H on the first flip, T on the second, and H on the third. Your random variable, which assigns numbers to each outcome as per the above rule, will thus take on values 1,0, and 1 respectively. You now have a sample of three observations for the random variable X. A random variable that takes on only 2 values happens to have a special name - a Bernoulli variable.
Flipping 5 coins
Now let's look at a completely different random experiment - flipping 5 coins and counting the number of Heads. In this case, we can define a random variable Y to stand for the number of Heads observed in 5 tosses of a coin. The random experiment here consists of flipping 5 coins. There are 2^5=32 possible outcomes for this experiment, but only 6 possible values for the random variable Y: {0,1,2,3,4,5}. Remember that our random variable Y maps outcomes of the experiment of flipping 5 coins to numbers (count of number of Heads). This mapping is described in the figure below.
You have repeated this random experiment 3 times and obtained: HTTTT, THHHT, and HHHHT. These outcomes of the random experiment map to the following values for the random variable Y: 1, 3, and 4. You now have a sample of three observations for the random variable Y. Such a variable, which counts the number of successes (i.e. Heads) in n repeated Bernoulli trials has a special name - a Binomial variable.
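If it helps, the 32-outcome mapping can be enumerated directly (a sketch added here, not part of the original answer):
outcomes <- expand.grid(rep(list(c("H", "T")), 5))   # all 2^5 = 32 outcomes of the experiment
Y <- rowSums(outcomes == "H")                        # value of Y for each outcome
table(Y) / nrow(outcomes)                            # matches dbinom(0:5, 5, 0.5)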
I hope this helps. | What is the correct terminology for repeating groups of coin flips multiple times in a simulation? | I understand your confusion. Things become more clear once you think in terms of random variables, outcome sets, and realizations/observations of your random process/variable.
Flipping a coin
A random | What is the correct terminology for repeating groups of coin flips multiple times in a simulation?
I understand your confusion. Things become more clear once you think in terms of random variables, outcome sets, and realizations/observations of your random process/variable.
Flipping a coin
A random variable is a function mapping outcomes of a random experiment to numbers. In the case of flipping a coin, we can define a random variable X to take value 1 when the outcome is H, and value 0 when the outcome is T.
In this case there are only 2 possible outcomes: H or T. The set of all possible outcomes is referred to as the outcome set or sample space. In the case of flipping a coin, the outcome set consists of only 2 outcomes {H,T}.
Now that we have a random variable and the outcome set defined, we can repeat the random experiment of flipping one coin. Let's do it 3 times and observe the realizations of the random process. You may have obtained H on the first flip, T on the second, and H on the third. Your random variable, which assigns numbers to each outcome as per the above rule, will thus take on values 1,0, and 1 respectively. You now have a sample of three observations for the random variable X. A random variable that takes on only 2 values happens to have a special name - a Bernoulli variable.
Flipping 5 coins
Now let's look at a completely different random experiment - flipping 5 coins and counting the number of Heads. In this case, we can define a random variable Y to stand for the number of Heads observed in 5 tosses of a coin. The random experiment here consists of flipping 5 coins. There are 2^5=32 possible outcomes for this experiment, but only 6 possible values for the random variable Y: {0,1,2,3,4,5}. Remember that our random variable Y maps outcomes of the experiment of flipping 5 coins to numbers (count of number of Heads). This mapping is described in the figure below.
You have repeated this random experiment 3 times and obtained: HTTTT, THHHT, and HHHHT. These outcomes of the random experiment map to the following values for the random variable Y: 1, 3, and 4. You now have a sample of three observations for the random variable Y. Such a variable, which counts the number of successes (i.e. Heads) in n repeated Bernoulli trials has a special name - a Binomial variable.
I hope this helps. | What is the correct terminology for repeating groups of coin flips multiple times in a simulation?
I understand your confusion. Things become more clear once you think in terms of random variables, outcome sets, and realizations/observations of your random process/variable.
Flipping a coin
A random |
55,726 | Distribution of min(X+Y,Y+Z,X+Z,Z+V,X+V,Y+V)? | Your notations do not help. Rephrase the problem as
$$X_1,\ldots,X_4\stackrel{\text{iid}}{\sim} f(x)$$
and
$$Y=\min\{X_1+X_2,X_1+X_3,X_1+X_4,X_2+X_3,X_2+X_4,X_3+X_4\}$$
You can then express $Y$ in terms of the order statistics
$$X_{(1)}\le X_{(2)}\le X_{(3)}\le X_{(4)}$$
and deduce that $Y$ is a specific sum of two of these order statistics, which leads to the conclusion. Indeed,
$$Y=X_{(1)}+X_{(2)}$$
and, since
$$(X_{(1)},X_{(2)})\sim \frac{4!}{2!}f(x_{(1)})f(x_{(2)})[1-F(x_{(2)})]^2,\qquad x_{(1)}\le x_{(2)}$$
one can derive the distribution of $Y$ from a convolution step:
$$Y\sim\int_{-\infty}^{\infty} \frac{4!}{2!}f(x_{(1)})f(y-x_{(1)})[1-F(y-x_{(1)})]^2\,\text{d}x_{(1)}$$ | Distribution of min(X+Y,Y+Z,X+Z,Z+V,X+V,Y+V)? | Your notations do not help. Rephrase the problem as
$$X_1,\ldots,X_4\stackrel{\text{iid}}{\sim} f(x)$$
and
$$Y=\min\{X_1+X_2,X_1+X_3,X_1+X_4,X_2+X_3,X_2+X_4,X_3+X_4\}$$
You can then express $Y$ in ter | Distribution of min(X+Y,Y+Z,X+Z,Z+V,X+V,Y+V)?
Your notations do not help. Rephrase the problem as
$$X_1,\ldots,X_4\stackrel{\text{iid}}{\sim} f(x)$$
and
$$Y=\min\{X_1+X_2,X_1+X_3,X_1+X_4,X_2+X_3,X_2+X_4,X_3+X_4\}$$
You can then express $Y$ in terms of the order statistics
$$X_{(1)}\le X_{(2)}\le X_{(3)}\le X_{(4)}$$
and deduce that $Y$ is a specific sum of two of these order statistics, which leads to the conclusion. Indeed,
$$Y=X_{(1)}+X_{(2)}$$
and, since
$$(X_{(1)},X_{(2)})\sim \frac{4!}{2!}f(x_{(1)})f(x_{(2)})[1-F(x_{(2)})]^2,\qquad x_{(1)}\le x_{(2)}$$
one can derive the distribution of $Y$ from a convolution step:
$$Y\sim\int_{-\infty}^{\infty} \frac{4!}{2!}f(x_{(1)})f(y-x_{(1)})[1-F(y-x_{(1)})]^2\,\text{d}x_{(1)}$$ | Distribution of min(X+Y,Y+Z,X+Z,Z+V,X+V,Y+V)?
Your notations do not help. Rephrase the problem as
$$X_1,\ldots,X_4\stackrel{\text{iid}}{\sim} f(x)$$
and
$$Y=\min\{X_1+X_2,X_1+X_3,X_1+X_4,X_2+X_3,X_2+X_4,X_3+X_4\}$$
You can then express $Y$ in ter |
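A quick Monte Carlo check of the key identity in the answer above, $Y=X_{(1)}+X_{(2)}$ (a sketch added here, not part of the original answer; standard exponential variables are used purely as an example):
set.seed(1)
x <- matrix(rexp(4 * 1e4), ncol = 4)
pair_min  <- apply(x, 1, function(v) min(combn(v, 2, sum)))   # min over the 6 pairwise sums
order_sum <- apply(x, 1, function(v) sum(sort(v)[1:2]))       # X_(1) + X_(2)
all.equal(pair_min, order_sum)                                 # TRUE: the two coincide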
55,727 | How much of a problem are autocorrelated residuals of a binary GAM (Generalized Additive model)? | The autocorrelation is going to affect any statistical inference you try to do with the model, such as testing whether smooths are significant.
It is trivial to include random effects and spatio-temporal smooths in the GAM. You'll need to expand on what features you want to include in this model but, for example:
An isotropic spatial smoother (on coords x and y) plus region specific temporal trends all with same wiggliness (but not same shape) would include
gam(y ~ s(x,y) + s(time, region, bs = 'fs'), data = foo, method = 'REML')
An isotropic spatial smoother plus region specific temporal trends with different wiggliness
gam(y ~ region + s(x,y) + s(time, by = region), data = foo, method = 'REML')
and we can build up from there. For example, a Markov Random Field smooth can be used for the regions if they are areal data (administrative boundaries etc), and the random effect basis can be used if you want a random intercept per region or subject. (Note the above are using the syntax from the mgcv package.) | How much of a problem are autocorrelated residuals of a binary GAM (Generalized Additive model)? | The autocorrelation is going to affect any statistical inference you try to do with the model, such as testing is smooths are significant.
It is trivial to include random effects and spatio-temporal s | How much of a problem are autocorrelated residuals of a binary GAM (Generalized Additive model)?
The autocorrelation is going to affect any statistical inference you try to do with the model, such as testing whether smooths are significant.
It is trivial to include random effects and spatio-temporal smooths in the GAM. You'll need to expand on what features you want to include in this model but, for example:
An isotropic spatial smoother (on coords x and y) plus region specific temporal trends all with same wiggliness (but not same shape) would include
gam(y ~ s(x,y) + s(time, region, bs = 'fs'), data = foo, method = 'REML')
An isotropic spatial smoother plus region specific temporal trends with different wiggliness
gam(y ~ region + s(x,y) + s(time, by = region), data = foo, method = 'REML')
and we can build up from there. For example, a Markov Random Field smooth can be used for the regions if they are areal data (administrative boundaries etc), and the random effect basis can be used if you want a random intercept per region or subject. (Note the above are using the syntax from the mgcv package.) | How much of a problem are autocorrelated residuals of a binary GAM (Generalized Additive model)?
The autocorrelation is going to affect any statistical inference you try to do with the model, such as testing is smooths are significant.
It is trivial to include random effects and spatio-temporal s |
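A sketch of those last two extensions in mgcv (added here, not part of the original answer; region is assumed to be a factor of areal units with a neighbourhood list nb, and subject a hypothetical subject identifier):
gam(y ~ s(region, bs = 'mrf', xt = list(nb = nb)) +   # Markov random field over areal regions
        s(subject, bs = 're') +                        # random intercept per subject
        s(time),
    data = foo, family = binomial, method = 'REML')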
55,728 | Identically distributed vs P(X > Y) = P(Y > X) | This answer is written under the assumption that $\mathbb{P}(Y=X)=0$
which was part of the original wording of the question.
Question 1: A sufficient condition for$$\mathbb{P}(X<Y)=\mathbb{P}(Y<X)\tag{1}$$is that $X$ and $Y$ are exchangeable, that is, that $(X,Y)$ and $(Y,X)$ have the same joint distribution. And obviously
$$\mathbb{P}(X<Y)=\mathbb{P}(Y<X)=1/2$$since they sum up to one. (In the alternative case that $\mathbb{P}(Y=X)>0$ this is obviously no longer true.)
Question 2: Take a bivariate normal vector $(X,Y)$ with mean $(\mu,\mu)$. Then $X-Y$ and $Y-X$ are identically distributed, no matter what the correlation between $X$ and $Y$, and no matter what the variances of $X$ and $Y$ are, and therefore (1) holds. The conjecture is thus false. | Identically distributed vs P(X > Y) = P(Y > X) | This answer is written under the assumption that $\mathbb{P}(Y=X)=0$
which was part of the original wording of the question.
Question 1: A sufficient condition for$$\mathbb{P}(X<Y)=\mathbb{P}(Y<X)\ | Identically distributed vs P(X > Y) = P(Y > X)
This answer is written under the assumption that $\mathbb{P}(Y=X)=0$
which was part of the original wording of the question.
Question 1: A sufficient condition for$$\mathbb{P}(X<Y)=\mathbb{P}(Y<X)\tag{1}$$is that $X$ and $Y$ are exchangeable, that is, that $(X,Y)$ and $(Y,X)$ have the same joint distribution. And obviously
$$\mathbb{P}(X<Y)=\mathbb{P}(Y<X)=1/2$$since they sum up to one. (In the alternative case that $\mathbb{P}(Y=X)>0$ this is obviously no longer true.)
Question 2: Take a bivariate normal vector $(X,Y)$ with mean $(\mu,\mu)$. Then $X-Y$ and $Y-X$ are identically distributed, no matter what the correlation between $X$ and $Y$, and no matter what the variances of $X$ and $Y$ are, and therefore (1) holds. The conjecture is thus false. | Identically distributed vs P(X > Y) = P(Y > X)
This answer is written under the assumption that $\mathbb{P}(Y=X)=0$
which was part of the original wording of the question.
Question 1: A sufficient condition for$$\mathbb{P}(X<Y)=\mathbb{P}(Y<X)\ |
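A numerical illustration of Question 2 in the answer above (added here, not part of the original answer): X and Y share a mean but have different variances, so they are not identically distributed, yet $\mathbb{P}(X<Y)=\mathbb{P}(Y<X)=1/2$.
library(MASS)
set.seed(1)
xy <- mvrnorm(1e6, mu = c(0, 0), Sigma = matrix(c(1, 0.5, 0.5, 4), 2))
mean(xy[, 1] < xy[, 2])    # approximately 0.5, even though var(X) = 1 and var(Y) = 4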
55,729 | Identically distributed vs P(X > Y) = P(Y > X) | I will show that the distribution of the pair $(X,Y)$ is the same as the distribution of the pair $(Y,X).$
That two random variables $X,Y$ are independent means that for every pair of measurable sets $A,B$ the events $[X\in A], [Y\in B]$ are independent. In particular for any two numbers $x,y$ the events $[X\le x], [Y\le y]$ are independent, so $F_{X,Y}(x,y) = F_X(x)\cdot F_Y(y).$
And the distribution of the pair $(X,Y)$ is completely determined by the joint c.d.f.
Since it is given that $F_X=F_Y,$ we can write $F_{X,Y}(x,y) = F_X(x)\cdot F_X(y).$
This is symmetric as a function of $x$ and $y,$ i.e. it remains the same if $x$ and $y$ are interchanged.
But interchanging $x$ and $y$ in $F_{X,Y}(x,y)$ is the same as interchanging $X$ and $Y,$ since
$$
F_{X,Y}(x,y) = \Pr(X\le x\ \&\ Y\le y).
$$
Therefore (the main point):
The distribution of the pair $(X,Y)$ is the same as the distribution of the pair $(Y,X).$ | Identically distributed vs P(X > Y) = P(Y > X) | I will show that the distribution of the pair $(X,Y)$ is the same as the distribution of the pair $(Y,X).$
That two random variables $X,Y$ are independent means that for every pair of measurable sets | Identically distributed vs P(X > Y) = P(Y > X)
I will show that the distribution of the pair $(X,Y)$ is the same as the distribution of the pair $(Y,X).$
That two random variables $X,Y$ are independent means that for every pair of measurable sets $A,B$ the events $[X\in A], [Y\in B]$ are independent. In particular for any two numbers $x,y$ the events $[X\le x], [Y\le y]$ are independent, so $F_{X,Y}(x,y) = F_X(x)\cdot F_Y(y).$
And the distribution of the pair $(X,Y)$ is completely determined by the joint c.d.f.
Since it is given that $F_X=F_Y,$ we can write $F_{X,Y}(x,y) = F_X(x)\cdot F_X(y).$
This is symmetric as a function of $x$ and $y,$ i.e. it remains the same if $x$ and $y$ are interchanged.
But interchanging $x$ and $y$ in $F_{X,Y}(x,y)$ is the same as interchanging $X$ and $Y,$ since
$$
F_{X,Y}(x,y) = \Pr(X\le x\ \&\ Y\le y).
$$
Therefore (the main point):
The distribution of the pair $(X,Y)$ is the same as the distribution of the pair $(Y,X).$ | Identically distributed vs P(X > Y) = P(Y > X)
I will show that the distribution of the pair $(X,Y)$ is the same as the distribution of the pair $(Y,X).$
That two random variables $X,Y$ are independent means that for every pair of measurable sets |
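Spelling out the consequence for the original question (added here, not part of the original answer): writing $A=\{(u,v):u>v\}$, the exchangeability shown above gives $$\Pr(X>Y)=\Pr\big((X,Y)\in A\big)=\Pr\big((Y,X)\in A\big)=\Pr(Y>X).$$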
55,730 | Multivariate conditional entropy | Definition of conditional entropy
Suppose $A,B,C,D,E$ are random variables with joint distribution $p(a,b,c,d,e)$. The conditional entropy of $E$ given $A,B,C,D$ is defined as the expected value (over the joint distribution) of $-\log p(e \mid a,b,c,d)$:
$$H(E \mid A, B, C, D) =
-\mathbb{E}_{p(a,b,c,d,e)} \big[ \log p(e \mid a, b, c, d) \big]
\tag{1}$$
For discrete random variables, this is:
$$H(E \mid A,B,C,D) =
-\sum_{a,b,c,d,e} p(a,b,c,d,e) \log p(e \mid a,b,c,d)
\tag{2}$$
where the sum is over all possible values taken by the variables. For continuous random variables, simply replace the sum with an integral.
The conditional entropy can also be expressed as the difference between the joint entropy of all variables and the joint entropy of the variables upon which we wish to condition:
$$H(E \mid A,B,C,D) = H(A,B,C,D,E) - H(A,B,C,D) \tag{3}$$
Estimation of conditional entropy
If the joint distribution is known, the conditional entropy can be computed following the definition above. But, if we only have access to data sampled from the joint distribution, the conditional entropy must be estimated.
A straightforward option is to estimate the required probability distributions from the data (e.g. using histograms or kernel density estimates), then plug them into the expressions above. This is called a plug-in estimate. For example, we could estimate $p(a,b,c,d,e)$ and $p(a,b,c,d)$, plug them into the joint entropy formula, then take the difference as in expression $(3)$.
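For discrete data the plug-in estimate is only a few lines in R (a sketch added here, not part of the original answer; the variables are simulated purely for illustration):
plugin_entropy <- function(...) {                  # joint entropy (in nats) from relative frequencies
  p <- table(...); p <- p / sum(p)
  -sum(p[p > 0] * log(p[p > 0]))
}
set.seed(1)
A <- sample(1:3, 1000, replace = TRUE); B <- sample(1:2, 1000, replace = TRUE)
E <- (A + B + sample(0:1, 1000, replace = TRUE)) %% 3
plugin_entropy(A, B, E) - plugin_entropy(A, B)     # estimate of H(E | A, B), via expression (3)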
However, plug-in entropy estimators suffer from bias, and are inaccurate when there's not much data available. The problem quickly becomes worse as the number of variables increases. Many improved entropy estimators have been proposed. See Beirlant et al. (1997) for a review, as well as more recent work (example references below). These improved procedures can be used to estimate $H(A,B,C,D,E)$ and $H(A,B,C,D)$ from the data. Then, $H(E \mid A,B,C,D)$ is given by the difference, as in expression $(3)$.
References
Beirlant, Dudewicz, Gyorfi, van der Meulen (1997). Nonparametric entropy estimation: An overview.
Nemenman, Shafee, Bialek (2002). Entropy and Inference, Revisited.
Paninski (2003). Estimation of entropy and mutual information.
Miller (2003). A new class of entropy estimators for multi-dimensional densities.
Schurmann (2004). Bias analysis in entropy estimation.
Bonachela, Hinrichsen, Munoz (2008). Entropy estimates of small data sets. | Multivariate conditional entropy | Definition of conditional entropy
Suppose $A,B,C,D,E$ are random variables with joint distribution $p(a,b,c,d,e)$. The conditional entropy of $E$ given $A,B,C,D$ is defined as the expected value (over | Multivariate conditional entropy
Definition of conditional entropy
Suppose $A,B,C,D,E$ are random variables with joint distribution $p(a,b,c,d,e)$. The conditional entropy of $E$ given $A,B,C,D$ is defined as the expected value (over the joint distribution) of $-\log p(e \mid a,b,c,d)$:
$$H(E \mid A, B, C, D) =
-\mathbb{E}_{p(a,b,c,d,e)} \big[ \log p(e \mid a, b, c, d) \big]
\tag{1}$$
For discrete random variables, this is:
$$H(E \mid A,B,C,D) =
-\sum_{a,b,c,d,e} p(a,b,c,d,e) \log p(e \mid a,b,c,d)
\tag{2}$$
where the sum is over all possible values taken by the variables. For continuous random variables, simply replace the sum with an integral.
The conditional entropy can also be expressed as the difference between the joint entropy of all variables and the joint entropy of the variables upon which we wish to condition:
$$H(E \mid A,B,C,D) = H(A,B,C,D,E) - H(A,B,C,D) \tag{3}$$
Estimation of conditional entropy
If the joint distribution is known, the conditional entropy can be computed following the definition above. But, if we only have access to data sampled from the joint distribution, the conditional entropy must be estimated.
A straightforward option is to estimate the required probability distributions from the data (e.g. using histograms or kernel density estimates), then plug them into the expressions above. This is called a plug-in estimate. For example, we could estimate $p(a,b,c,d,e)$ and $p(a,b,c,d)$, plug them into the joint entropy formula, then take the difference as in expression $(3)$.
However, plug-in entropy estimators suffer from bias, and are inaccurate when there's not much data available. The problem quickly becomes worse as the number of variables increases. Many improved entropy estimators have been proposed. See Beirlant et al. (2001) for a review, as well as more recent work (example references below). These improved procedures can be used to estimate $H(A,B,C,D,E)$ and $H(A,B,C,D)$ from the data. Then, $H(E \mid A,B,C,D)$ is given by the difference, as in expression $(3)$.
References
Beirlant, Dudewicz, Gyorfi, van der Meulen (1997). Nonparametric entropy estimation: An overview.
Nemenman, Shafee, Bialek (2002). Entropy and Inference, Revisited.
Paninski (2003). Estimation of entropy and mutual information.
Miller (2003). A new class of entropy estimators for multi-dimensional densities.
Schurmann (2004). Bias analysis in entropy estimation.
Bonachela, Hinrichsen, Munoz (2008). Entropy estimates of small data sets. | Multivariate conditional entropy
Definition of conditional entropy
Suppose $A,B,C,D,E$ are random variables with joint distribution $p(a,b,c,d,e)$. The conditional entropy of $E$ given $A,B,C,D$ is defined as the expected value (over |
55,731 | Is there any tool to test the tendency toward/away from stationarity? | There are many such tests.
Generally, they fall into two categories:
Tests for unit root (I.e. tests of stationarity)
$H_{0}:$ Time series is/are stationary.
$H_{A}:$ Time series has/have unit root (i.e. is non-stationary).
Example: The Hadri-Lagrange multiplier test.
Tests for stationarity (I.e. tests of unit root)
$H_{0}:$ Time series has/have unit root.
$H_{A}:$ Time series is/are stationary.
Example: The Im-Pesaran-Shim test.
There are many versions of these tests, for example, for making such inferences about multiple time series (e.g., a time series for each country, city, etc.). Which tests are appropriate for you depends on your study design (e.g., single or multiple time series, balanced measurements, missing observations, etc.).
I think a wise way to approach making use of such tests is to combine inference from an appropriate version of a test for stationarity, coupled with an appropriate version of a test for unit root:
Reject test for stationarity, and fail to reject test for unit root: conclude data are stationary.
Fail to reject test for stationarity, and reject test for unit root: conclude data have unit root (are non-stationary).
Fail to reject test for stationarity, and fail to reject test for unit root: conclude data are under-powered to make any inference one way or the other.
Reject test for stationarity, and reject test for unit root: think hard about your data! For example with tests like those I mentioned above it may be the case that some time series have unit root, and some time series are stationary. It may also be the case that your data are autoregressive (values have memory of their prior states), but are not fully unit root (i.e. the memory decays eventually, instead of carrying forward infinitely).
For a single time series, the augmented Dickey-Fuller test for stationarity, and the complementary Kwiatkowski–Phillips–Schmidt–Shin test for unit root may be appropriate tools, and these tests are commonly implemented in statistical software (e.g., R, Stata, etc.).
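In R, for example, the two complementary tests can be run side by side with the tseries package (a sketch added here, not part of the original answer; y is a single simulated series):
library(tseries)
set.seed(1)
y <- cumsum(rnorm(200))    # a random walk, i.e. a unit-root series
adf.test(y)                # H0: unit root      -> expect not to reject here
kpss.test(y)               # H0: stationarity   -> expect to reject here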
Further, explicit modeling of autoregressive processes will allow you to make inferences not only about stationarity vs. unit root, but the middle-ground of autoregressive (memoried) processes, which will be stationary in the long run (possibly very long run), but still present difficulties for valid inference over shorter time scales. For example, using OLS regression or some other estimator, one could model the first difference of a time series ($\Delta y_{t} = y_{t} - y_{t-1}$) using a single lag:
$$\Delta y_{t} = \alpha + (\rho - 1)\, y_{t-1} + \zeta_{t}$$
Here $\rho$ is the AR(1) coefficient, so the coefficient estimated on $y_{t-1}$ is $\rho - 1$. A value of $\rho$ which is indistinguishable from $0$ (equivalence tests can help with this inference) is evidence of stationarity (trend stationarity if $\alpha \ne 0$) over the time scale of your data. A value of $|\rho| \approx 1$ is evidence of unit root. A value of $0 < \rho < 1$ is an autoregressive process which is likely to demand an appropriate time series model (see de Boef and Keele) the closer $\rho$ is to 1 (i.e. the longer its memory gets). (There can even be $|\rho|>1$, implying a runaway process... in my discipline, I don't see these much.) Of course, models with more lags are also possible.
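A sketch of that regression in R (added here, not part of the original answer; y is any univariate numeric series):
dy    <- diff(y)
y_lag <- head(y, -1)
fit   <- lm(dy ~ y_lag)
coef(fit)["y_lag"] + 1     # point estimate of the AR(1) coefficient rho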
References
De Boef, S. (2001). Modeling equilibrium relationships: Error correction models with strongly autoregressive data. Political Analysis, 9(1):78–94.
De Boef, S. and Keele, L. (2008). Taking time seriously. American Journal of Political Science, 52(1):184–200.
Dickey, D. A. and Fuller, W. A. (1979). Distribution of the estimators for autoregressive time series with a unit root. Journal of the American Statistical Association, 74(366):427–431.
Hadri, K. (2000). Testing for stationarity in heterogeneous panel data. The Econometrics Journal, 3(2):148–161.
Im, K. S., Pesaran, M. H., and Shin, Y. (2003). Testing for unit roots in heterogeneous panels. Journal of Econometrics, 115:53–74.
Kwiatkowski, D., Phillips, P. C., Schmidt, P., and Shin, Y. (1992). Testing the null hypothesis of stationarity against the alternative of a unit root: how sure are we that economic time series have a unit root? Journal of Econometrics, 54(1-3):159–178. | Is there any tool to test the tendency toward/away from stationarity? | There are many such tests.
Generally, they fall into two categories:
Tests for unit root (I.e. tests of stationarity)
$H_{0}:$ Time series is/are stationary.
$H_{A}:$ Time series has/have unit root ( | Is there any tool to test the tendency toward/away from stationarity?
There are many such tests.
Generally, they fall into two categories:
Tests for unit root (I.e. tests of stationarity)
$H_{0}:$ Time series is/are stationary.
$H_{A}:$ Time series has/have unit root (i.e. is non-stationary).
Example: The Hadri-Lagrange multiplier test.
Tests for stationarity (I.e. tests of unit root)
$H_{0}:$ Time series has/have unit root.
$H_{A}:$ Time series is/are stationary.
Example: The Im-Pesaran-Shim test.
There are many versions of these tests, for example, for making such inferences about multiple time series (e.g., a time series for each country, city, etc.). Which tests are appropriate for you depends on your study design (e.g., single or multiple time series, balanced measurements, missing observations, etc.).
I think a wise way to approach making use of such tests is to combine inference from an appropriate version of a test for stationarity, coupled with an appropriate version of a test for unit root:
Reject test for stationarity, and fail to reject test for unit root: conclude data are stationary.
Fail to reject test for stationarity, and reject test for unit root: conclude data have unit root (are non-stationary).
Fail to reject test for stationarity, and fail to reject test for unit root: conclude data are under-powered to make any inference one way or the other.
Reject test for stationarity, and reject test for unit root: think hard about your data! For example with tests like those I mentioned above it may be the case that some time series have unit root, and some time series are stationary. It may also be the case that your data are autoregressive (values have memory of their prior states), but are not fully unit root (i.e. the memory decays eventually, instead of carrying forward infinitely).
For a single time series, the augmented Dickey-Fuller test for stationarity, and the complementary Kwiatkowski–Phillips–Schmidt–Shin test for unit root may be appropriate tools, and these tests are commonly implemented in statistical software (e.g., R, Stata, etc.).
Further, explicit modeling of autoregressive processes will allow you to make inferences not only about stationarity vs. unit root, but the middle-ground of autoregressive (memoried) processes, which will be stationary in the long run (possibly very long run), but still present difficulties for valid inference over shorter time scales. For example, using OLS regression or some other estimator, one could model the first difference of a time series ($\Delta y_{t} = y_{t} - y_{t-1}$) using a single lag:
$$\Delta y_{t} = \alpha + (\rho - 1)\, y_{t-1} + \zeta_{t}$$
Here $\rho$ is the AR(1) coefficient, so the coefficient estimated on $y_{t-1}$ is $\rho - 1$. A value of $\rho$ which is indistinguishable from $0$ (equivalence tests can help with this inference) is evidence of stationarity (trend stationarity if $\alpha \ne 0$) over the time scale of your data. A value of $|\rho| \approx 1$ is evidence of unit root. A value of $0 < \rho < 1$ is an autoregressive process which is likely to demand an appropriate time series model (see de Boef and Keele) the closer $\rho$ is to 1 (i.e. the longer its memory gets). (There can even be $|\rho|>1$, implying a runaway process... in my discipline, I don't see these much.) Of course, models with more lags are also possible.
References
De Boef, S. (2001). Modeling equilibrium relationships: Error correction models with strongly autoregressive data. Political Analysis, 9(1):78–94.
De Boef, S. and Keele, L. (2008). Taking time seriously. American Journal of Political Science, 52(1):184–200.
Dickey, D. A. and Fuller, W. A. (1979). Distribution of the estimators for autoregressive time series with a unit root. Journal of the American Statistical Association, 74(366):427–431.
Hadri, K. (2000). Testing for stationarity in heterogeneous panel data. The Econometrics Journal, 3(2):148–161.
Im, K. S., Pesaran, M. H., and Shin, Y. (2003). Testing for unit roots in heterogeneous panels. Journal of Econometrics, 115:53–74.
Kwiatkowski, D., Phillips, P. C., Schmidt, P., and Shin, Y. (1992). Testing the null hypothesis of stationarity against the alternative of a unit root: how sure are we that economic time series have a unit root? Journal of Econometrics, 54(1-3):159–178. | Is there any tool to test the tendency toward/away from stationarity?
There are many such tests.
Generally, they fall into two categories:
Tests for unit root (I.e. tests of stationarity)
$H_{0}:$ Time series is/are stationary.
$H_{A}:$ Time series has/have unit root ( |
55,732 | Is there any tool to test the tendency toward/away from stationarity? | You can plot the time-series and inspect it visually. A stationary time-series will have constant variance and have a constant expected value (edit to incorporate @Richard Hardy's comment). plot(Value~Time)
You could look at the ACF and PACF plots. They should die down if the series is stationary.
There is also the Augmented Dickey-Fuller Test for being stationary. In R use ?adf.test for more information. | Is there any tool to test the tendency toward/away from stationarity? | You can plot the time-series and inspect it visually. A stationary time-series will have constant variance and have a constant expected value (edit to incorporate @Richard Hardy's comment). plot(Value | Is there any tool to test the tendency toward/away from stationarity?
You can plot the time-series and inspect it visually. A stationary time-series will have constant variance and have a constant expected value (edit to incorporate @Richard Hardy's comment). plot(Value~Time)
You could look at the ACF and PACF plots. They should die down if the series is stationary.
There is also the Augmented Dickey-Fuller Test for being stationary. In R use ?adf.test for more information. | Is there any tool to test the tendency toward/away from stationarity?
You can plot the time-series and inspect it visually. A stationary time-series will have constant variance and have a constant expected value (edit to incorporate @Richard Hardy's comment). plot(Value |
55,733 | Can a Bernoulli distribution be approximated by a Normal distribution? | Let's analyze the error.
The figure shows plots of the distribution function of various Bernoulli$(p)$ variables in blue and the corresponding Normal distributions in Red. The shaded regions show where the functions differ appreciably.
(Why plot distribution functions instead of density functions? Because a Bernoulli variable has no density function. The densities of good continuous approximations to Bernoulli distributions have huge spikes in neighborhoods of $0$ and $1.$)
No matter what $p$ may be, for some values of $x$ the difference between the two distribution functions will be large. After all, the Bernoullli distribution function has two leaps in it: it jumps by $1-p$ at $x=0$ and again by $p$ at $x=1.$ The Normal distribution function is going to split the greater of those two leaps into two parts, whence the larger of the two vertical differences--the largest error--must be at least $1/4.$ In fact, it's always greater even than that.
Here is a plot of the maximum difference between the two functions, as it depends on $p:$
It is never smaller than $0.341345,$ attained when $p=1/2.$ Because probabilities all lie between $0$ and $1,$ this is a substantial error. It is difficult to conceive of circumstances where this approximation would be acceptable, except perhaps when $x\lt 0$ or $x\gt 1:$ but then why use a Normal distribution at all? Just approximate those values as $0$ and $1,$ respectively, without any error at all. | Can a Bernoulli distribution be approximated by a Normal distribution? | Let's analyze the error.
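To make the headline number easy to check, here is a minimal sketch (assuming NumPy and SciPy are available; the helper names are mine) that evaluates the largest vertical gap between the Bernoulli$(p)$ distribution function and the Normal$(p, p(1-p))$ distribution function on a fine grid:
import numpy as np
from scipy.stats import norm

def bernoulli_cdf(x, p):
    # Step CDF of a Bernoulli(p) variable: 0 below 0, 1-p on [0, 1), 1 from 1 onwards.
    return np.where(x < 0, 0.0, np.where(x < 1, 1.0 - p, 1.0))

def max_cdf_gap(p, n_grid=200001):
    # Evaluate both CDFs on a fine grid, plus points just below the jumps,
    # where the vertical gap of the step function is approached.
    x = np.concatenate([np.linspace(-2, 3, n_grid), [-1e-9, 1 - 1e-9]])
    gap = np.abs(bernoulli_cdf(x, p) - norm.cdf(x, loc=p, scale=np.sqrt(p * (1 - p))))
    return gap.max()

print(round(max_cdf_gap(0.5), 6))   # ~0.341345, the minimum over p reported above
Running it over a range of $p$ reproduces the U-shaped curve of maximum error described above.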
55,734 | Can a Bernoulli distribution be approximated by a Normal distribution? | I don't think you can conclude that N(p, p(1−p)) could represent an approximation of Bernoulli(p). First of all, for a Bernoulli variable, a random sample can only be 0 or 1; on the other hand, the range of a normal variable is from minus infinity to infinity. Secondly, if we have a random distribution with mean p and variance p(1-p), then once we draw lots of samples from this distribution and add them together, the distribution of their sum will also be approximately normal with mean np and variance np(1-p) due to the central limit theorem. So we certainly can't say that such a random distribution represents an approximation of Bernoulli(p)...
55,735 | u-shape for logistic regression? | Yes. Include a quadratic term for wine units consumed. The statistical significance of this term may indicate the presence of a turning point, at which the linear trend pivots. It may also indicate an "acceleration" effect, where sequentially higher or lower doses may have escalating trends with the outcome risk. Accompanied by the LOESS smooth which you have already produced, it's compelling evidence in favor of the previously noted "U-shaped" trend with alcohol consumption.
If one includes an intercept term, a linear term, and a quadratic term, then the resulting model fits a quadratic trend in the log-odds whose apex location and value optimally predict the trend in the data. If one omits the linear term, the quadratic form is constrained to achieve its extremum at the origin (no wine consumed), which will not reflect the noted reversal of trend over the range of exposure.
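As a concrete illustration of the quadratic-term idea, here is a minimal sketch (assuming statsmodels is available; the variable names and simulated data are made up purely for illustration):
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
wine = rng.uniform(0, 10, 500)                       # hypothetical units of wine consumed
log_odds = 0.15 * (wine - 4.0) ** 2 - 1.0            # a U-shaped true relationship
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-log_odds)))

X = sm.add_constant(np.column_stack([wine, wine ** 2]))   # intercept, linear, quadratic
fit = sm.Logit(y, X).fit(disp=0)
print(fit.params)   # a clearly positive coefficient on the squared term signals the U-shape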
55,736 | How is an RNN (or any neural network) a parametric model? | A neural network is defined by the weights on its connections, which are its parameters. It doesn't matter what data the network was trained upon, once you have a set of weights, you can throw away your training dataset without repercussion. If you want to classify a new sample, you can do it with only the parameterized network. Even if your training dataset is very large, you can still describe the network with exactly the same number of parameters as a small training set.
A k-nearest neighbor classifier, on the other hand, is nonparametric because it relies on the training data to make any predictions. You need every training point to describe your classifier; there's no way to abstract it into a parameterized model. In essence, the number of "parameters" in this model grows with the number of training points you have. If you want to classify a new sample, you need the training data itself, because it cannot be summarized by a smaller set of parameters. A kNN classifier trained on a large dataset will have more effective parameters than a kNN classifier trained on a small dataset.
The neural network is parametric because it uses a fixed number of parameters to build a model, independent of the size of the training data.
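A minimal sketch of the contrast (plain Python; the layer sizes and feature count are arbitrary placeholders): the parameter count of a fixed network architecture does not change with the training-set size, while a kNN classifier has to keep every training point.
def mlp_param_count(layer_sizes):
    # Weights plus biases of a fully connected network, e.g. [20, 64, 64, 1].
    return sum(n_in * n_out + n_out
               for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:]))

n_features = 20
for n_train in (100, 10_000, 1_000_000):
    print(n_train, "training rows ->",
          mlp_param_count([n_features, 64, 64, 1]), "network parameters vs.",
          n_train * n_features, "stored values for kNN")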
55,737 | Classification when Learning is not Feasible | A decision tree will always try to find splits and fit the data, whether it is "noise" or not. The nodes will end up being more pure than the initial full distribution. However, you'll likely get very close to a score that represents the distribution of the class labels.
The only way to get a single node is if all the data are the same. You can test this out yourself, using different distributions of X and Y: https://share.cocalc.com/share/a20be7c6-920f-4a02-a210-5f78edc58fd4/2018-12-01-191018.ipynb?viewer=share
Here's an excerpt from that for random features, with a section of the tree showing the pure nodes:
import numpy as np
import random
import graphviz
from sklearn import tree
from sklearn.model_selection import train_test_split

# Constants must be defined before test_tree(), because the default value of Y
# is evaluated once, when the function is defined.
seed = 42
random.seed(seed)
np.random.seed(seed)
n_features = 10
n_samples = 500
x_max = 100
x_min = 0
n_classes = 3

def test_tree(X, Y=np.random.randint(0, n_classes, (n_samples, 1))):
    # Fit a fully grown decision tree to X and the (by default random) labels Y,
    # report the held-out score, and return a rendering of the tree.
    X_train, X_test, y_train, y_test = train_test_split(X, Y, random_state=seed)
    clf = tree.DecisionTreeClassifier()
    clf.fit(X_train, y_train)
    print("Score on test data: ", clf.score(X_test, y_test))
    dot_data = tree.export_graphviz(clf, out_file=None,
                                    feature_names=[str(i) for i in range(X.shape[1])],
                                    class_names=[chr(i) for i in range(ord('a'), ord('a') + n_classes)],
                                    filled=True, rounded=True,
                                    special_characters=True)
    return graphviz.Source(dot_data)

# Test a random distribution of samples and features
test_tree(np.random.randint(x_min, x_max, (n_samples, n_features)))
('Score on test data: ', 0.312)
And when all the data are the same:
# Test where all samples are the same value
test_tree(np.ones((n_samples, n_features)))   # X is not defined at top level; reuse the same shape
('Score on test data: ', 0.34399999999999997)
Try increasing the number of features or number of samples or both, or altering the class distribution, and you'll see you can get fairly consistent score results representing the distribution of the classes.
55,738 | Classification when Learning is not Feasible | Usually, the "empty" decision tree you are referring to is a single node (e.g., in WEKA). It contains all the data (no splits). Other ML algorithms have some kind of "empty" model as well.
On real-life data, this will not happen, and model complexity in this setting will be "more than 0".
Class distributions will vary more with increasing model complexity.
55,739 | Classification when Learning is not Feasible | This really depends on how you define "noise".
Assume you have training data and there exists a relationship between your target and predictors over some partition of the data. Then your tree will find a way in which to split and make predictions it believes to be better than the mean. You then apply your model to a test set and discover your tree made predictions even worse than applying the mean.
Was the signal in the training data "noise"? Certainly not if you had no hold-out data or didn't know the underlying data-generating function. Certainly if you did. It all depends on what you consider "noise".
55,740 | Classification when Learning is not Feasible | This is my first time answering a question on this site.
I think this setting violates a requirement that is universal in almost all machine learning algorithms, namely that the examples $(\mathbf{x}_i,y_i)$ have to be independently drawn from the same underlying distribution $\mathcal{D}: \mathcal{X}\times \mathcal{Y}$. In this sense, it is unlikely that the $\mathbf{x}_i$'s provide no information about the $y_i$'s.
However, in practice, this might occur because of inappropriate measurement of your data (either the $\mathbf{x}_i$'s or the $y_i$'s), but this has nothing to do with the machine learning algorithm itself; it is related to your measurement, and some data preprocessing techniques might be relevant to your question.
55,741 | How can I mix image and data into a CNN | You need to define sub-modules of the network and then somehow merge them and do further processing on the whole data. This is usually done by creating smaller neural networks within the bigger one. For example, you have one sub-network that processes images (say, a convolutional network) and another one that processes tabular data (say, a dense network); then you combine the outputs of both networks and put some layers on top (dense layers are the simplest case). By merging I mean here operations like concatenation of the outputs, but other operations are possible as well; for example, if all the dimensions match you can simply add them.
A recent example of such a network was described on the Google Cloud blog (diagram below), where they used three sources of data: raw videos, tabular data on the movies, and tabular data on viewers of the movies, to forecast the movie audience.
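A minimal sketch of the sub-network idea, assuming the TensorFlow/Keras functional API; the input shapes and layer sizes below are arbitrary placeholders, not the architecture from the blog post:
from tensorflow.keras import layers, Model

image_in = layers.Input(shape=(64, 64, 3))              # image branch (CNN)
x = layers.Conv2D(16, 3, activation="relu")(image_in)
x = layers.MaxPooling2D()(x)
x = layers.Flatten()(x)

tabular_in = layers.Input(shape=(10,))                  # tabular branch (dense)
t = layers.Dense(32, activation="relu")(tabular_in)

merged = layers.Concatenate()([x, t])                   # merge the two branches
h = layers.Dense(32, activation="relu")(merged)
out = layers.Dense(1, activation="sigmoid")(h)

model = Model(inputs=[image_in, tabular_in], outputs=out)
model.compile(optimizer="adam", loss="binary_crossentropy")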
55,742 | Infinitesimal independence | If the $X$ and $Y$ rv's have a joint density $f(x,y)$, then you can compute the marginals of $X$ and $Y$, say $g(x), h(y)$ and compare the joint density to the independence density $g(x)h(y)$, for example via the likelihood ratio
$$
\frac{f(x,y)}{g(x)h(y)}
$$
and plot that. In the following we have done this for the example of a joint Cauchy distribution:
The R code used is:
library(mvtnorm)
library(lattice)
mycplot <- function(df=1, ...) {
    x <- y <- seq(from=-3, to=3, length.out=101)
    grid <- expand.grid(x=x, y=y)
    grid$z <- exp( mvtnorm::dmvt(cbind(grid$x, grid$y), df=df, log=TRUE) -
                   dt(grid$x, df=df, log=TRUE) - dt(grid$y, df=df, log=TRUE))
    lattice::contourplot(z ~ x*y, data=grid, region=TRUE, ...)
}
55,743 | Infinitesimal independence | In fact I realized my definition with the small interval around points was weird: I realized there exists the Pointwise Mutual Information (https://en.wikipedia.org/wiki/Pointwise_mutual_information), which simply measures the dependence between two random variables at a particular point x, y. I guess it was what I was looking for.
Note: actually the Pointwise Mutual Information is nothing other than a particular log-likelihood ratio, which is described in @kjetil b halvorsen's answer below: https://stats.stackexchange.com/posts/392272
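For a discrete example, the pointwise mutual information is just $\log \frac{p(x,y)}{p(x)p(y)}$ evaluated at one cell of the joint table. A small sketch (NumPy; the 2x2 joint distribution is made up purely for illustration):
import numpy as np

joint = np.array([[0.30, 0.10],      # p(x, y)
                  [0.20, 0.40]])
px = joint.sum(axis=1, keepdims=True)    # marginal of x
py = joint.sum(axis=0, keepdims=True)    # marginal of y
pmi = np.log(joint / (px * py))          # zero exactly where the point behaves "independently"
print(np.round(pmi, 3))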
55,744 | Collinearity of features and random forest | Actually, the blog post does not say that there is an issue with correlated features. It says only that the feature importances that they calculated did not yield the correct answer. Now, this does not have to be a problem with random forest itself, but with the feature importance algorithm they used (for example, the default one seems to be biased). They also noticed that including the correlated feature did not hurt the cross-validation performance on the particular dataset they used. So the question is whether you want to make predictions, or to use the model to infer something about the data.
By design, random forest should not be affected by correlated features. First of all, for each tree you usually train on a random subset of features, so the correlated features may or may not be used for a particular tree. Second, consider the extreme case where you have some feature duplicated in your dataset (let's call them $A$ and $A'$). Imagine that to make a decision, a tree needs to make several splits on this particular feature. So the tree makes the first split on $A$, then it may make the second split using either $A$ or $A'$. Since they are the same, it could as well flip a coin to choose between them and get exactly the same result. If they are not perfectly correlated, this is only a question of whether the decision tree can pick, between the features, the one that works better, but that is a question about the quality of the algorithm in general, not about correlated features.
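The duplicated-feature argument is easy to check empirically. A minimal sketch (scikit-learn; the synthetic data are made up for illustration): appending an exact copy of a column barely moves the cross-validated accuracy of a random forest.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_dup = np.hstack([X, X[:, [0]]])            # duplicate feature 0 exactly

rf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(rf, X, y, cv=5).mean())
print(cross_val_score(rf, X_dup, y, cv=5).mean())   # essentially unchanged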
55,745 | Collinearity of features and random forest | As pointed out in the comments you should clarify what is your purpose and the data at hands. Several models such as partial least squares take advantage of correlated predictors.
The findCorrelation function in the caret package could serve to detect and remove correlated predictors according to a prespecified threshold:
descrCor <- cor(filteredDescr)
highlyCorDescr <- findCorrelation(descrCor, cutoff = .75)
filteredDescr <- filteredDescr[,-highlyCorDescr]
You should check this source related to preprocessing in Caret.
55,746 | Collinearity of features and random forest | @Tim made some excellent points. One method of determining what (if anything) collinearity is doing to your results is to use the perturb package in R. This adds random noise to the variables and then sees what happens to the results. It requires "some kind of model object" but that need not be a regression. I don't know whether the random forest methods in R produce the right kind of object (I am not a user of random forests).
55,747 | Proof of Convergence in Distribution with unbounded moment | Firstly define $Y_{k,n} = X_k 1\{ |X_k| \leq n \}$. Then it is easy to see that $Var(Y_{k,n}) = 2 \log n$ and that
$$Var (T_n ) = Var \left( \sum_{k=1}^n Y_{k,n} \right) = 2n \log n$$
Letting $S_n = \sum_{k=1}^n X_k$ we also see that
$$P(S_n \neq T_n) \leq P(\cup_k \{X_k \neq Y_{k,n}\}) \leq n P(|X_k| > n) = \frac{n}{n^2} \to 0$$
So that it is enough to show
$$\frac{T_n}{\sqrt{2n \log n}} \to N(0,1)$$
and the result follows by Slutsky's theorem for the original sum $S_n$.
This new sum $T_n$ now has finite variance so we can apply the Lindeberg-Feller theorem (otherwise called Lindeberg condition).
Let $Z_{k,n} = \frac{Y_{k,n}}{\sqrt{2 n \log n}}$. Then we see that if the two conditions of the Lindeberg-Feller theorem hold:
$\sum_{k=1}^n Var(Z_{k,n}) = 1 > 0$ for all $n$ (holds trivially)
For all $\epsilon > 0$, $\sum_{k=1}^n E[|Z_{k,n}|^2 1\{ |Z_{k,n}| > \epsilon \}] \to 0$
then the result follows. So you only need to verify the second condition.
With the second condition you should note that you can rewrite $1\{ |Z_{k,n}| > \epsilon \}$ as
$$1\{ |Z_{k,n}| > \epsilon \} = 1\{ |Y_{k,n}| > \sqrt{2 n \log n }\,\epsilon \} = 1\{ |X_k| 1 \{ |X_k| \leq n \} > \sqrt{2 n \log n }\,\epsilon \}$$
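A quick Monte Carlo sanity check of the limit is possible because the tail of the density gives $P(|X|>x)=1/x^2$ for $x\ge 1$, so $|X|$ can be simulated as $U^{-1/2}$ with $U\sim\text{Uniform}(0,1)$ and a random sign. This sketch (NumPy; sample sizes chosen arbitrarily) looks at the coverage of the standard normal interval, keeping in mind that convergence is slow because of the $\log n$ factor:
import numpy as np

rng = np.random.default_rng(1)
n, reps = 100_000, 1_000
s = np.empty(reps)
for r in range(reps):
    u = rng.uniform(size=n)
    x = rng.choice([-1.0, 1.0], size=n) / np.sqrt(u)   # |X| = U^{-1/2} with a random sign
    s[r] = x.sum() / np.sqrt(2 * n * np.log(n))
print(np.mean(np.abs(s) < 1.96))   # should be in the vicinity of 0.95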
55,748 | Proof of Convergence in Distribution with unbounded moment | This is a proof by c.f. approach:
The c.f. of $X_i$ is
$$
\phi_i(t) = \int_{R}e^{itx}|x|^{-3}\boldsymbol{1}_{x \notin (-1,1)}dx = 2\int_{1}^{\infty}\frac{\cos(tx)}{x^3}dx.
$$
Hence, for $Y_n = (X_1+X_2+\dots+X_n)(\sqrt{n\log n})^{-1}$, we have
\begin{align*}
\phi_{Y_n}(t) =& \phi_i\left(\frac{t}{\sqrt{n\log n}}\right)^n\\
=& \left(2\int_{1}^{\infty}\cos\left(\frac{tx}{\sqrt{n\log n}}\right)\frac{1}{x^3}dx\right)^n.\\
\end{align*}
We first consider the integral:
\begin{align*}
2\int_{1}^{\infty}\cos\left(\frac{tx}{\sqrt{n\log n}}\right)\frac{1}{x^3}dx =& 1 + 2\int_{1}^{\infty}\cos\left(\frac{tx}{\sqrt{n\log n}}\right)\frac{1}{x^3}-\frac{1}{x^3}dx\\
=& 1 + 2\int_{1}^{\sqrt{n\log\log n}}\cos\left(\frac{tx}{\sqrt{n\log n}}\right)\frac{1}{x^3}-\frac{1}{x^3}dx \\
+& 2\int_{\sqrt{n\log\log n}}^{\infty}\cos\left(\frac{tx}{\sqrt{n\log n}}\right)\frac{1}{x^3}-\frac{1}{x^3}dx,
\end{align*}
since for $x \in [1, \sqrt{n\log\log n}]$, ${\displaystyle \frac{tx}{\sqrt{n\log n}}} \to 0$ as $n \to \infty$. Hence, we can apply the Taylor expansion of the cosine term in the first integral around $0$. Then we have
\begin{align*}
2\int_{1}^{\infty}\cos\left(\frac{tx}{\sqrt{n\log n}}\right)\frac{1}{x^3}dx =& 1 + 2\int_{1}^{\sqrt{n\log\log n}}-\frac{t^2}{2\,x\,n\log n} + \left[\frac{t^4x}{24(n\log n)^2 }-\dots\right]dx \\
+& 2\int_{\sqrt{n\log\log n}}^{\infty}\cos\left(\frac{tx}{\sqrt{n\log n}}\right)\frac{1}{x^3}-\frac{1}{x^3}dx\\
=& 1 + 2\int_{1}^{\sqrt{n\log\log n}}-\frac{t^2}{2\,x\,n\log n}dx + o(1/n)\\
+& 2\int_{\sqrt{n\log\log n}}^{\infty}\cos\left(\frac{tx}{\sqrt{n\log n}}\right)\frac{1}{x^3}-\frac{1}{x^3}dx\\
=& 1 -\frac{t^2\log( n\log\log n)}{2n\log n} + o(1/n)\\
+& 2\int_{\sqrt{n\log\log n}}^{\infty}\cos\left(\frac{tx}{\sqrt{n\log n}}\right)\frac{1}{x^3}-\frac{1}{x^3}dx\\
\end{align*}
Now
\begin{align*}
\int_{\sqrt{n\log\log n}}^{\infty}|\cos\left(\frac{tx}{\sqrt{n\log n}}\right)\frac{1}{x^3}-\frac{1}{x^3}|dx \leq& \int_{\sqrt{n\log\log n}}^{\infty}\frac{2}{x^3}dx\\
=& \frac{1}{n\log\log n} \in o(1/n).
\end{align*}
Hence,
$$
2\int_{1}^{\infty}\cos\left(\frac{tx}{\sqrt{n\log n}}\right)\frac{1}{x^3}dx = 1 -\frac{t^2\log( n\log\log n)}{2n\log n} + o(1/n).
$$
Letting $n \to \infty$, we have
$$
\lim_{n \to \infty}\left(2\int_{1}^{\infty}\cos\left(\frac{tx}{\sqrt{n\log n}}\right)\frac{1}{x^3}dx\right)^n = \lim_{n \to \infty}\left(1 -\frac{t^2\log( n\log\log n)}{2n\log n}\right)^n = \lim_{n \to \infty}\left(1-\frac{t^2}{2n}\right)^n = e^{-t^2/2},
$$
which completes the proof.
55,749 | Proof of Convergence in Distribution with unbounded moment | To see if this gets us anywhere useful, I'm going to go some of the way along the lines suggested by Glen_b in the comments. The characteristic function of the underlying random variables is:
$$\begin{equation} \begin{aligned}
\varphi_X(t) = \mathbb{E}(\exp(itX))
&= \int \limits_{\mathbb{R}} \exp(itx) f_X(x) dx \\[6pt]
&= \int \limits_{\mathbb{R}-(-1,1)} |x|^{-3} \exp(itx) dx \\[6pt]
&= \int \limits_{\mathbb{R}-(-1,1)} |x|^{-3} \cos(tx) dx + i \int \limits_{\mathbb{R}-(-1,1)} |x|^{-3} \sin(tx) dx \\[6pt]
&= \int \limits_{\mathbb{R}-(-1,1)} |x|^{-3} \cos(tx) dx \\[6pt]
&= - \int \limits_{-\infty}^{-1} \frac{\cos(tx)}{x^3} dx + \int \limits_1^\infty \frac{\cos(tx)}{x^3} dx \\[6pt]
&= 2 \int \limits_1^\infty \frac{\cos(|t|x)}{x^3} dx. \\[6pt]
\end{aligned} \end{equation}$$
Now, using the change of variable $y = x^{-2}$ we have $dy = -2 x^{-3} dx$ which gives:
$$\begin{equation} \begin{aligned}
\varphi_X(t)
&= \int \limits_0^1 \cos \Big( \frac{|t|}{\sqrt{y}} \Big) dy. \\[6pt]
\end{aligned} \end{equation}$$
We can see that the characteristic function is symmetric around $t=0$. Hence, without loss of information we can take $t>0$ and write it in simpler terms as:
$$\varphi_X(t) = \int \limits_0^1 \cos \Big( \frac{t}{\sqrt{y}} \Big) dy.$$
The required limit: Now we define the partial sums:
$$S_n = \frac{X_1 + \cdots + X_n}{\sqrt{2n \log n}}.$$
Using the rules for characteristic functions we then have:
$$\varphi_{S_n}(t)
= \varphi_X \Bigg( \frac{t}{\sqrt{2n \log n}} \Bigg)^n
= \Bigg[ \int \limits_0^1 \cos \Bigg( \frac{t}{\sqrt{2n}} \cdot \frac{1}{\sqrt{y \log n}} \Bigg) dy \Bigg]^n.$$
To prove the convergence result we have to show that $\lim_{n \rightarrow \infty} \varphi_{S_n}(t) = \exp( - t^2/2 )$. Using Bernoulli's expansion for $e$ it would be sufficient to prove that as $n$ becomes large we have:
$$\int \limits_0^1 \cos \Bigg( \frac{t}{\sqrt{2n}} \cdot \frac{1}{\sqrt{y \log n}} \Bigg) dy \longrightarrow 1 - \frac{t^2}{2n}.$$
I will not go any further than this for now. It is not clear to me whether this result holds, or how you would prove it, but at least this gets you to a possible pathway to a solution. To prove this limit, you would need to find some useful expansion of the integrand that will ensure that higher-order terms vanish in the integral as $n \rightarrow \infty$.
55,750 | Proof of Convergence in Distribution with unbounded moment | When I started reading this question I was a bit confused. This factor $\sqrt{n\log n}$ is not intuitive to me. It is not the typical expression in the CLT.
Below I am trying to view your question in a more intuitive way without resorting to characteristic functions and looking at the limits of higher moments (which would be something mimicking the proof of the CLT). I believe that the question is already 'failing' at a simpler level.
The variance of the distribution is $2$, so the variance of the sum is $2n$. When this sum is divided by $\sqrt{n\log n}$ you get a variance of $\frac{2}{\log n}$. So this is not going to equal a $N(0,1)$ distribution because the variance is not equal.
Unless I am missing something you cannot prove the statement that you are trying to prove, and all the magic with characteristic functions seems like a distraction to me.
55,751 | Why do my XGboosted trees all look the same? | When a gradient boosting machine fits a tree $f(x)$ to the target variable $y$, it calculates the error (used in the next iteration to fit the next tree) as:
$$e = y - \epsilon f(x)$$
where in this case $\epsilon = 0.1$, the learning rate. Now imagine that there is a fairly simple tree $g(x)$ that actually fits the data reasonably well (for a simple tree, that is.) At our first iteration, our model finds $g(x)$ and calculates the errors $e$ for the next step. At that step, $e$ will likely be fit quite well by the same tree, because we didn't subtract all of $g(x)$ from $y$, we only subtracted $10\%$ of it. Writing rather loosely, $90\%$ of $g(x)$ is still in the error term $e$. Because $e \neq y$, the leaf values will be a little different, but the tree structure will be the same, so at the next iteration we'll fit $g(x)$ again.
We may have to do this quite a few times before we have done enough fitting of $g(x)$ to $y$ for other tree structures to become obvious enough for the GBM to find in preference to more fitting of $g$.
The fundamental cause of this is that we are taking a step towards $g(x)$ of fixed length - $\epsilon$ - not performing a line search to find the optimum step length towards $g(x)$ at each iteration then taking one step of that length. This is a feature! It helps prevent overfitting. If you fit "optimum" trees at every step of the GBM, you very rapidly overfit, even with small trees. (Note that they aren't really optimum trees, as they are found through a greedy search heuristic.) Random Forests do fit "optimum" trees at every step, and deep ones too, but deal with the overfitting issue by averaging across trees. GBM takes a different approach, one which may appear to waste iterations in cases such as yours, but in general works quite well on a broad range of problems.
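A small sketch of this behaviour (scikit-learn; the one-feature data set is made up for illustration): boosting depth-1 regression trees by hand with a learning rate of 0.1 keeps choosing the same split threshold for several rounds, because most of $g(x)$ is still left in the residuals.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.RandomState(0)
X = rng.uniform(0, 10, size=(500, 1))
y = (X[:, 0] > 5).astype(float) + 0.1 * rng.randn(500)   # one obvious split near x = 5

pred = np.zeros_like(y)
for i in range(5):
    stump = DecisionTreeRegressor(max_depth=1).fit(X, y - pred)   # fit the residuals
    pred += 0.1 * stump.predict(X)                                # learning rate 0.1
    print(i, stump.tree_.threshold[0])    # the same threshold appears round after round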
55,752 | Uniqueness of Reproducing Kernel Hilbert Spaces | Prof. Dino Sejdinovic, author of the slides my question referred to, was kind enough to send me the following answer:
Feature spaces are not unique and indeed they are all isomorphic. RKHS of a given kernel is unique as a space of functions, i.e. the one that contains functions of the form $k(\cdot , x)$. If these functions form a finite-dimensional RKHS, that RKHS is isomorphic to a standard Euclidean space of that dimension. For intuition about uniqueness for the infinite-dimensional case, you can consider an orthonormal basis of $\mathcal H_k$, e.g. given by Mercer's theorem -- then the $\ell^2$ space of coefficients corresponding to this basis is a valid feature space and isomorphic to the RKHS. You can choose many different bases so there are many versions of feature spaces. But they all correspond to the same space of functions which is the RKHS.
In addition, the first twenty minutes of this class by Prof. Lorenzo Rosasco delve a bit deeper into the relation between feature spaces, feature maps and RKHS.
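A small numeric illustration of the non-uniqueness of feature spaces (NumPy; the kernel and the rotation are chosen arbitrarily): two different feature maps, one an orthogonal rotation of the other, induce exactly the same kernel $k(x,z)=(1+xz)^2$ and hence the same RKHS.
import numpy as np

def phi1(x):
    return np.array([1.0, np.sqrt(2) * x, x ** 2])    # explicit feature map for (1 + xz)^2

rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))          # a random orthogonal matrix
phi2 = lambda x: Q @ phi1(x)                          # a second, different feature map

x, z = 0.7, -1.3
print((1 + x * z) ** 2, phi1(x) @ phi1(z), phi2(x) @ phi2(z))   # all three values agree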
55,753 | Estimating the MLE where the parameter is also the constraint | Your density function is:
$$p_X(x|\alpha,\beta) = \frac{\alpha}{\beta} \Big( \frac{x}{\beta} \Big)^{\alpha-1} \quad \quad \quad \text{for } 0 \leqslant x \leqslant \beta.$$
Hence, your log-likelihood function is:
$$\ell_\mathbf{x}(\alpha, \beta) = n \ln \alpha - n \alpha \ln \beta + (\alpha-1) \sum_{i=1}^n \ln x_i \quad \quad \quad \text{for } 0 \leqslant x_{(1)} \leqslant x_{(n)} \leqslant \beta.$$
The score function and Hessian matrix are given respectively by:
$$\begin{equation} \begin{aligned}
\nabla \ell_\mathbf{x}(\alpha, \beta)
&= \begin{bmatrix}
n/\alpha - n \ln \beta + \sum_{i=1}^n \ln x_i \\[6pt]
-n \alpha/\beta \\[6pt]
\end{bmatrix}, \\[10pt]
\nabla^2 \ell_\mathbf{x}(\alpha, \beta)
&= \begin{bmatrix}
-n/\alpha^2 & -n/\beta \\[6pt]
-n/\beta & n \alpha/\beta^2 \\[6pt]
\end{bmatrix}.
\end{aligned} \end{equation}$$
The log-likelihood is strictly decreasing with respect to $\beta$ (its partial derivative $-n\alpha/\beta$ is negative) and strictly concave in $\alpha$ for any fixed $\beta$ (since $\partial^2 \ell / \partial \alpha^2 = -n/\alpha^2 < 0$). This means that the MLE of $\beta$ occurs at the boundary point $\beta = x_{(n)}$, and the MLE of $\alpha$ occurs at the unique critical point in $\alpha$. We have the estimators:
$$\hat{\alpha} = \frac{n}{\sum_{i=1}^n (\ln x_{(n)} - \ln x_i)} \quad \quad \quad \hat{\beta} = x_{(n)}.$$
As you can see, when your parameter enters the density as a bound on the range of $x$ you can get a situation where the MLE occurs at the boundary of the log-likelihood. This is all just standard use of calculus techniques --- sometimes maximising values of an objective function occur at critical points and sometimes they occur at boundary points.
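A quick numeric check of the closed-form estimators (NumPy; the true parameter values are arbitrary). Since the CDF is $(x/\beta)^\alpha$ on $[0,\beta]$, a draw can be simulated as $\beta U^{1/\alpha}$ with $U\sim\text{Uniform}(0,1)$:
import numpy as np

rng = np.random.default_rng(3)
alpha_true, beta_true, n = 2.5, 4.0, 20_000
x = beta_true * rng.uniform(size=n) ** (1.0 / alpha_true)

beta_hat = x.max()
alpha_hat = n / np.sum(np.log(beta_hat) - np.log(x))
print(alpha_hat, beta_hat)    # close to 2.5 and 4.0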
55,754 | Estimating the MLE where the parameter is also the constraint | Looks like both $\alpha$ and $\beta$ are unknown here. So our parameter is $\theta=(\alpha,\beta)$.
The population pdf is $$f_{\theta}(x)=\frac{\alpha}{\beta^{\alpha}}x^{\alpha-1}\mathbf1_{0<x<\beta}\quad,\,\alpha>0$$
So, given the sample $(x_1,x_2,\ldots,x_n)$, likelihood function of $\theta$ is
\begin{align}
L(\theta)&=\prod_{i=1}^n f_{\theta}(x_i)
\\&=\left(\frac{\alpha}{\beta^{\alpha}}\right)^n\left(\prod_{i=1}^n x_i\right)^{\alpha-1}\mathbf1_{0<x_1,x_2,\ldots,x_n<\beta}
\\&=\left(\frac{\alpha}{\beta^{\alpha}}\right)^n\left(\prod_{i=1}^n x_i\right)^{\alpha-1}\mathbf1_{0<x_{(n)}<\beta}\quad,\,\alpha>0
\end{align}
, where $x_{(n)}=\max_{1\le i\le n} x_i$ is the largest order statistic.
The log-likelihood is therefore $$\ell(\theta)=n(\ln\alpha-\alpha\ln\beta)+(\alpha-1)\sum_{i=1}^n \ln x_i+\ln(\mathbf1_{0<x_{(n)}<\beta})$$
Observe that, given the sample, the parameter space has become $$\Theta=\{\theta:\alpha>0,\beta>x_{(n)}\}$$
Keeping $\alpha$ fixed, justify that $\ell(\theta)$ is maximized for the minimum value of $\beta$ subject to the constraint $\beta\in(x_{(n)},\infty)$. In other words, as you say, $\ell(\theta)$ is a decreasing function of $\beta$ for fixed $\alpha$. Hence conclude that MLE of $\beta$ as you guessed is $$\hat\beta=X_{(n)}$$
It is now valid to derive the MLE of $\alpha$ by differentiating the log-likelihood as you have done. This MLE is likely to depend on the MLE of $\beta$.
Indeed,
\begin{align}
\frac{\partial\ell}{\partial\alpha}&=\frac{n}{\alpha}-n\ln\beta+\sum_{i=1}^n \ln x_i
\end{align}
, which vanishes if and only if $$\alpha=\frac{n}{n\ln\beta-\sum_{i=1}^n\ln x_i}$$
(Since $x_i<\beta\implies \ln x_i<\ln\beta\implies \sum \ln x_i<n\ln\beta$, the above expression is defined.)
So our possible candidate for MLE of $\alpha$ is $$\hat\alpha=\frac{n}{n\ln\hat\beta-\sum_{i=1}^n\ln x_i}$$
At this point, you can finish your argument saying that MLE of $\theta=(\alpha,\beta)$ is $\hat\theta=(\hat\alpha,\hat\beta)$.
But since this is a maximization problem in two variables $(\alpha,\beta)$, you could perhaps verify that $$\ell(\hat\theta)\ge \ell (\theta)$$ holds for every $\theta$. This would be a bit more rigorous I think. | Estimating the MLE where the parameter is also the constraint | Looks like both $\alpha$ and $\beta$ are unknown here. So our parameter is $\theta=(\alpha,\beta)$.
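If you want a rough numerical version of that final check, here is a sketch (illustrative parameter values only, and a coarse grid rather than a proof): it evaluates the log-likelihood above on a grid of $(\alpha,\beta)$ values and confirms that no grid point beats the closed-form candidate $(\hat\alpha,\hat\beta)$.

```python
import numpy as np

rng = np.random.default_rng(1)
alpha0, beta0, n = 2.0, 3.0, 200
x = beta0 * rng.uniform(size=n) ** (1.0 / alpha0)   # simulate from the model

def loglik(alpha, beta, x):
    # n(ln a - a ln b) + (a - 1) sum(ln x); -inf outside the admissible region
    if alpha <= 0 or beta < x.max():
        return -np.inf
    return len(x) * (np.log(alpha) - alpha * np.log(beta)) + (alpha - 1) * np.log(x).sum()

beta_hat = x.max()
alpha_hat = len(x) / (len(x) * np.log(beta_hat) - np.log(x).sum())
ll_hat = loglik(alpha_hat, beta_hat, x)

no_grid_point_beats_mle = all(
    loglik(a, b, x) <= ll_hat + 1e-9
    for a in np.linspace(0.1, 10.0, 60)
    for b in np.linspace(x.max(), 3.0 * x.max(), 60)
)
print(no_grid_point_beats_mle)  # True
```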
The population pdf is $$f_{\theta}(x)=\frac{\alpha}{\beta^{\alpha}}x^{\alpha-1}\mathbf1_{0<x<\beta}\ | Estimating the MLE where the parameter is also the constraint
Looks like both $\alpha$ and $\beta$ are unknown here. So our parameter is $\theta=(\alpha,\beta)$.
The population pdf is $$f_{\theta}(x)=\frac{\alpha}{\beta^{\alpha}}x^{\alpha-1}\mathbf1_{0<x<\beta}\quad,\,\alpha>0$$
So, given the sample $(x_1,x_2,\ldots,x_n)$, likelihood function of $\theta$ is
\begin{align}
L(\theta)&=\prod_{i=1}^n f_{\theta}(x_i)
\\&=\left(\frac{\alpha}{\beta^{\alpha}}\right)^n\left(\prod_{i=1}^n x_i\right)^{\alpha-1}\mathbf1_{0<x_1,x_2,\ldots,x_n<\beta}
\\&=\left(\frac{\alpha}{\beta^{\alpha}}\right)^n\left(\prod_{i=1}^n x_i\right)^{\alpha-1}\mathbf1_{0<x_{(n)}<\beta}\quad,\,\alpha>0
\end{align}
, where $x_{(n)}=\max_{1\le i\le n} x_i$ is the largest order statistic.
The log-likelihood is therefore $$\ell(\theta)=n(\ln\alpha-\alpha\ln\beta)+(\alpha-1)\sum_{i=1}^n \ln x_i+\ln(\mathbf1_{0<x_{(n)}<\beta})$$
Observe that, given the sample, the parameter space has become $$\Theta=\{\theta:\alpha>0,\beta>x_{(n)}\}$$
Keeping $\alpha$ fixed, justify that $\ell(\theta)$ is maximized for the minimum value of $\beta$ subject to the constraint $\beta\in(x_{(n)},\infty)$. In other words, as you say, $\ell(\theta)$ is a decreasing function of $\beta$ for fixed $\alpha$. Hence conclude that MLE of $\beta$ as you guessed is $$\hat\beta=X_{(n)}$$
It is now valid to derive the MLE of $\alpha$ by differentiating the log-likelihood as you have done. This MLE is likely to depend on the MLE of $\beta$.
Indeed,
\begin{align}
\frac{\partial\ell}{\partial\alpha}&=\frac{n}{\alpha}-n\ln\beta+\sum_{i=1}^n \ln x_i
\end{align}
, which vanishes if and only if $$\alpha=\frac{n}{n\ln\beta-\sum_{i=1}^n\ln x_i}$$
(Since $x_i<\beta\implies \ln x_i<\ln\beta\implies \sum \ln x_i<n\ln\beta$, the above expression is defined.)
So our possible candidate for MLE of $\alpha$ is $$\hat\alpha=\frac{n}{n\ln\hat\beta-\sum_{i=1}^n\ln x_i}$$
At this point, you can finish your argument saying that MLE of $\theta=(\alpha,\beta)$ is $\hat\theta=(\hat\alpha,\hat\beta)$.
But since this is a maximization problem in two variables $(\alpha,\beta)$, you could perhaps verify that $$\ell(\hat\theta)\ge \ell (\theta)$$ holds for every $\theta$. This would be a bit more rigorous I think. | Estimating the MLE where the parameter is also the constraint
Looks like both $\alpha$ and $\beta$ are unknown here. So our parameter is $\theta=(\alpha,\beta)$.
The population pdf is $$f_{\theta}(x)=\frac{\alpha}{\beta^{\alpha}}x^{\alpha-1}\mathbf1_{0<x<\beta}\ |
55,755 | Distribution of *conditional* frequencies when frequencies follow a Dirichlet distribution | Let $\gamma_{ij} \sim \texttt{gamma}(A a_{ij})$ independently. Recall that construction of the Dirichlet distribution as a normalization of gamma random variables. Then the Dirichlet distribution we start with is equal in distribution to
$$
\left(\frac{\gamma_{00}}{\sum \gamma_{ij}}, \ldots, \frac{\gamma_{11}}{\sum \gamma_{ij}}\right) \sim \texttt{dirichlet}(A a_{00}, \ldots, Aa_{11}).
$$
But now we have, for example,
$$
f_{1 | 0} = \frac{f_{10}}{f_{00} + f_{10}} \stackrel{d}{=} \frac{\gamma_{10}}{\gamma_{00} + \gamma_{10}}.
$$
And we can get a similar expression for $f_{0|0} = \gamma_{00}/(\gamma_{00} + \gamma_{10})$, whence it follows that
$$
(f_{0|0}, f_{1|0}) \sim \texttt{dirichlet}(A a_{00}, A a_{10}).
$$
Easy.
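If you want to see the gamma construction in action, here is a small simulation sketch (the $A a_{ij}$ values below are arbitrary illustrations, not taken from the question): it builds the Dirichlet vector by normalising gammas, forms $f_{1|0}$, and compares a few of its empirical quantiles with those of the claimed two-component Dirichlet (i.e., Beta) marginal.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
Aa = {"00": 3.0, "01": 2.0, "10": 1.5, "11": 4.0}   # A * a_ij, arbitrary values
m = 100_000

# gamma construction of the Dirichlet
g = {k: rng.gamma(shape=v, size=m) for k, v in Aa.items()}
total = sum(g.values())
f = {k: g[k] / total for k in g}                    # (f00, f01, f10, f11) ~ dirichlet(A a)

f_1_given_0 = f["10"] / (f["00"] + f["10"])         # equals gamma10 / (gamma00 + gamma10)

qs = [0.1, 0.5, 0.9]
print(np.quantile(f_1_given_0, qs))                 # empirical quantiles
print(stats.beta(Aa["10"], Aa["00"]).ppf(qs))       # Beta(A a10, A a00) quantiles: they agree
```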
To answer your question about how the conditional model varies from the joint model: if you specify Dirichlet priors on the conditionals of the first component given the second, and a very particular Dirichlet prior on the marginals of the second, then you get a Dirichlet prior for the joint. So you get a little bit more flexibility by using Dirichlet priors on the marginals and conditionals separately; the most general sort of prior you get this way is a special case of what has been called a hyper-Dirichlet, or Dirichlet-tree prior. The hyper-Dirichlet is also conjugate to multinomial sampling; in addition to the Dirichlet, it also contains all stick-breaking constructions you can get from the Beta distribution, and many other possibilities as well. | Distribution of *conditional* frequencies when frequencies follow a Dirichlet distribution | Let $\gamma_{ij} \sim \texttt{gamma}(A a_{ij})$ independently. Recall that construction of the Dirichlet distribution as a normalization of gamma random variables. Then the Dirichlet distribution we s | Distribution of *conditional* frequencies when frequencies follow a Dirichlet distribution
Let $\gamma_{ij} \sim \texttt{gamma}(A a_{ij})$ independently. Recall that construction of the Dirichlet distribution as a normalization of gamma random variables. Then the Dirichlet distribution we start with is equal in distribution to
$$
\left(\frac{\gamma_{00}}{\sum \gamma_{ij}}, \ldots, \frac{\gamma_{11}}{\sum \gamma_{ij}}\right) \sim \texttt{dirichlet}(A a_{00}, \ldots, Aa_{11}).
$$
But now we have, for example,
$$
f_{1 | 0} = \frac{f_{10}}{f_{00} + f_{10}} \stackrel{d}{=} \frac{\gamma_{10}}{\gamma_{00} + \gamma_{10}}.
$$
And we can get a similar expression for $f_{0|0} = \gamma_{00}/(\gamma_{00} + \gamma_{10})$, whence it follows that
$$
(f_{0|0}, f_{1|0}) \sim \texttt{dirichlet}(A a_{00}, A a_{10}).
$$
Easy.
To answer your question about how the conditional model varies from the joint model: if you specify Dirichlet priors on the conditionals of the first component given the second, and a very particular Dirichlet prior on the marginals of the second, then you get a Dirichlet prior for the joint. So you get a little bit more flexibility by using Dirichlet priors on the marginals and conditionals separately; the most general sort of prior you get this way is a special case of what has been called a hyper-Dirichlet, or Dirichlet-tree prior. The hyper-Dirichlet is also conjugate to multinomial sampling; in addition to the Dirichlet, it also contains all stick-breaking constructions you can get from the Beta distribution, and many other possibilities as well. | Distribution of *conditional* frequencies when frequencies follow a Dirichlet distribution
Let $\gamma_{ij} \sim \texttt{gamma}(A a_{ij})$ independently. Recall that construction of the Dirichlet distribution as a normalization of gamma random variables. Then the Dirichlet distribution we s |
55,756 | Distribution of *conditional* frequencies when frequencies follow a Dirichlet distribution | Edit:
(I decided not to delete this answer, as it contains a proof of the distributive property of the Dirichlet Distribution. I have however now managed to answer the original post, which I put in a separate answer)
I generally think about these problems using the fundamental theorem of calculus. You make a new variable (let's call it a), write down an integral for the probability that $a<A$, $P(a<A)$, differentiate wrt A (this will usually use the fundamental theorem of calculus, or in more complex cases, Leibniz rule for differentiating under the integral sign, a generalisation of the former), and the result gives you the pdf for a evaluated at A, or $p(A)$. Let's see how this works, to derive an expression for $a=q_{1}+q_{2}$, or what you refer to as the associative property of the Dirichlet Distribution.
$P(a<A)=\int_{0}^{A}dq_{1}\int_{0}^{A-q_{1}}dq_{2}\int_{0}^{1-q_{1}-q_{2}}dq_{3}D(q_{1}, q_{2}, q_{3}, 1 - q_{1} - q_{2} - q_{3};\alpha)$
Writing this as $\int_{0}^{A}f(A,q_{1})dq_{1}$ and differentiating, one obtains:
$f(A,A) + \int_{0}^{A}\frac{\partial}{\partial A}f(A,q_{1})dq_{1}$
Because
$f(A,q_{1})=\int_{0}^{A-q_{1}}dq_{2}\int_{0}^{1-q_{1}-q_{2}}dq_{3}D(q_{1},q_{2},q_{3}, 1-q_{1}-q_{2}-q_{3})$
we can see that $f(A,A)=0$, so we only have to worry about
$\int_{0}^{A}dq_{1}\frac{\partial}{\partial A}\int_{0}^{A-q_{1}}dq_{2}\int_{0}^{1-q_{1}-q_{2}}dq_{3}D(q_{1},q_{2},q_{3}, 1-q_{1}-q_{2}-q_{3})$
which is a relatively simple case of the fundamental theorem of calculus, basically replace all instances of $q_{2}$ with $(A-q_{1})$
$\int_{0}^{A}dq_{1}\int_{0}^{1-q_{1}-(A-q_{1})}dq_{3}D(q_{1}, A-q_{1}, q_{3}, 1 - q_{1} - (A-q_{1})-q_{3};\alpha)$
which, explicitly now subbing in the functional form of the Dirichlet Distribution, is given by
$\frac{1}{B(\alpha)}\int_{0}^{A}dq_{1}q_{1}^{\alpha_{1}-1}(A-q_{1})^{\alpha_{2}-1}\int_{0}^{1-A}dq_{3}q_{3}^{\alpha_{3}-1}(1-A -q_{3})^{\alpha_{4}-1}$
This is now a product of integrals rather than a double integral. The first is solved with the substitution $v=\frac{q_{1}}{A}$ and the second with $u=\frac{q_{3}}{1-A}$. The first gives $A^{\alpha_{1}+\alpha_{2}-1}B(\alpha_{1}, \alpha_{2})$ and the second gives $(1-A)^{\alpha_{3}+\alpha_{4}-1}B(\alpha_{3},\alpha_{4})$
Putting these together with the original normalisation constant, you get the desired result.
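That aggregation result is also easy to confirm by simulation; here is a sketch (arbitrary $\alpha$ values, with a Kolmogorov–Smirnov test used purely as an informal check):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha = np.array([1.2, 2.0, 0.7, 3.5])      # (alpha_1, ..., alpha_4), arbitrary
q = rng.dirichlet(alpha, size=200_000)

s = q[:, 0] + q[:, 1]                        # q1 + q2
target = stats.beta(alpha[0] + alpha[1], alpha[2] + alpha[3])
print(stats.kstest(s, target.cdf).pvalue)    # not small: consistent with the derived result
```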
However for $a=\frac{q_{1}}{q_{1}+q_{2}}$, I've hit a stumbling block which might turn out to be very hard to resolve and mean this doesn't have a closed form solution, or perhaps just that this method doesn't work. The sticking point, is that in order to define the region in 2-d space such that $0< \frac{q_{1}}{q_{1}+q_{2}}<1$, is actually quite hard. For example, if $q_{1}=1$ and $A=\frac{1}{4}$, no (permitted) $q_{2}$ can satisfy $\frac{1}{1+q_{2}}<\frac{1}{4}$.
In fact, if $A>\frac{1}{2}$, this is always possible but if $A<\frac{1}{2}$, then $q_{1}<\frac{A}{1-A}$ is needed. But the condition $q_{1}<\frac{A}{1-A}$ isn't enough, because when $A>\frac{1}{2}$, this constraint is too weak, $q_{1}$ needs to be smaller than a number which is greater than 1, so actually the constraint is $q_{1}<min(1, \frac{A}{1-A})$, and this cannot be differentiated, so I don't see how to differentiate under the integral sign. | Distribution of *conditional* frequencies when frequencies follow a Dirichlet distribution | Edit:
(I decided not to delete this answer, as it contains a proof of the distributive property of the Dirichlet Distribution. I have however now managed to answer the original post, which I put in a | Distribution of *conditional* frequencies when frequencies follow a Dirichlet distribution
Edit:
(I decided not to delete this answer, as it contains a proof of the distributive property of the Dirichlet Distribution. I have however now managed to answer the original post, which I put in a separate answer)
I generally think about these problems using the fundamental theorem of calculus. You make a new variable (let's call it a), write down an integral for the probability that $a<A$, $P(a<A)$, differentiate wrt A (this will usually use the fundamental theorem of calculus, or in more complex cases, Leibniz rule for differentiating under the integral sign, a generalisation of the former), and the result gives you the pdf for a evaluated at A, or $p(A)$. Let's see how this works, to derive an expression for $a=q_{1}+q_{2}$, or what you refer to as the associative property of the Dirichlet Distribution.
$P(a<A)=\int_{0}^{A}dq_{1}\int_{0}^{A-q_{1}}dq_{2}\int_{0}^{1-q_{1}-q_{2}}dq_{3}D(q_{1}, q_{2}, q_{3}, 1 - q_{1} - q_{2} - q_{3};\alpha)$
Writing this as $\int_{0}^{A}f(A,q_{1})dq_{1}$ and differentiating, one obtains:
$f(A,A) + \int_{0}^{A}\frac{\partial}{\partial A}f(A,q_{1})dq_{1}$
Because
$f(A,q_{1})=\int_{0}^{A-q_{1}}dq_{2}\int_{0}^{1-q_{1}-q_{2}}dq_{3}D(q_{1},q_{2},q_{3}, 1-q_{1}-q_{2}-q_{3})$
we can see that $f(A,A)=0$, so we only have to worry about
$\int_{0}^{A}dq_{1}\frac{\partial}{\partial A}\int_{0}^{A-q_{1}}dq_{2}\int_{0}^{1-q_{1}-q_{2}}dq_{3}D(q_{1},q_{2},q_{3}, 1-q_{1}-q_{2}-q_{3})$
which is a relatively simple case of the fundamental theorem of calculus, basically replace all instances of $q_{2}$ with $(A-q_{1})$
$\int_{0}^{A}dq_{1}\int_{0}^{1-q_{1}-(A-q_{1})}dq_{3}D(q_{1}, A-q_{1}, q_{3}, 1 - q_{1} - (A-q_{1})-q_{3};\alpha)$
which, explicitly now subbing in the functional form of the Dirichlet Distribution, is given by
$\frac{1}{B(\alpha)}\int_{0}^{A}dq_{1}q_{1}^{\alpha_{1}-1}(A-q_{1})^{\alpha_{2}-1}\int_{0}^{1-A}dq_{3}q_{3}^{\alpha_{3}-1}(1-A -q_{3})^{\alpha_{4}-1}$
This is now a product of integrals rather than a double integral. The first is solved with the substitution $v=\frac{q_{1}}{A}$ and the second with $u=\frac{q_{3}}{1-A}$. The first gives $A^{\alpha_{1}+\alpha_{2}-1}B(\alpha_{1}, \alpha_{2})$ and the second gives $(1-A)^{\alpha_{3}+\alpha_{4}-1}B(\alpha_{3},\alpha_{4})$
Putting these together with the original normalisation constant, you get the desired result.
However for $a=\frac{q_{1}}{q_{1}+q_{2}}$, I've hit a stumbling block which might turn out to be very hard to resolve and mean this doesn't have a closed form solution, or perhaps just that this method doesn't work. The sticking point, is that in order to define the region in 2-d space such that $0< \frac{q_{1}}{q_{1}+q_{2}}<1$, is actually quite hard. For example, if $q_{1}=1$ and $A=\frac{1}{4}$, no (permitted) $q_{2}$ can satisfy $\frac{1}{1+q_{2}}<\frac{1}{4}$.
In fact, if $A>\frac{1}{2}$, this is always possible but if $A<\frac{1}{2}$, then $q_{1}<\frac{A}{1-A}$ is needed. But the condition $q_{1}<\frac{A}{1-A}$ isn't enough, because when $A>\frac{1}{2}$, this constraint is too weak, $q_{1}$ needs to be smaller than a number which is greater than 1, so actually the constraint is $q_{1}<min(1, \frac{A}{1-A})$, and this cannot be differentiated, so I don't see how to differentiate under the integral sign. | Distribution of *conditional* frequencies when frequencies follow a Dirichlet distribution
Edit:
(I decided not to delete this answer, as it contains a proof of the distributive property of the Dirichlet Distribution. I have however now managed to answer the original post, which I put in a |
55,757 | Distribution of *conditional* frequencies when frequencies follow a Dirichlet distribution | Right, I've finally solved this, and decided to add another answer rather than edit my original one as I've edited it a lot of times and it's getting messy.
From the joint distribution $P(\underline{q})=\frac{q_{1}^{\alpha_{1}-1}q_{2}^{\alpha_{2}-1}q_{3}^{\alpha_{3}-1}q_{4}^{\alpha_{4}-1}}{B(\underline{\alpha})}$
defined for $0 \leq q_{i} \leq 1 \forall i$ and $q_{1}+q_{2}+q_{3}+q_{4}=1$
and we define a new variable $a=\frac{q_{1}}{q_{1}+q_{2}}$ and wish to derive its distribution. We do this by finding an expression for $p(a<A)$ and differentiating w.r.t A. This will give us the pdf for a evaluated at A. Constructing this expression for the joint distribution is the hardest part (and the differentiation process is a little messy and requires care).
In particular, to construct $p(a<A)$, we really have to think about all of the regions in $q$-space for which $a<A$, as we have to integrate over all of this space to find $p(a<A)$. This was the part I was struggling with in my previous answers. We can solve this two ways, they equate to the same:
First
$0 \leq \frac{q_{1}}{q_{1}+q_{2}} \leq A$
We rearrange in terms of $q_{1}$ and find that $q_{1}<q_{2}\left(\frac{A}{1-A}\right)$
We also know that $q_{1}+q_{2} \leq 1$ and thus $q_{1} \leq 1 -q_{2}$. As both of these need to be true, we can write $q_{1} \leq min(1-q_{2}, q_{2}\left(\frac{A}{1-A}\right))$
Looking at this, we see that in general, when $q_{2}$ is large, $1-q_{2}$ is going to be the lesser of the two terms and vice versa when $q_{2}$ is small. The crossover occurs when $1-q_{2}=q_{2}\left(\frac{A}{1-A}\right)$ or $q_{2}=1-A$. Because $0\leq A \leq 1$, we know that this cross-over will occur for $0\leq q_{2} \leq 1$, i.e. in the allowed range, so will be important.
For $q_{2} < 1-A$, $q_{1}<q_{2}\left(\frac{A}{1-A}\right)$ and
for $q_{2}>1-A$, $q_{1}<1-q_{2}$
Thus our $(q_{1},q_{2})$ integration limits have to look like:
$\int_{0}^{1-A}dq_{2}\int _{0}^{q_{2}\left(\frac{A}{1-A}\right)}dq_{1} + \int_{1-A}^{1}dq_{2}\int _{0}^{1-q_{2}}dq_{1}$
Second
(note, these have to end up being equivalent)
We start from the same place, namely
$0 \leq \frac{q_{1}}{q_{1}+q_{2}} \leq A$
but re-arrange in terms of $q_{2}$ such that $q_{2}>\left(\frac{1-A}{A}\right)q_{1}$. Also $q_{1}+q_{2}<1$ and thus $q_{2}<1-q_{1}$, or, put together
$\left(\frac{1-A}{A}\right)q_{1}< q_{2} < 1-q_{1}$
Looking at this, we see that if $q_{1}$ becomes large enough, the upper limit for $q_{2}$ will be smaller than the lower limit, so let's check when that crossover occurs.
$1-q_{1}=\left(\frac{1-A}{A}\right)q_{1}$, which happens at $q_{1}=A$, so yes, it happens in the region we care about. Thus as well as having the above constraint on $q_{2}$, we know $q_{1}<A$. So we can write the integral limits as
$\int_{0}^{A}dq_{1}\int_{\left(\frac{1-A}{A}\right)q_{1}}^{1-q_{1}}dq_{2}$
It's not obvious to me, even when having these two forms in front of me, why they are the same. The latter one is easier to work with however, so I'm going to use the latter.
We then integrate over all $q_{3}$ values for which $q_{1}+q_{2}+q_{3}<1$ and then finally replace $q_{4}$ with $1-q_{1}-q_{2}-q_{3}$ as it's uniquely determined. Thus the term we need to differentiate is given by:
$\int_{0}^{A}dq_{1}\int_{\left(\frac{1-A}{A}\right)q_{1}}^{1-q_{1}}dq_{2}\int_{0}^{1-q_{1}-q_{2}}\frac{dq_{3}}{B(\underline{\alpha})}q_{1}^{\alpha_{1}-1}q_{2}^{\alpha_{2}-1}q_{3}^{\alpha_{3}-1}(1-q_{1}-q_{2}-q_{3})^{\alpha_{4}-1}$
We first write this as $\int_{0}^{A}f(q_{1},A)dq_{1}$, so when we differentiate w.r.t A, we get (using Leibniz rule for differentiating under the integral sign)
$\int_{0}^{A}\frac{d}{dA}f(A,q_{1})dq_{1} + f(A,A)$
because we can write
$f(A,q_{1})=\int_{\left(\frac{1-A}{A}\right)q_{1}}^{1-q_{1}}g(q_{2})dq_{2}$
it turns out that $f(A,A)$ is given by
$\int_{1-A}^{1-A}g(q_{2})dq_{2}=0$
so we get a nice simplification and only have to calculate
$\int_{0}^{A}\frac{d}{dA}f(A,q_{1})dq_{1}$
given by (essentially replace all $q_{2}$ with $q_{1}\left(\frac{1-A}{A}\right)$ and then also take the derivative of $q_{1}\left( \frac{1-A}{A}\right)$ wrt A)
$\int_{0}^{A}q_{1}\cdot -1 \cdot \frac{d}{dA}\left(\frac{1-A}{A}\right)\int_{0}^{1-q_{1}-q_{1}\left(\frac{1-A}{A}\right)}\frac{dq_{3}}{B(\underline{\alpha})}q_{1}^{\alpha_{1}-1}\left(q_{1}\left(\frac{1-A}{A}\right)\right)^{\alpha_{2}-1}q_{3}^{\alpha_{3}-1}(1-q_{1}-q_{1}\left(\frac{1-A}{A}\right)-q_{3})^{\alpha_{4}-1}$
which tidies up to
$\left(\frac{1-A}{A}\right)^{\alpha_{2}-1}\frac{1}{A^{2}}\int_{0}^{A}dq_{1}q_{1}^{\alpha_{1}+\alpha_{2}-1}\int_{0}^{1-\frac{q_{1}}{A}}\frac{dq_{3}}{B(\underline{\alpha})}q_{3}^{\alpha_{3}-1}(1-\frac{q_{1}}{A}-q_{3})^{\alpha_{4}-1}$
if you're still with me, we're now basically there. We do the $q_{3}$ integral with the substitution $u=\frac{q_{3}}{1-\frac{q_{1}}{A}}$, to get $(1-\frac{q_{1}}{A})^{\alpha_{3}+\alpha_{4}-1}B(\alpha_{3}, \alpha_{4})$
That leaves us needing to compute the integral $\int_{0}^{A}dq_{1}q_{1}^{\alpha_{1}+\alpha_{2}-1}(1-\frac{q_{1}}{A})^{\alpha_{3}+\alpha_{4}-1}$, which is done using the substitution $u=\frac{q_{1}}{A}$ to obtain $A^{\alpha_{1}+\alpha_{2}}B(\alpha_{1}+\alpha_{2}, \alpha_{3}+\alpha_{4})$
Putting this together, we have
$(1-A)^{\alpha_{2}-1}A^{\alpha_{1}-1}\frac{B(\alpha_{1}+\alpha_{2}, \alpha_{3}+\alpha_{4})B(\alpha_{3}, \alpha_{4})}{B(\underline{\alpha})}$
and I'll leave it to you to verify that the beta functions do indeed cancel to give $\frac{1}{B(\alpha_{1},\alpha_{2})}$, and so the distribution we're left with can be recognised as a normalised $\text{Beta}(\alpha_{1},\alpha_{2})$ density.
And thus, somewhat remarkably, the relative prevalence of the first event being 0 or 1 does not have any bearing on how likely the second event is, given the first event. In Dirichlet inference, your $\alpha$s will usually be counts of events (combined with prior constants). If you want to know the relative probabilities of all four outcomes, you can use their respective counts and do Dirichlet inference, and if you want to know the conditional probabilities, you can throw away the counts relating to first events that didn't go the right way and then do binomial inference. The two are entirely compatible with one another.
Now that I've done the analytics, I'd be interested/relieved to see your numerics give the same. | Distribution of *conditional* frequencies when frequencies follow a Dirichlet distribution | Right, I've finally solved this, and decided to add another answer rather than edit my original one as I've edited it a lot of times and it's getting messy.
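For what it's worth, a numerical check of the final result can look like the following sketch (arbitrary $\alpha$ values; it simply draws from the four-component Dirichlet and compares $a = q_1/(q_1+q_2)$ with a $\text{Beta}(\alpha_1, \alpha_2)$ distribution):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
alpha = np.array([2.5, 1.0, 3.0, 0.8])           # (alpha_1, ..., alpha_4), arbitrary
q = rng.dirichlet(alpha, size=200_000)

a = q[:, 0] / (q[:, 0] + q[:, 1])                # the conditional frequency q1/(q1+q2)

print(a.mean(), alpha[0] / (alpha[0] + alpha[1]))                  # empirical vs Beta mean
print(stats.kstest(a, stats.beta(alpha[0], alpha[1]).cdf).pvalue)  # not small
```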
From the joint distribution $P(\underline{ | Distribution of *conditional* frequencies when frequencies follow a Dirichlet distribution
Right, I've finally solved this, and decided to add another answer rather than edit my original one as I've edited it a lot of times and it's getting messy.
From the joint distribution $P(\underline{q})=\frac{q_{1}^{\alpha_{1}-1}q_{2}^{\alpha_{2}-1}q_{3}^{\alpha_{3}-1}q_{4}^{\alpha_{4}-1}}{B(\underline{\alpha})}$
defined for $0 \leq q_{i} \leq 1 \forall i$ and $q_{1}+q_{2}+q_{3}+q_{4}=1$
and we define a new variable $a=\frac{q_{1}}{q_{1}+q_{2}}$ and wish to derive its distribution. We do this by finding an expression for $p(a<A)$ and differentiating w.r.t A. This will give us the pdf for a evaluated at A. Constructing this expression for the joint distribution is the hardest part (and the differentiation process is a little messy and requires care).
In particular, to construct $p(a<A)$, we really have to think about all of the regions in $q$-space for which $a<A$, as we have to integrate over all of this space to find $p(a<A)$. This was the part I was struggling with in my previous answers. We can solve this two ways, they equate to the same:
First
$0 \leq \frac{q_{1}}{q_{1}+q_{2}} \leq A$
We rearrange in terms of $q_{1}$ and find that $q_{1}<q_{2}\left(\frac{A}{1-A}\right)$
We also know that $q_{1}+q_{2} \leq 1$ and thus $q_{1} \leq 1 -q_{2}$. As both of these need to be true, we can write $q_{1} \leq min(1-q_{2}, q_{2}\left(\frac{A}{1-A}\right))$
Looking at this, we see that in general, when $q_{2}$ is large, $1-q_{2}$ is going to be the lesser of the two terms and vice versa when $q_{2}$ is small. The crossover occurs when $1-q_{2}=q_{2}\left(\frac{A}{1-A}\right)$ or $q_{2}=1-A$. Because $0\leq A \leq 1$, we know that this cross-over will occur for $0\leq q_{2} \leq 1$, i.e. in the allowed range, so will be important.
For $q_{2} < 1-A$, $q_{1}<q_{2}\left(\frac{A}{1-A}\right)$ and
for $q_{2}>1-A$, $q_{1}<1-q_{2}$
Thus our $(q_{1},q_{2})$ integration limits have to look like:
$\int_{0}^{1-A}dq_{2}\int _{0}^{q_{2}\left(\frac{A}{1-A}\right)}dq_{1} + \int_{1-A}^{1}dq_{2}\int _{0}^{1-q_{2}}dq_{1}$
Second
(note, these have to end up being equivalent)
We start from the same place, namely
$0 \leq \frac{q_{1}}{q_{1}+q_{2}} \leq A$
but re-arrange in terms of $q_{2}$ such that $q_{2}>\left(\frac{1-A}{A}\right)q_{1}$. Also $q_{1}+q_{2}<1$ and thus $q_{2}<1-q_{1}$, or, put together
$\left(\frac{1-A}{A}\right)q_{1}< q_{2} < 1-q_{1}$
Looking at this, we see that if $q_{1}$ becomes large enough, the upper limit for $q_{2}$ will be smaller than the lower limit, so let's check when that crossover occurs.
$1-q_{1}=\left(\frac{1-A}{A}\right)q_{1}$, which happens at $q_{1}=A$, so yes, it happens in the region we care about. Thus as well as having the above constraint on $q_{2}$, we know $q_{1}<A$. So we can write the integral limits as
$\int_{0}^{A}dq_{1}\int_{\left(\frac{1-A}{A}\right)q_{1}}^{1-q_{1}}dq_{2}$
It's not obvious to me, even when having these two forms in front of me, why they are the same. The latter one is easier to work with however, so I'm going to use the latter.
We then integrate over all $q_{3}$ values for which $q_{1}+q_{2}+q_{3}<1$ and then finally replace $q_{4}$ with $1-q_{1}-q_{2}-q_{3}$ as it's uniquely determined. Thus the term we need to differentiate is given by:
$\int_{0}^{A}dq_{1}\int_{\left(\frac{1-A}{A}\right)q_{1}}^{1-q_{1}}dq_{2}\int_{0}^{1-q_{1}-q_{2}}\frac{dq_{3}}{B(\underline{\alpha})}q_{1}^{\alpha_{1}-1}q_{2}^{\alpha_{2}-1}q_{3}^{\alpha_{3}-1}(1-q_{1}-q_{2}-q_{3})^{\alpha_{4}-1}$
We first write this as $\int_{0}^{A}f(q_{1},A)dq_{1}$, so when we differentiate w.r.t A, we get (using Leibniz rule for differentiating under the integral sign)
$\int_{0}^{A}\frac{d}{dA}f(A,q_{1})dq_{1} + f(A,A)$
because we can write
$f(A,q_{1})=\int_{\left(\frac{1-A}{A}\right)q_{1}}^{1-q_{1}}g(q_{2})dq_{2}$
it turns out that $f(A,A)$ is given by
$\int_{1-A}^{1-A}g(q_{2})dq_{2}=0$
so we get a nice simplification and only have to calculate
$\int_{0}^{A}\frac{d}{dA}f(A,q_{1})dq_{1}$
given by (essentially replace all $q_{2}$ with $q_{1}\left(\frac{1-A}{A}\right)$ and then also take the derivative of $q_{1}\left( \frac{1-A}{A}\right)$ wrt A)
$\int_{0}^{A}q_{1}\cdot -1 \cdot \frac{d}{dA}\left(\frac{1-A}{A}\right)\int_{0}^{1-q_{1}-q_{1}\left(\frac{1-A}{A}\right)}\frac{dq_{3}}{B(\underline{\alpha})}q_{1}^{\alpha_{1}-1}\left(q_{1}\left(\frac{1-A}{A}\right)\right)^{\alpha_{2}-1}q_{3}^{\alpha_{3}-1}(1-q_{1}-q_{1}\left(\frac{1-A}{A}\right)-q_{3})^{\alpha_{4}-1}$
which tidies up to
$\left(\frac{1-A}{A}\right)^{\alpha_{2}-1}\frac{1}{A^{2}}\int_{0}^{A}dq_{1}q_{1}^{\alpha_{1}+\alpha_{2}-1}\int_{0}^{1-\frac{q_{1}}{A}}\frac{dq_{3}}{B(\underline{\alpha})}q_{3}^{\alpha_{3}-1}(1-\frac{q_{1}}{A}-q_{3})^{\alpha_{4}-1}$
if you're still with me, we're now basically there. We do the $q_{3}$ integral with the substitution $u=\frac{q_{3}}{1-\frac{q_{1}}{A}}$, to get $(1-\frac{q_{1}}{A})^{\alpha_{3}+\alpha_{4}-1}B(\alpha_{3}, \alpha_{4})$
That leaves us needing to compute the integral $\int_{0}^{A}dq_{1}q_{1}^{\alpha_{1}+\alpha_{2}-1}(1-\frac{q_{1}}{A})^{\alpha_{3}+\alpha_{4}-1}$, which is done using the substitution $u=\frac{q_{1}}{A}$ to obtain $A^{\alpha_{1}+\alpha_{2}}B(\alpha_{1}+\alpha_{2}, \alpha_{3}+\alpha_{4})$
Putting this together, we have
$(1-A)^{\alpha_{2}-1}A^{\alpha_{1}-1}\frac{B(\alpha_{1}+\alpha_{2}, \alpha_{3}+\alpha_{4})B(\alpha_{3}, \alpha_{4})}{B(\underline{\alpha})}$
and I'll leave it to you to verify that the beta functions do indeed cancel to give $\frac{1}{B(\alpha_{1},\alpha_{2})}$, and so the distribution we're left with can be recognised as a normalised $\text{Beta}(\alpha_{1},\alpha_{2})$ density.
And thus, somewhat remarkably, the relative prevalence of the first event being 0 or 1 does not have any bearing on how likely the second event is, given the first event. In Dirichlet inference, your $\alpha$s will usually be counts of events (combined with prior constants). If you want to know the relative probabilities of all four outcomes, you can use their respective counts and do Dirichlet inference, and if you want to know the conditional probabilities, you can throw away the counts relating to first events that didn't go the right way and then do binomial inference. The two are entirely compatible with one another.
Now that I've done the analytics, I'd be interested/relieved to see your numerics give the same. | Distribution of *conditional* frequencies when frequencies follow a Dirichlet distribution
Right, I've finally solved this, and decided to add another answer rather than edit my original one as I've edited it a lot of times and it's getting messy.
From the joint distribution $P(\underline{ |
55,758 | What are global sensitivity and local sensitivity in differential privacy? | I have read the reference to the definition in Zhu's book. The paper is 《Smooth Sensitivity and Sampling in Private Data Analysis》, which gives more distinct definitons of global sensitivity and local sensitivity[1].
Obviously, $GS_f=max_{x}LS_f(x)$
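To make the relationship concrete, here is a brute-force sketch (my own illustration, not from the paper; it assumes the usual definitions $LS_f(x) = \max_{y \sim x} |f(x) - f(y)|$ over datasets $y$ neighbouring $x$, and $GS_f$ as the worst case over all neighbouring pairs, with neighbours differing in a single record; a tiny discrete domain is used so everything can be enumerated):

```python
import itertools
import numpy as np

domain = [0.0, 0.5, 1.0]      # tiny record domain so everything can be enumerated
n = 3                         # datasets are length-3 tuples of records
f = np.median                 # the query whose sensitivity we examine

def neighbours(x):
    # all datasets differing from x in exactly one record
    for i, v in itertools.product(range(n), domain):
        if v != x[i]:
            yield x[:i] + (v,) + x[i + 1:]

def local_sensitivity(x):
    return max(abs(float(f(x)) - float(f(y))) for y in neighbours(x))

datasets = list(itertools.product(domain, repeat=n))

# global sensitivity: worst case over *all* neighbouring pairs of datasets
GS = max(abs(float(f(x)) - float(f(y))) for x in datasets for y in neighbours(x))

print(local_sensitivity((0.0, 0.0, 1.0)))                 # LS depends on the instance at hand
print(local_sensitivity((0.0, 0.5, 1.0)))                 # a different instance, different LS
print(GS, max(local_sensitivity(x) for x in datasets))    # equal: GS_f = max_x LS_f(x)
```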
[1] K. Nissim, S. Raskhodnikova, and A. Smith, “Smooth sensitivity and sampling in private data analysis,” Proceedings of the thirty-ninth annual ACM symposium on Theory of computing - STOC 07, 2007. | What are global sensitivity and local sensitivity in differential privacy? | I have read the reference to the definition in Zhu's book. The paper is 《Smooth Sensitivity and Sampling in Private Data Analysis》, which gives more distinct definitons of global sensitivity and local | What are global sensitivity and local sensitivity in differential privacy?
I have read the reference to the definition in Zhu's book. The paper is 《Smooth Sensitivity and Sampling in Private Data Analysis》, which gives more distinct definitions of global sensitivity and local sensitivity[1].
Obviously, $GS_f=max_{x}LS_f(x)$
[1] K. Nissim, S. Raskhodnikova, and A. Smith, “Smooth sensitivity and sampling in private data analysis,” Proceedings of the thirty-ninth annual ACM symposium on Theory of computing - STOC 07, 2007. | What are global sensitivity and local sensitivity in differential privacy?
I have read the reference to the definition in Zhu's book. The paper is 《Smooth Sensitivity and Sampling in Private Data Analysis》, which gives more distinct definitons of global sensitivity and local |
55,759 | What are global sensitivity and local sensitivity in differential privacy? | Global Sensitivity (GS) depends only on the function f. When trying to figure out the GS of function f, we examine all possible pairs of neighboring datasets in the domain of the function f.
Local Sensitivity (LS) depends on both the function f and the data set at hand D (known as an instance). When trying to figure out the LS of (f, D), we only examine a subset of all possible pairs of neighboring datasets, holding one of the dataset constant at D. | What are global sensitivity and local sensitivity in differential privacy? | Global Sensitivity (GS) depends only on the function f. When trying to figure out the GS of function f, we examine all possible pairs of neighboring datasets in the domain of the function f.
Local Sen | What are global sensitivity and local sensitivity in differential privacy?
Global Sensitivity (GS) depends only on the function f. When trying to figure out the GS of function f, we examine all possible pairs of neighboring datasets in the domain of the function f.
Local Sensitivity (LS) depends on both the function f and the data set at hand D (known as an instance). When trying to figure out the LS of (f, D), we only examine a subset of all possible pairs of neighboring datasets, holding one of the dataset constant at D. | What are global sensitivity and local sensitivity in differential privacy?
Global Sensitivity (GS) depends only on the function f. When trying to figure out the GS of function f, we examine all possible pairs of neighboring datasets in the domain of the function f.
Local Sen |
55,760 | Binary predictor with highly skewed distribution | In general, it's not an issue; you should keep it if it makes sense to be in the model, which presumably it does or it wouldn't be there to begin with.
Consider, for example, a model for weekly sales of chayote squash in the New Orleans area (see https://en.wikipedia.org/wiki/Chayote, down in the "Americas" section.) Such a model would likely need a dummy variable for Thanksgiving week in order to capture the very large increase in chayote sales at Thanksgiving (> 5x "regular" sales.) This dummy variable would take on the value "1" once every 52 weeks and "0" the rest of the time, so the "not Thanksgiving week" category represents roughly 98% of the data. If we take the dummy variable out, our Thanksgiving forecasts will be terrible and likely all the rest of our forecasts will be a lot worse, because they would be affected by the Thanksgiving data point in various ways (e.g., trends look much steeper if Thanksgiving is near the end of the modeling horizon, ...).
It's important, however, to note the following caveat. @Henry's comment in response to the OP is of course correct; if you only have one observation for one of the two categories, including the dummy variable will, in effect, simply remove that observation from the data set, and all your (other) parameter estimates would be the same as if you had just deleted that observation. | Binary predictor with highly skewed distribution | In general, it's not an issue; you should keep it if it makes sense to be in the model, which presumably it does or it wouldn't be there to begin with.
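That caveat is easy to verify directly; here is a minimal sketch with made-up data (nothing here is specific to the question): fit once with a dummy that equals 1 for a single observation, and once with that observation simply dropped — the remaining coefficients coincide.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(scale=0.5, size=n)

d = np.zeros(n)
d[0] = 1.0                                    # dummy equal to 1 for a single observation

X_full = np.column_stack([np.ones(n), x, d])
coef_full, *_ = np.linalg.lstsq(X_full, y, rcond=None)

X_drop = np.column_stack([np.ones(n - 1), x[1:]])
coef_drop, *_ = np.linalg.lstsq(X_drop, y[1:], rcond=None)

print(coef_full[:2])   # intercept and slope with the dummy included
print(coef_drop)       # identical (up to rounding) to the fit with observation 0 deleted
```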
Consider, for example, a model for weekly sale | Binary predictor with highly skewed distribution
In general, it's not an issue; you should keep it if it makes sense to be in the model, which presumably it does or it wouldn't be there to begin with.
Consider, for example, a model for weekly sales of chayote squash in the New Orleans area (see https://en.wikipedia.org/wiki/Chayote, down in the "Americas" section.) Such a model would likely need a dummy variable for Thanksgiving week in order to capture the very large increase in chayote sales at Thanksgiving (> 5x "regular" sales.) This dummy variable would take on the value "1" once every 52 weeks and "0" the rest of the time, so the "not Thanksgiving week" category represents roughly 98% of the data. If we take the dummy variable out, our Thanksgiving forecasts will be terrible and likely all the rest of our forecasts will be a lot worse, because they would be affected by the Thanksgiving data point in various ways (e.g., trends look much steeper if Thanksgiving is near the end of the modeling horizon, ...).
It's important, however, to note the following caveat. @Henry's comment in response to the OP is of course correct; if you only have one observation for one of the two categories, including the dummy variable will, in effect, simply remove that observation from the data set, and all your (other) parameter estimates would be the same as if you had just deleted that observation. | Binary predictor with highly skewed distribution
In general, it's not an issue; you should keep it if it makes sense to be in the model, which presumably it does or it wouldn't be there to begin with.
Consider, for example, a model for weekly sale |
55,761 | Practical limits to collinearity problems? | Why am I relatively unlikely to get, say, an x_1 coefficient of -105 and an x_2 coefficient of 110? Those add up to 5, but there is something pushing the results toward 2.5, 2.5.
Linear combinations of your two variables $x_1 = x + \epsilon_1$ and $x_2 = x+\epsilon_2$ can be described like:
$$ \frac{a+b}{2} (x + \epsilon_1) + \frac{a-b}{2}(x + \epsilon_2) = a x + \frac{1}{2} a (\epsilon_1 + \epsilon_2) + \frac{1}{2} b (\epsilon_1 - \epsilon_2)$$
The parameter a will be approximately equal to the parameter associated with the variable $x$. In your case that is $a = 5$.
The parameter b will be related to the variance and correlation of $y=(\epsilon_1 + \epsilon_2)$ and $z=(\epsilon_1 - \epsilon_2)$ by: $$\text{Var} \left( a y + b z \right) = a^2 \text{Var} (y) + b^2 \text{Var}(z) + 2ab \sqrt{\text{Var}(y)\text{Var}(z)} \rho_{y,z} $$ note that $y=(\epsilon_1 + \epsilon_2)$ and $z=(\epsilon_1 - \epsilon_2)$ are identically distributed variables and $\rho_{y,z}$ will be distributed around zero.
So mostly you will be close to $b=0$
So in the case of -105 and 110 you would get larger contributions from the error terms, which only get 'undone' when there is a strong correlation in the particular sample of the error terms.
Influence of $\sigma$
I can model as well the influence of $\sigma$ but I do not get the same pattern as you. Below you see that with larger variance, the sum of the parameters will be smaller than $5$ (to decrease the effect of the error terms) and also the difference will be smaller as it relates to the size of the sum of the parameters. But, I do not see why the parameters would go from 0 to 5 as in your last graph.
This is for a thousand repetitions of the data:
$$\begin{array}{rcl}
x_1 &=& x + \epsilon_1 \\
x_2 &=& x + \epsilon_2 \\
y &=& 5x + 4
\end{array}$$
where $x$ is a vector of size $n=100$ varying from 0 to 1, $\epsilon_1$ and $\epsilon_2$ are Gaussian random noise.
This is modeled as a linear model minimizing the sum of squared errors $\epsilon$:
$$y = a x_1 + b x_2 + \epsilon$$ | Practical limits to collinearity problems? | Why am I relatively unlikely to get, say, an x_1 coefficient of -105 and an x_2 coefficient of 110? Those add up to 5, but there is something pushing the results toward 2.5, 2.5.
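For completeness, here is a sketch of that simulation in code (my own paraphrase of the setup just described; only the noise scale $\sigma$ is varied):

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 100, 1000
x = np.linspace(0, 1, n)

for sigma in [0.001, 0.05, 0.5]:
    coefs = []
    for _ in range(reps):
        x1 = x + rng.normal(scale=sigma, size=n)
        x2 = x + rng.normal(scale=sigma, size=n)
        y = 5 * x + 4
        X = np.column_stack([np.ones(n), x1, x2])
        b, *_ = np.linalg.lstsq(X, y, rcond=None)
        coefs.append(b[1:])                          # (a, b)
    coefs = np.array(coefs)
    sums, diffs = coefs.sum(axis=1), coefs[:, 0] - coefs[:, 1]
    print(sigma, sums.mean(), diffs.std())
# small sigma: a + b stays very close to 5 while a - b is wildly variable;
# larger sigma: the sum is attenuated below 5 and the spread of a - b shrinks
```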
Linear combinations o | Practical limits to collinearity problems?
Why am I relatively unlikely to get, say, an x_1 coefficient of -105 and an x_2 coefficient of 110? Those add up to 5, but there is something pushing the results toward 2.5, 2.5.
Linear combinations of your two variables $x_1 = x + \epsilon_1$ and $x_2 = x+\epsilon_2$ can be described like:
$$ \frac{a+b}{2} (x + \epsilon_1) + \frac{a-b}{2}(x + \epsilon_2) = a x + \frac{1}{2} a (\epsilon_1 + \epsilon_2) + \frac{1}{2} b (\epsilon_1 - \epsilon_2)$$
The parameter a will be approximately equal to the parameter associated with the variable $x$. In your case that is $a = 5$.
The parameter b will be related to the variance and correlation of $y=(\epsilon_1 + \epsilon_2)$ and $z=(\epsilon_1 - \epsilon_2)$ by: $$\text{Var} \left( a y + b z \right) = a^2 \text{Var} (y) + b^2 \text{Var}(z) + 2ab \sqrt{\text{Var}(y)\text{Var}(z)} \rho_{y,z} $$ note that $y=(\epsilon_1 + \epsilon_2)$ and $z=(\epsilon_1 - \epsilon_2)$ are identically distributed variables and $\rho_{y,z}$ will be distributed around zero.
So mostly you will be close to $b=0$
So in the case of -105 and 110 you would get larger contributions from the error terms, which only get 'undone' when there is a strong correlation in the particular sample of the error terms.
Influence of $\sigma$
I can model as well the influence of $\sigma$ but I do not get the same pattern as you. Below you see that with larger variance, the sum of the parameters will be smaller than $5$ (to decrease the effect of the error terms) and also the difference will be smaller as it relates to the size of the sum of the parameters. But, I do not see why the parameters would go from 0 to 5 as in your last graph.
This is for a thousand repetitions of the data:
$$\begin{array}{rcl}
x_1 &=& x + \epsilon_1 \\
x_2 &=& x + \epsilon_2 \\
y &=& 5x + 4
\end{array}$$
where $x$ is a vector of size $n=100$ varying from 0 to 1, $\epsilon_1$ and $\epsilon_2$ are Gaussian random noise.
This is modeled as a linear model minimizing the sum of squared errors $\epsilon$:
$$y = a x_1 + b x_2 + \epsilon$$ | Practical limits to collinearity problems?
Why am I relatively unlikely to get, say, an x_1 coefficient of -105 and an x_2 coefficient of 110? Those add up to 5, but there is something pushing the results toward 2.5, 2.5.
Linear combinations o |
55,762 | Practical limits to collinearity problems? | In the way you've added noise, you could write $x' = x +\epsilon$ (where $\epsilon$ is a normally distributed variable representing noise). Furthermore, $y=5x+4+\eta$, where $\eta$ is another Gaussian noise term.
You are trying to fit a regression of the form $Ax + Bx' +C$, and you know that you're fitting it to a target variable which was generated by $y=5x+4+\eta$, thus :
$Ax + B(x + \epsilon) + C= 5x+4+\eta$
or
$(A+B)x + C + B\epsilon = 5x + 4 + \eta$
Hopefully, this sheds some light on what's happening (together with your numerics), even if it doesn't formally show it. If $\epsilon$ is so small that $B\epsilon$ is also very small, then pretty much any combination of (A,B) which satisfies $A+B=5$ will likely be a good fit, and which one is best is a question of noise. The larger $\epsilon$ is, the more $B\epsilon$ will in general make the LHS fluctuate relative to the RHS and the more B will go to zero (a trivial restatement of the fact that x is the truly explanatory variable and x' is only correlated with it, and as you shrink that correlation, the regression will find it easier to learn this).
It's less clear to me what happens when $\epsilon$ and $\eta$ are of similar sizes. In general there will be competing effects where you might be able to match the data better with non-zero B. What happens if you simulate a larger data set? Does the effect still hold? | Practical limits to collinearity problems? | In the way you've added noise, you could write $x' = x +\epsilon$ (where $\epsilon$ is a normally distributed variable representing noise). Furthermore, $y=5x+4+\eta$, where $\eta$ is another Gaussian | Practical limits to collinearity problems?
In the way you've added noise, you could write $x' = x +\epsilon$ (where $\epsilon$ is a normally distributed variable representing noise). Furthermore, $y=5x+4+\eta$, where $\eta$ is another Gaussian noise term.
You are trying to fit a regression of the form $Ax + Bx' +C$, and you know that you're fitting it to a target variable which was generated by $y=5x+4+\eta$, thus :
$Ax + B(x + \epsilon) + C= 5x+4+\eta$
or
$(A+B)x + C + B\epsilon = 5x + 4 + \eta$
Hopefully, this sheds some light on what's happening (together with your numerics), even if it doesn't formally show it. If $\epsilon$ is so small that $B\epsilon$ is also very small, then pretty much any combination of (A,B) which satisfies $A+B=5$ will likely be a good fit, and which one is best is a question of noise. The larger $\epsilon$ is, the more $B\epsilon$ will in general make the LHS fluctuate relative to the RHS and the more B will go to zero (a trivial restatement of the fact that x is the truly explanatory variable and x' is only correlated with it, and as you shrink that correlation, the regression will find it easier to learn this).
It's less clear to me what happens when $\epsilon$ and $\eta$ are of similar sizes. In general there will be competing effects where you might be able to match the data better with non-zero B. What happens if you simulate a larger data set? Does the effect still hold? | Practical limits to collinearity problems?
In the way you've added noise, you could write $x' = x +\epsilon$ (where $\epsilon$ is a normally distributed variable representing noise). Furthermore, $y=5x+4+\eta$, where $\eta$ is another Gaussian |
55,763 | Eckart-Young-Mirsky theorem: rank $≤k$ or rank $=k$ | I provided a proof of the Eckart-Young theorem in my answer to What norm of the reconstruction error is minimized by the low-rank approximation matrix obtained with PCA?. Let's take a look at the first steps of this proof:
We want to find matrix $A$ of rank $k$ that minimizes $\|X-A\|^2_F$. We can factorize $A=BW^\top$, where $W$ has $k$ orthonormal columns. Minimizing $\|X-BW^\top\|^2$ for fixed $W$ is a regression problem with solution $B=XW$. Plugging it in, we see that we now need to minimize $\|X-XWW^\top\|^2=\ldots$
This seems to use your formulation #2.
But what would change if we use formulation #1? Matrix $A$ of rank $\le k$ can still be factorized as $A=BW^\top$ where $W$ has $k$ orthonormal columns; if $\text{rank}(A)<k$ then some columns of $B$ will be zero. In any case, we immediately find that the optimal $B$ is given by $B=XW$.
So nothing at all changes in the proof after that. The best approximation of rank $\le k$ turns out to have rank $k$. | Eckart-Young-Mirsky theorem: rank $≤k$ or rank $=k$ | I provided a proof of the Eckart-Young theorem in my answer to What norm of the reconstruction error is minimized by the low-rank approximation matrix obtained with PCA?. Let's take a look at the firs | Eckart-Young-Mirsky theorem: rank $≤k$ or rank $=k$
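A quick numerical illustration of that conclusion (a sketch with a random matrix and an arbitrary $k$): the optimum over matrices of rank $\le k$, obtained from the truncated SVD, generically has rank exactly $k$.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 5))
k = 3

U, s, Vt = np.linalg.svd(X, full_matrices=False)
A = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]        # best approximation over rank <= k

print(np.linalg.matrix_rank(A))                   # k, because sigma_k > 0 almost surely
print(np.linalg.norm(X - A, "fro") ** 2)          # equals the sum of the discarded sigma_i^2
print((s[k:] ** 2).sum())
```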
I provided a proof of the Eckart-Young theorem in my answer to What norm of the reconstruction error is minimized by the low-rank approximation matrix obtained with PCA?. Let's take a look at the first steps of this proof:
We want to find matrix $A$ of rank $k$ that minimizes $\|X-A\|^2_F$. We can factorize $A=BW^\top$, where $W$ has $k$ orthonormal columns. Minimizing $\|X-BW^\top\|^2$ for fixed $W$ is a regression problem with solution $B=XW$. Plugging it in, we see that we now need to minimize $\|X-XWW^\top\|^2=\ldots$
This seems to use your formulation #2.
But what would change if we use formulation #1? Matrix $A$ of rank $\le k$ can still be factorized as $A=BW^\top$ where $W$ has $k$ orthonormal columns; if $\text{rank}(A)<k$ then some columns of $B$ will be zero. In any case, we immediately find that the optimal $B$ is given by $B=XW$.
So nothing at all changes in the proof after that. The best approximation of rank $\le k$ turns out to have rank $k$. | Eckart-Young-Mirsky theorem: rank $≤k$ or rank $=k$
I provided a proof of the Eckart-Young theorem in my answer to What norm of the reconstruction error is minimized by the low-rank approximation matrix obtained with PCA?. Let's take a look at the firs |
55,764 | Eckart-Young-Mirsky theorem: rank $≤k$ or rank $=k$ | The purpose of this answer is to show (a) the result is far more general and (b) when you look at it the right way, it is obvious: because the matrices of rank less than $k$ occupy a negligible portion of the space of matrices of ranks less than or equal to $k$ (their complement is dense), you can never do any worse when you optimize a continuous function over the higher-rank matrices alone.
I hope that pointing this out will suffice, but for completeness the rest of this post provides some details and explanation.
It is enough to consider the set $\mathcal X$ of $n\times d$ matrices as a topological space (determined by the Frobenius norm in this case) and the set of matrices of rank $k$ or less, $\mathcal{X}^k\subset \mathcal X,$ to be a subspace. All we need to know about the function
$$f:\mathcal X\to \mathbb{R}$$
given by
$$f(X) = ||X-A||^2$$
is that it is continuous--and I hope this is obvious.
The general principle is this:
When $\mathcal{Y}\subset \mathcal{X}$ is dense and $f:\mathcal{X}\to \mathbb R$ is continuous, then $\sup f(\mathcal Y) = \sup f(\mathcal X).$
I call this a "principle" rather than a "theorem" because (a) it follows directly from the definitions, yet (b) is of widespread applicability and helpfulness, despite its obviousness. Before demonstrating this principle, let's apply it in an obvious and trivial way.
Corollary: The supremum of a continuous function on a dense subset of a topological space is never less than its supremum on the complement of that subset.
Let's deal with (a). "Dense" means that every $x\in\mathcal X$ can be approached arbitrarily closely by elements of $\mathcal Y.$ "Continuous" means values $f(x)$ can be approached arbitrarily closely by $f(z)$ where $z$ is "close" to $x$ in $\mathcal X.$ Put the two definitions together and the principle follows.
Now let's apply the corollary to $\mathcal Y^k = \mathcal X^k \setminus \mathcal X^{k-1},$ which is the set of matrices of rank $k.$
Lemma: When $k \lt \min(n,d),$ $\mathcal Y^k$ is dense in $\mathcal X^k.$
I offer two demonstrations. Algebra teaches us that $\mathcal{X}^{k}$ is the intersection of zeros of polynomial functions: namely, determinants of the $k\times k$ minors of the matrices. It is easy to see these functions are algebraically independent and are $(n-k+1)(d-k+1)$ in number. Thus locally $\mathcal{X}^k$ is a manifold of dimension $nd - (n-k+1)(d-k+1).$ Since these dimensions strictly increase as $k$ ranges from $0$ to $\min(n,d),$ the complement of each $\mathcal{X}^{k-1}$ is dense within its containing $\mathcal{X}^{k}.$
The second demonstration views $X$ as representing a linear transformation from $\mathbb{R}^d$ to $\mathbb{R}^n.$ If it is not of full rank (as assumed in the Lemma), it has nontrivial kernel and nontrivial cokernel. That is, there is a nonzero vector $x\in\mathbb{R}^d$ for which $Xx=0$ and there is a nonzero vector $y\in\mathbb{R}^n$ independent of the set of $\{Xx\mid x\in\mathbb{R}^d\}$ (this is the "column space" of $X$).
For every real number $\lambda$ define
$$X(\lambda) = X + \lambda y x^\prime.$$
Geometrically, this is altering $X$ by taking one of the directions it collapses ($x$) and instead mapping it to a new direction $y$ in the image, because
$$X(\lambda)x = (X + \lambda y x^\prime)x = Xx + \lambda y x^\prime x = 0 + \lambda(x^\prime x) y$$
is a nonzero multiple of $y.$
The dimensions of the kernel and cokernel of $X(\lambda)$ are thereby both decreased, making its rank one greater than the rank of $X.$ By choosing a sequence of nonzero values of $\lambda$ converging to $0$ we obtain a sequence $X(\lambda)$ of higher-rank matrices converging (in the Frobenius norm) to the original matrix $X,$ QED. | Eckart-Young-Mirsky theorem: rank $≤k$ or rank $=k$ | The purpose of this answer is to show (a) the result is far more general and (b) when you look at it the right way, it is obvious: because the matrices of rank less than $k$ occupy a negligible portio | Eckart-Young-Mirsky theorem: rank $≤k$ or rank $=k$
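The rank-bumping construction in the second demonstration is easy to see numerically; here is a sketch (the specific low-rank matrix is arbitrary): perturbing a rank-deficient $X$ by $\lambda y x^\prime$ raises its rank while moving it arbitrarily little in Frobenius norm.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 2)) @ rng.normal(size=(2, 3))   # a rank-2 matrix in R^{4 x 3}

U, s, Vt = np.linalg.svd(X)       # full SVD
x = Vt[-1]                        # right singular vector with (numerically) zero singular value: in the kernel
y = U[:, -1]                      # left singular direction outside the column space

for lam in [1.0, 0.1, 0.001]:
    X_lam = X + lam * np.outer(y, x)
    print(lam,
          np.linalg.matrix_rank(X_lam),        # rank rises from 2 to 3
          np.linalg.norm(X_lam - X, "fro"))    # while the perturbation shrinks with lambda
```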
The purpose of this answer is to show (a) the result is far more general and (b) when you look at it the right way, it is obvious: because the matrices of rank less than $k$ occupy a negligible portion of the space of matrices of ranks less than or equal to $k$ (their complement is dense), you can never do any worse when you optimize a continuous function over the higher-rank matrices alone.
I hope that pointing this out will suffice, but for completeness the rest of this post provides some details and explanation.
It is enough to consider the set $\mathcal X$ of $n\times d$ matrices as a topological space (determined by the Frobenius norm in this case) and the set of matrices of rank $k$ or less, $\mathcal{X}^k\subset \mathcal X,$ to be a subspace. All we need to know about the function
$$f:\mathcal X\to \mathbb{R}$$
given by
$$f(X) = ||X-A||^2$$
is that it is continuous--and I hope this is obvious.
The general principle is this:
When $\mathcal{Y}\subset \mathcal{X}$ is dense and $f:\mathcal{X}\to \mathbb R$ is continuous, then $\sup f(\mathcal Y) = \sup f(\mathcal X).$
I call this a "principle" rather than a "theorem" because (a) it follows directly from the definitions, yet (b) is of widespread applicability and helpfulness, despite its obviousness. Before demonstrating this principle, let's apply it in an obvious and trivial way.
Corollary: The supremum of a continuous function on a dense subset of a topological space is never less than its supremum on the complement of that subset.
Let's deal with (a). "Dense" means that every $x\in\mathcal X$ can be approached arbitrarily closely by elements of $\mathcal Y.$ "Continuous" means values $f(x)$ can be approached arbitrarily closely by $f(z)$ where $z$ is "close" to $x$ in $\mathcal X.$ Put the two definitions together and the principle follows.
Now let's apply the corollary to $\mathcal Y^k = \mathcal X^k \setminus \mathcal X^{k-1},$ which is the set of matrices of rank $k.$
Lemma: When $k \lt \min(n,d),$ $\mathcal Y^k$ is dense in $\mathcal X^k.$
I offer two demonstrations. Algebra teaches us that $\mathcal{X}^{k}$ is the intersection of zeros of polynomial functions: namely, determinants of the $k\times k$ minors of the matrices. It is easy to see these functions are algebraically independent and are $(n-k+1)(d-k+1)$ in number. Thus locally $\mathcal{X}^k$ is a manifold of dimension $nd - (n-k+1)(d-k+1).$ Since these dimensions strictly increase as $k$ ranges from $0$ to $\min(n,d),$ the complement of each $\mathcal{X}^{k-1}$ is dense within its containing $\mathcal{X}^{k}.$
The second demonstration views $X$ as representing a linear transformation from $\mathbb{R}^d$ to $\mathbb{R}^n.$ If it is not of full rank (as assumed in the Lemma), it has nontrivial kernel and nontrivial cokernel. That is, there is a nonzero vector $x\in\mathbb{R}^d$ for which $Xx=0$ and there is a nonzero vector $y\in\mathbb{R}^n$ independent of the set of $\{Xx\mid x\in\mathbb{R}^d\}$ (this is the "column space" of $X$).
For every real number $\lambda$ define
$$X(\lambda) = X + \lambda y x^\prime.$$
Geometrically, this is altering $X$ by taking one of the directions it collapses ($x$) and instead mapping it to a new direction $y$ in the image, because
$$X(\lambda)x = (X + \lambda y x^\prime)x = Xx + \lambda y x^\prime x = 0 + \lambda(x^\prime x) y$$
is a nonzero multiple of $y.$
The dimensions of the kernel and cokernel of $X(\lambda)$ are thereby both decreased, making its rank one greater than the rank of $X.$ By choosing a sequence of nonzero values of $\lambda$ converging to $0$ we obtain a sequence $X(\lambda)$ of higher-rank matrices converging (in the Frobenius norm) to the original matrix $X,$ QED. | Eckart-Young-Mirsky theorem: rank $≤k$ or rank $=k$
The purpose of this answer is to show (a) the result is far more general and (b) when you look at it the right way, it is obvious: because the matrices of rank less than $k$ occupy a negligible portio |
55,765 | Bootstrap intervals for predictions, how to interpret it? | Yes, it is perfectly sensible.
For a quick interpretation, I like the one provided by Davison: assuming that $T$ is an estimator of a parameter $\psi$ based on a random sample $Y_1, \ldots, Y_n$, that $V_T^{0.5}$ is the standard error of $T$, that $n \rightarrow \infty$, and that $\zeta_\alpha$ is the $\alpha$-th quantile of the standard normal distribution, the interval with endpoints $T - \zeta_{1-\alpha} V_T^{0.5}$ and $T - \zeta_{\alpha} V_T^{0.5}$ contains $\psi_0$, the true but unknown value of $\psi$, with probability approximately $(1 − 2\alpha)$. (See A.C. Davison "Statistical models", Chapt. 3 for more.)
That said, there are some insightful threads on CV about why this is a somewhat over-simplified view of what a confidence interval is:
"Why does a 95% Confidence Interval (CI) not imply a 95% chance of containing the mean?",
"Interpretation of confidence interval" and "Is it true that the percentile bootstrap should never be used?".
To quote Hastie et al. from the book "Elements of Statistical Learning" (Sect. 8.4) directly: "we might think of the bootstrap distribution as a "poor man's" Bayes posterior. By perturbing the data, the bootstrap approximates the Bayesian effect of perturbing the parameters, and is typically much simpler to carry out."
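As a concrete illustration of that "poor man's posterior" view, here is a minimal sketch (made-up data, a plain linear model, and simple case resampling; it gives a percentile interval for the fitted mean at a new point, which is not the same thing as a prediction interval for a new observation):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 60
x = rng.uniform(0, 10, size=n)
y = 1.5 + 0.8 * x + rng.normal(scale=2.0, size=n)
x_new = 7.0

def predict(xs, ys, x0):
    X = np.column_stack([np.ones(len(xs)), xs])
    b, *_ = np.linalg.lstsq(X, ys, rcond=None)
    return b[0] + b[1] * x0

boot = [predict(x[idx], y[idx], x_new)
        for idx in (rng.integers(0, n, size=n) for _ in range(5000))]   # case resampling

lo, hi = np.percentile(boot, [2.5, 97.5])
print(predict(x, y, x_new), lo, hi)   # point prediction and a 95% percentile bootstrap interval
```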
When bootstrapping I think is much more important not to forget accounting for dependence structures (e.g. as described for example in Owen and Eckles' Bootstrapping data arrays of arbitrary order). These might be due to clustering of the data, heteroskedasticity (e.g. see the notion of wild bootstrap) and other deviations from IID data generating procedures. Ignoring such issues will render any discussions about the subsequent interpretation of the generated CIs, moot. | Bootstrap intervals for predictions, how to interpret it? | Yes, it is perfectly sensible.
For a quick interpretation, I like the one provided by Davison: Assuming $T$ is an estimator of a parameter $\psi$ based on a random sample $Y_1, . . . , Y_n$, $V_T^{0.5 | Bootstrap intervals for predictions, how to interpret it?
Yes, it is perfectly sensible.
For a quick interpretation, I like the one provided by Davison: assuming that $T$ is an estimator of a parameter $\psi$ based on a random sample $Y_1, \ldots, Y_n$, that $V_T^{0.5}$ is the standard error of $T$, that $n \rightarrow \infty$, and that $\zeta_\alpha$ is the $\alpha$-th quantile of the standard normal distribution, the interval with endpoints $T - \zeta_{1-\alpha} V_T^{0.5}$ and $T - \zeta_{\alpha} V_T^{0.5}$ contains $\psi_0$, the true but unknown value of $\psi$, with probability approximately $(1 − 2\alpha)$. (See A.C. Davison "Statistical models", Chapt. 3 for more.)
That said, there are some insightful threads on CV as to why this is a somewhat over-simplified view of what a confidence interval is:
"Why does a 95% Confidence Interval (CI) not imply a 95% chance of containing the mean?",
"Interpretation of confidence interval" and "Is it true that the percentile bootstrap should never be used?".
To quote Hastie et al. from the book "Elements of Statistical Learning" (Sect. 8.4) directly: "we might think of the bootstrap distribution as a "poor man's" Bayes posterior. By perturbing the data, the bootstrap approximates the Bayesian effect of perturbing the parameters, and is typically much simpler to carry out."
When bootstrapping, I think it is much more important not to forget to account for dependence structures (as described, for example, in Owen and Eckles' Bootstrapping data arrays of arbitrary order). These might be due to clustering of the data, heteroskedasticity (e.g. see the notion of the wild bootstrap) and other deviations from IID data-generating procedures. Ignoring such issues will render any discussion about the subsequent interpretation of the generated CIs moot. | Bootstrap intervals for predictions, how to interpret it?
Yes, it is perfectly sensible.
For a quick interpretation, I like the one provided by Davison: Assuming $T$ is an estimator of a parameter $\psi$ based on a random sample $Y_1, . . . , Y_n$, $V_T^{0.5 |
55,766 | Why use parametric test at all if non parametric tests are 'less strict' | It is true that precisely normal populations are rare in the real world.
However, some very useful procedures are 'robust' against mild non-normality.
Perhaps the most important of them is the t test, which performs remarkably well with samples of moderate or large size that are not exactly normal.
Also, some
tests that were derived for use with normal data have better power than
nonparametric alternatives (that is, they are more likely to reject the null
hypothesis when it is false), and this advantage persists to an extent when
these tests are used with slightly non-normal data.
Nonparametric tests such as sign tests
and the rank-based Wilcoxon, Kruskal-Wallis, and Friedman tests lose
information when data are reduced to ranks (or to +'s and -'s), and the
result can be failure to find a real effect when it is present in experimental
data.
You are correct that some ANOVA tests behave badly when data are not normal, but many tests using the chi-squared distribution are for categorical data and
normality is not an issue.
Recently, new nonparametric methods of data analysis have been invented and come into
common use because computation is cheaper and more convenient now than it
was several years ago. Some examples are bootstrapping and permutation tests.
Sometimes they require hundreds of thousands or millions of computations
compared with dozens for traditional tests. But the extra computation may
take only seconds or a few minutes with modern computers.
Admittedly, some statisticians are not familiar with these new methods and
fail to take appropriate advantage of them. Also, part of the reluctance
to change is that consumers of or clients for statistical analyses may not trust results from procedures they have never heard of. But that is changing over time.
Fortunately, modern software and computers also make it possible to
visualize data in ways that were previously tedious to show. As a very simple
example (not using very fancy graphics), here are two plots of some data that I know cannot possibly be normal (even though they do manage to pass a couple of tests of normality because of the small sample size.)
These data are also pretty obviously not centered at $0.$ The optimum statistical
procedure to confirm that would not be a t test or even a nonparametric
Wilcoxon test. But both of these tests reject the null hypothesis that the
data are centered at $0$: the t test with a P-value 0.013, the Wilcoxon
test with P-value 0.0099. Both P-values are less than 0.05, so both
confirm the obvious at the 5% level.
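The data behind those numbers are not reproduced here, but the same kind of comparison is easy to sketch in R with simulated skewed data (illustrative only; the P-values will differ from the ones quoted above):
set.seed(101)
x <- rexp(20) - 0.4                           # a small, clearly non-normal sample
stripchart(x, method = "jitter", pch = 16)    # quick look at the data
boxplot(x, horizontal = TRUE)
t.test(x, mu = 0)$p.value                     # t test of H0: mean = 0
wilcox.test(x, mu = 0)$p.value                # Wilcoxon signed-rank test of H0: centre = 0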
It is hardly a loss to science if
I don't get around to using the optimal test. And some of the people reading
my findings might be a lot more comfortable having the results of a t test.
Maybe the next generation of clients will be more demanding. | Why use parametric test at all if non parametric tests are 'less strict' | It is true that precisely normal populations are rare in the real world.
However, some very useful procedures are 'robust' against mild non-normality.
Perhaps the most important of them is the t test, | Why use parametric test at all if non parametric tests are 'less strict'
It is true that precisely normal populations are rare in the real world.
However, some very useful procedures are 'robust' against mild non-normality.
Perhaps the most important of them is the t test, which performs remarkably well with samples of moderate or large size that are not exactly normal.
Also, some
tests that were derived for use with normal data have better power than
nonparametric alternatives (that is, they are more likely to reject the null
hypothesis when it is false), and this advantage persists to an extent when
these tests are used with slightly non-normal data.
Nonparametric tests such as sign tests
and the rank-based Wilcoxon, Kruskal-Wallis, and Friedman tests lose
information when data are reduced to ranks (or to +'s and -'s), and the
result can be failure to find a real effect when it is present in experimental
data.
You are correct that some ANOVA tests behave badly when data are not normal, but many tests using the chi-squared distribution are for categorical data and
normality is not an issue.
Recently, new nonparametric methods of data analysis have been invented and come into
common use because computation is cheaper and more convenient now than it
was several years ago. Some examples are bootstrapping and permutation tests.
Sometimes they require hundreds of thousands or millions of computations
compared with dozens for traditional tests. But the extra computation may
take only seconds or a few minutes with modern computers.
Admittedly, some statisticians are not familiar with these new methods and
fail to take appropriate advantage of them. Also, part of the reluctance
to change is that consumers of or clients for statistical analyses may not trust results from procedures they have never heard of. But that is changing over time.
Fortunately, modern software and computers also make it possible to
visualize data in ways that were previously tedious to show. As a very simple
example (not using very fancy graphics), here are two plots of some data that I know cannot possibly be normal (even though they do manage to pass a couple of tests of normality because of the small sample size.)
These data are also pretty obviously not centered at $0.$ The optimum statistical
procedure to confirm that would not be a t test or even a nonparametric
Wilcoxon test. But both of these tests reject the null hypothesis that the
data are centered at $0$: the t test with a P-value 0.013, the Wilcoxon
test with P-value 0.0099. Both P-values are less than 0.05, so both
confirm the obvious at the 5% level.
It is hardly a loss to science if
I don't get around to using the optimal test. And some of the people reading
my findings might be a lot more comfortable having the results of a t test.
Maybe the next generation of clients will be more demanding. | Why use parametric test at all if non parametric tests are 'less strict'
It is true that precisely normal populations are rare in the real world.
However, some very useful procedures are 'robust' against mild non-normality.
Perhaps the most important of them is the t test, |
55,767 | Why use parametric test at all if non parametric tests are 'less strict' | All computation is based on model building. When you count two apples, you imply that there are either two identical things or a precise definition of what exactly is, and what exactly is not, an apple. The truth is: there is no sharp and precise definition of what an apple is, but our usual feeling of whether something is an apple is "precise" enough.
An ANOVA or a t-test needs the distribution to be perfectly normal in order to produce a perfectly true p-value. However, nobody is in need of a p-value to be exact to the 10th digit. Never!
So it boils down to the question of when a distribution is "close enough to normal", and the fact is that many distributions are "close enough" to make parametric tests worthwhile to learn and use.
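A small simulation makes the "close enough" point concrete; this is my own illustrative sketch (exponential data, so clearly skewed), not something from the original answer:
set.seed(7)
n <- 40                          # moderate sample size
B <- 10000                       # number of simulated experiments
pvals <- replicate(B, t.test(rexp(n) - 1, mu = 0)$p.value)   # H0 is true: the mean is 0
mean(pvals < 0.05)               # empirical type I error rate, typically close to 0.05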
Parametric tests perform very well when you need an easy to understand effect measure like Cohen's d or when you are in need of a power calculation.
That being said, there will probably be a trend away from classical tests to computationally involving tests. However, for the foreseeable future, linear regression will always come with a t-test for each predictor and when you compute Bravais-Pearson correlation, standard computer programs will test that using the t-distribution. Right now, parametric tests are dominant. | Why use parametric test at all if non parametric tests are 'less strict' | All computation is based on model building. When you count two apples you imply, that there are either two identical things or a precise definition of what an apple exactly is and on what an apple exa | Why use parametric test at all if non parametric tests are 'less strict'
All computation is based on model building. When you count two apples you imply, that there are either two identical things or a precise definition of what an apple exactly is and on what an apple exactly not is. The truth is: There is no sharp and precise definition of what an apple is, but our usual feeling, whether something is an apple, is "precise" enough.
An ANOVA or a t-test needs the distribution to be perfectly normal in order to produce a perfectly true p-value. However, nobody is in need of a p-value to be exact to the 10th digit. Never!
So the question boils down to the question, when is a distribution "close enough to normal" and fact is, many distributions are "close enough" to make parametric tests worthwile learning and using.
Parametric tests perform very well when you need an easy to understand effect measure like Cohen's d or when you are in need of a power calculation.
That being said, there will probably be a trend away from classical tests to computationally involving tests. However, for the foreseeable future, linear regression will always come with a t-test for each predictor and when you compute Bravais-Pearson correlation, standard computer programs will test that using the t-distribution. Right now, parametric tests are dominant. | Why use parametric test at all if non parametric tests are 'less strict'
All computation is based on model building. When you count two apples you imply, that there are either two identical things or a precise definition of what an apple exactly is and on what an apple exa |
55,768 | Why use parametric test at all if non parametric tests are 'less strict' | Tests such as t-tests don't actually require the data to be normal. What they require is that the distribution of the sample mean of the data (under the null hypothesis) follows a normal distribution (or very close to it). This will cause the t-statistic to follow a t-distribution, as it should. This happens when the data is normal, but also frequently when the data is not normal, as long as the sample size is moderately large. This is a consequence of the central limit theorem. Try this: make some fake non-normal data. This will represent your population. Repeatedly draw a sample of a large size from this data, and calculate its sample mean. Plot all of these sample means and you will see that they look normal. You can also plot all of the t-statistics and you will see that they look like they follow a t-distribution. | Why use parametric test at all if non parametric tests are 'less strict' | Tests such as t-tests don't actually require the data to be normal. What they require is that the distribution of the sample mean of the data (under the null hypothesis) follows a normal distribution | Why use parametric test at all if non parametric tests are 'less strict'
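A minimal R version of that experiment, with exponential data standing in for the "fake non-normal data" (my own sketch, not the answerer's code):
set.seed(1)
pop <- rexp(10^5)                                 # a skewed 'population'
n <- 50                                           # sample size
means <- replicate(5000, mean(sample(pop, n)))
hist(means, breaks = 50, main = "Sample means look roughly normal")
tstats <- replicate(5000, {
  s <- sample(pop, n)
  (mean(s) - mean(pop)) / (sd(s) / sqrt(n))       # t-statistic around the true mean
})
qqplot(qt(ppoints(5000), df = n - 1), tstats,
       xlab = "t quantiles", ylab = "simulated t-statistics")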
Tests such as t-tests don't actually require the data to be normal. What they require is that the distribution of the sample mean of the data (under the null hypothesis) follows a normal distribution (or very close to it). This will cause the t-statistic to follow a t-distribution, as it should. This happens when the data is normal, but also frequently when the data is not normal, as long as the sample size is moderately large. This is a consequence of the central limit theorem. Try this: make some fake non-normal data. This will represent your population. Repeatedly draw a sample of a large size from this data, and calculate its sample mean. Plot all of these sample means and you will see that they look normal. You can also plot all of the t-statistics and you will see that they look like they follow a t-distribution. | Why use parametric test at all if non parametric tests are 'less strict'
Tests such as t-tests don't actually require the data to be normal. What they require is that the distribution of the sample mean of the data (under the null hypothesis) follows a normal distribution |
55,769 | Why use parametric test at all if non parametric tests are 'less strict' | I think I can summarise the other answers as follows: in many non-normal cases (if the breach of normality is not too big), parametric tests still have good power, while (important!) the actual type I-error rate is close to the nominal $\alpha$-level, so why not use them?
I would like to add that parametric tests offer the huge advantage of providing an estimate of the effect size, while Wilcoxon etc. only offer p-values. | Why use parametric test at all if non parametric tests are 'less strict' | I think I can summarise the other answers as follows: in many non-normal cases (if the breach of normality is not too big), parametric tests still have good power, while (important!) the actual type I | Why use parametric test at all if non parametric tests are 'less strict'
I think I can summarise the other answers as follows: in many non-normal cases (if the breach of normality is not too big), parametric tests still have good power, while (important!) the actual type I-error rate is close to the nominal $\alpha$-level, so why not use them?
I would like to add that parametric tests offer the huge advantage of providing an estimate of the effect size, while Wilcoxon etc. only offer p-values. | Why use parametric test at all if non parametric tests are 'less strict'
I think I can summarise the other answers as follows: in many non-normal cases (if the breach of normality is not too big), parametric tests still have good power, while (important!) the actual type I |
55,770 | When to switch off the continuity correction in chisq.test function? | The test statistic is approximately $\chi^2$ distributed. According to Agresti in Categorical Data Analysis, Yates said Fisher recommended the hypergeometric distribution for an exact test. And Yates proposed this correction so the continuity corrected $\chi^2$ test approximates the result of the exact test. But nowadays, any computer can perform Fisher's exact test. In R:
fisher.test(data)$p.value
[1] 0.1889455
So instead of using the correction, you might as well use the exact test. So there is no good reason to use the continuity correction nowadays. All of this usually only matters at small sample sizes. At large sample sizes, all the methods should be quite similar.
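For reference, the corresponding calls (data being the 2x2 table from the question, as in the fisher.test call above) are simply:
chisq.test(as.matrix(data), correct = TRUE)$p.value    # with the Yates continuity correction
chisq.test(as.matrix(data), correct = FALSE)$p.value   # without the correction
fisher.test(as.matrix(data))$p.value                   # the exact test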
However, at small sample sizes, one is usually better off performing a mid-P test. This is because Fisher's exact test is based on the hypergeometric distribution. This distribution is discrete, so at small sample sizes, it can only return a limited number of p-values. And Fisher's exact test usually ends up being conservative.
For the same data, a mid-P test would return:
library(epitools)
oddsratio(as.matrix(data))$p.value
NA
two-sided midp.exact fisher.exact chi.square
[1,] NA NA NA
[2,] 0.1560137 0.1889455 0.1492988
.156 which is between the uncorrected $\chi^2$ of .149 and the Fisher test of .189. All of them are lower than the continuity corrected $\chi^2$ p-value of .201. Agresti recommends the mid-P test at small sample sizes. For what it's worth, you know the sample size is small when the test you use matters.
Agresti's Categorical Data Analysis text is a very good resource for these topics. | When to switch off the continuity correction in chisq.test function? | The test statistic is approximately $\chi^2$ distributed. According to Agresti in Categorical Data Analysis, Yates said Fisher recommended the hypergeometric distribution for an exact test. And Yates | When to switch off the continuity correction in chisq.test function?
The test statistic is approximately $\chi^2$ distributed. According to Agresti in Categorical Data Analysis, Yates said Fisher recommended the hypergeometric distribution for an exact test. And Yates proposed this correction so the continuity corrected $\chi^2$ test approximates the result of the exact test. But nowadays, any computer can perform Fisher's exact test. In R:
fisher.test(data)$p.value
[1] 0.1889455
So instead of using the correction, you might as well use the exact test. So there is no good reason to use the continuity correction nowadays. All of this usually only matters at small sample sizes. At large sample sizes, all the methods should be quite similar.
However, at small sample sizes, one is usually better off performing a mid-P test. This is because Fisher's exact test is based on the hypergeometric distribution. This distribution is discrete, so at small sample sizes, it can only return a limited number of p-values. And Fisher's exact test usually ends up being conservative.
For the same data, a mid-P test would return:
library(epitools)
oddsratio(as.matrix(data))$p.value
NA
two-sided midp.exact fisher.exact chi.square
[1,] NA NA NA
[2,] 0.1560137 0.1889455 0.1492988
.156 which is between the uncorrected $\chi^2$ of .149 and the Fisher test of .189. All of them are lower than the continuity corrected $\chi^2$ p-value of .201. Agresti recommends the mid-P test at small sample sizes. For what it's worth, you know the sample size is small when the test you use matters.
Agresti's Categorical Data Analysis text is a very good resource for these topics. | When to switch off the continuity correction in chisq.test function?
The test statistic is approximately $\chi^2$ distributed. According to Agresti in Categorical Data Analysis, Yates said Fisher recommended the hypergeometric distribution for an exact test. And Yates |
55,771 | Can I run PCA on a 4-tensor? [closed] | PCA won't work on a 4D tensor, but you could use an auto-encoder.
Note that PCA will take a 2D dataset and reduce the number of columns in it (say 100 columns to 10).
With a 4D dataset, you could use an autoencoder to either reduce it to a 4D dataset with fewer "columns" or reduce it to a 3D dataset. | Can I run PCA on a 4-tensor? [closed] | PCA won't work on a 4D tensor, but you could use an auto-encoder.
Note that PCA will take a 2D dataset and reduce the number of columns in it (say 100 columns to 10).
With a 4D dataset, you could use | Can I run PCA on a 4-tensor? [closed]
PCA won't work on a 4D tensor, but you could use an auto-encoder.
Note that PCA will take a 2D dataset and reduce the number of columns in it (say 100 columns to 10).
With a 4D dataset, you could use an autoencoder to either reduce it to a 4D dataset with fewer "columns" or reduce it to a 3D dataset. | Can I run PCA on a 4-tensor? [closed]
PCA won't work on a 4D tensor, but you could use an auto-encoder.
Note that PCA will take a 2D dataset and reduce the number of columns in it (say 100 columns to 10).
With a 4D dataset, you could use |
55,772 | Can I run PCA on a 4-tensor? [closed] | There are actually a few generalizations of PCA to higher-order tensors:
The Tucker decomposition used in "higher-order singular value decomposition".
PARAFAC aka CANDECOMP, is in some ways a special case of Tucker decompositions.
Another, more recent variant uses the tensor train decomposition. | Can I run PCA on a 4-tensor? [closed] | There are actually a few generalizations of PCA to higher-order tensors:
The Tucker decomposition used in "higher-order singular value decomposition".
PARAFAC aka CANDECOMP, is in some ways a special | Can I run PCA on a 4-tensor? [closed]
There are actually a few generalizations of PCA to higher-order tensors:
The Tucker decomposition used in "higher-order singular value decomposition".
PARAFAC aka CANDECOMP, is in some ways a special case of Tucker decompositions.
Another, more recent variant uses the tensor train decomposition. | Can I run PCA on a 4-tensor? [closed]
There are actually a few generalizations of PCA to higher-order tensors:
The Tucker decomposition used in "higher-order singular value decomposition".
PARAFAC aka CANDECOMP, is in some ways a special |
55,773 | Can I run PCA on a 4-tensor? [closed] | The answer to the question in the title is yes: you can perform a PCA on any set of data described by a coordinate system with any number of axes. The result is a new set of axes, called principal components. The PCA produces as many PCs as there are original axes, but the new coordinate system has different properties, such as:
The principal components are ranked based on how much variability
they account for, the first PC being the axis with maximum
variability along itself;
by definition, all principal components are orthogonal to each other.
Here is a very good interactive explanation of what PCA does and how it works. For further reference see here and here.
The details in the text body seem to refer to a specific program (I'm guessing Python), which make me think that this question may be more suitable for Stack Overflow. | Can I run PCA on a 4-tensor? [closed] | The answer to the question in the title is yes, you can perform a PCA on any set of data described by a coordinate system with any number of axes. The result is a new set of axes, called principal coo | Can I run PCA on a 4-tensor? [closed]
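In practice, one simple way to run an ordinary PCA on a 4-tensor is to unfold (matricize) it first, so that one mode indexes the rows and the remaining modes are flattened into columns. This is only a sketch of that basic approach in base R (made-up dimensions; it is not the tensor decompositions mentioned in another answer):
set.seed(2)
A <- array(rnorm(20 * 6 * 5 * 4), dim = c(20, 6, 5, 4))   # a 4-tensor: 20 observations
X <- matrix(A, nrow = dim(A)[1])                           # mode-1 unfolding: 20 x (6*5*4)
pca <- prcomp(X, center = TRUE, scale. = FALSE)
summary(pca)$importance[, 1:5]                             # variance explained by the first PCs
scores <- pca$x[, 1:3]                                     # a 3-dimensional representation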
The answer to the question in the title is yes: you can perform a PCA on any set of data described by a coordinate system with any number of axes. The result is a new set of axes, called principal components. The PCA produces as many PCs as there are original axes, but the new coordinate system has different properties, such as:
The principal components are ranked based on how much variability
they account for, the first PC being the axis with maximum
variability along itself;
by definition, all principal components are orthogonal to each other.
Here is a very good interactive explanation of what PCA does and how it works. For further reference see here and here.
The details in the text body seem to refer to a specific program (I'm guessing Python), which make me think that this question may be more suitable for Stack Overflow. | Can I run PCA on a 4-tensor? [closed]
The answer to the question in the title is yes, you can perform a PCA on any set of data described by a coordinate system with any number of axes. The result is a new set of axes, called principal coo |
55,774 | What are the different states in Open AI Taxi Environment? | From the paper Hierarchical Reinforcement Learning: Learning sub-goals and state-abstraction
In terms of state space there are 500 possible states:
25 squares, 5 locations for the passenger (counting the four starting locations and the taxi), and 4 destinations.
When working with enumerated states, the count of classes in each dimension in which the state can vary multiplies out to give the total state-space "volume". It is valid for the taxi to be in any location on the grid (25), for the passenger to be in any of 5 locations at that time (including in the taxi or at a location where they did not want to be dropped off), and for the passenger's destination to be any of the four special locations. These values are all independent and can occur in any combination. Hence 25 * 5 * 4 = 500 total states.
The terminal state is when the passenger location is the same as the destination location. Technically there are some unreachable states which show the passenger at the destination and the taxi somewhere else. But the state representation still includes those, because it is easier to do so. | What are the different states in Open AI Taxi Environment? | From the paper Hierarchical Reinforcement Learning: Learning sub-goals and state-abstraction
In terms of state space there are 500 possible states:
25 squares, 5 locations for the passenger (counti | What are the different states in Open AI Taxi Environment?
From the paper Hierarchical Reinforcement Learning: Learning sub-goals and state-abstraction
In terms of state space there are 500 possible states:
25 squares, 5 locations for the passenger (counting the four starting locations and the taxi), and 4 destinations.
When working with enumerated states, the count of classes in each dimension in which the state can vary multiplies out to give the total state-space "volume". It is valid for the taxi to be in any location on the grid (25), for the passenger to be in any of 5 locations at that time (including in the taxi or at a location where they did not want to be dropped off), and for the passenger's destination to be any of the four special locations. These values are all independent and can occur in any combination. Hence 25 * 5 * 4 = 500 total states.
The terminal state is when the passenger location is the same as the destination location. Technically there are some unreachable states which show the passenger at the destination and the taxi somewhere else. But the state representation still includes those, because it is easier to do so. | What are the different states in Open AI Taxi Environment?
From the paper Hierarchical Reinforcement Learning: Learning sub-goals and state-abstraction
In terms of state space there are 500 possible states:
25 squares, 5 locations for the passenger (counti |
55,775 | Choice of hyper-parameters for Recursive Feature Elimination (SVM) | I will answer my own question for posterity.
In the excellent book Applied Predictive Modeling by Kjell Johnson and Max Kuhn, the RFE algorithm is stated very clearly. It is not stated so clearly (in my opinion) in Guyon et al's original paper. Here it is:
Apparently, the correct procedure is to fully tune and train your model on the original data set, then (using the model) calculate the importances of the variables. Remove $k$ of them, then retrain and tune the model on the feature subset and repeat the process.
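A bare-bones base-R sketch of that recursion, using a linear model and the absolute standardized coefficient as the importance measure (and, for brevity, skipping the re-tuning step inside the loop — this is just to make the procedure concrete, not the caret/sklearn implementation):
set.seed(9)
n <- 200; p <- 10
X <- matrix(rnorm(n * p), n, p); colnames(X) <- paste0("x", 1:p)
y <- 2 * X[, 1] - 1.5 * X[, 2] + rnorm(n)          # only x1 and x2 matter
keep <- colnames(X)
while (length(keep) > 2) {
  dat <- data.frame(scale(X[, keep, drop = FALSE]), y = y)
  fit <- lm(y ~ ., data = dat)
  imp <- abs(coef(fit)[-1])                        # importance = |standardized coefficient|
  keep <- keep[-which.min(imp)]                    # drop the weakest feature, then refit
}
keep                                               # surviving features (with this simulated data: "x1" "x2")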
I am a python user, so I will speak to that: if you are using sklearn's RFE or RFECV function, this form of the algorithm is not done. Instead, you pass a model (presumably already tuned), and the entire RFE algorithm is performed with that model. While I can offer no formal proof, I suspect that if you use the same model for RFE selection, you will likely overfit your data, and so care should probably be taken when using sklearn's RFE or RFECV functions out of the box. Though, since RFECV does indeed perform cross-validation, it is likely the better choice.
As to whether RFE should be done with an $l_1$ penalty in the model -- I don't know. My anecdotal evidence was that, upon trying this, my model (a linear support vector classifier) did not generalize well. However, this was for a particular data set with some problems. Take that statement with a large spoon of salt.
I will update this post if I learn more. | Choice of hyper-parameters for Recursive Feature Elimination (SVM) | I will answer my own question for posterity.
In the excellent book Applied Predictive Modeling by Kjell Johnson and Max Kuhn, the RFE algorithm is stated very clearly. It is not stated so clearly (in | Choice of hyper-parameters for Recursive Feature Elimination (SVM)
I will answer my own question for posterity.
In the excellent book Applied Predictive Modeling by Kjell Johnson and Max Kuhn, the RFE algorithm is stated very clearly. It is not stated so clearly (in my opinion) in Guyon et al's original paper. Here it is:
Apparently, the correct procedure is to fully tune and train your model on the original data set, then (using the model) calculate the importances of the variables. Remove $k$ of them, then retrain and tune the model on the feature subset and repeat the process.
I am a python user, so I will speak to that: if you are using sklearn's RFE or RFECV function, this form of the algorithm is not done. Instead, you pass a model (presumably already tuned), and the entire RFE algorithm is performed with that model. While I can offer no formal proof, I suspect that if you use the same model for RFE selection, you will likely overfit your data, and so care should probably be taken when using sklearn's RFE or RFECV functions out of the box. Though, since RFECV does indeed perform cross-validation, it is likely the better choice.
As to whether RFE should be done with an $l_1$ penalty in the model -- I don't know. My anecdotal evidence was that, upon trying this, my model (a linear support vector classifier) did not generalize well. However, this was for a particular data set with some problems. Take that statement with a large spoon of salt.
I will update this post if I learn more. | Choice of hyper-parameters for Recursive Feature Elimination (SVM)
I will answer my own question for posterity.
In the excellent book Applied Predictive Modeling by Kjell Johnson and Max Kuhn, the RFE algorithm is stated very clearly. It is not stated so clearly (in |
55,776 | Deriving $\chi^2$ density from the standard normal $Z$ density | Let us first find the density of the absolute value of $X\sim N(0,1)$, $Y=|X|$, $g(y)$. For $y>0$,
\begin{eqnarray*}
G(y)\equiv\Pr(Y\leqslant y)&=&\Pr(|X|\leqslant y)\\
&=&\Pr(-y\leqslant X\leqslant y)\\
&=&F(y)-F(-y)
\end{eqnarray*}
and hence
$$g(y)=G'(y)=f(y)+f(-y)$$ (note the second inner derivative equals $-1$). Since $y=|x|$ cannot be negative, we have $g(y)=0$ for $y<0$. For densities symmetric about zero (such as the normal density) we have
$$f(y)=f(-y)$$
and therefore
$$
g(y)=\begin{cases}0&y<0\\
2f(y)&y\geqslant0\end{cases}
$$
For the second step (the change of variable technique that you were referred to in the comments), we have $y=w(z)=\sqrt{z}$ and hence $w'(z)=\frac{1}{2}z^{-\frac{1}{2}}$. Thus,
\begin{eqnarray*}
h(z)&=&\frac{2}{\sqrt{2\pi}}\exp\biggl\{-\frac{(\sqrt{z})^2}{2}\biggr\}\left|\frac{1}{2}z^{-\frac{1}{2}}\right|\\
&=&\frac{1}{\sqrt{2\pi}}z^{-\frac{1}{2}}e^{-\frac{z}{2}}
\end{eqnarray*}
for $z>0$ and 0 elsewhere. This is the density of the chi-square distribution with $\nu=1$. | Deriving $\chi^2$ density from the standard normal $Z$ density | Let us first find the density of the absolute value of $X\sim N(0,1)$, $Y=|X|$, $g(y)$. For $y>0$,
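A quick numerical check of that final expression against R's built-in chi-squared density (my own addition, just to confirm the algebra):
h <- function(z) 2 * dnorm(sqrt(z)) * (0.5 / sqrt(z))   # g(sqrt(z)) * |w'(z)| = 2 f(sqrt(z)) |w'(z)|
z <- seq(0.01, 8, by = 0.01)
max(abs(h(z) - dchisq(z, df = 1)))                      # essentially zero (numerical error only)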
\begin{eqnarray*}
G(y)\equiv\Pr(Y\leqslant y)&=&\Pr(|X|\leqslant y)\\
&=&\Pr(-y\leqslant X\leqslant y | Deriving $\chi^2$ density from the standard normal $Z$ density
Let us first find the density of the absolute value of $X\sim N(0,1)$, $Y=|X|$, $g(y)$. For $y>0$,
\begin{eqnarray*}
G(y)\equiv\Pr(Y\leqslant y)&=&\Pr(|X|\leqslant y)\\
&=&\Pr(-y\leqslant X\leqslant y)\\
&=&F(y)-F(-y)
\end{eqnarray*}
and hence
$$g(y)=G'(y)=f(y)+f(-y)$$ (note the second inner derivative equals $-1$). Since $y=|x|$ cannot be negative, we have $g(y)=0$ for $y<0$. For densities symmetric about zero (such as the normal density) we have
$$f(y)=f(-y)$$
and therefore
$$
g(y)=\begin{cases}0&y<0\\
2f(y)&y\geqslant0\end{cases}
$$
For the second step (the change of variable technique that you were referred to in the comments), we have $y=w(z)=\sqrt{z}$ and hence $w'(z)=\frac{1}{2}z^{-\frac{1}{2}}$. Thus,
\begin{eqnarray*}
h(z)&=&\frac{2}{\sqrt{2\pi}}\exp\biggl\{-\frac{(\sqrt{z})^2}{2}\biggr\}\left|\frac{1}{2}z^{-\frac{1}{2}}\right|\\
&=&\frac{1}{\sqrt{2\pi}}z^{-\frac{1}{2}}e^{-\frac{z}{2}}
\end{eqnarray*}
for $z>0$ and 0 elsewhere. This is the density of the chi-square distribution with $\nu=1$. | Deriving $\chi^2$ density from the standard normal $Z$ density
Let us first find the density of the absolute value of $X\sim N(0,1)$, $Y=|X|$, $g(y)$. For $y>0$,
\begin{eqnarray*}
G(y)\equiv\Pr(Y\leqslant y)&=&\Pr(|X|\leqslant y)\\
&=&\Pr(-y\leqslant X\leqslant y |
55,777 | Deriving $\chi^2$ density from the standard normal $Z$ density | As pointed out in the comments, your error here is that your density transformation does not take account of the nonlinearity of the transformation. I will show you a better empirical demonstration of the distributional equivalence, where we don't attempt the transform at all, but simply compare the kernel density of simulated values of the sum with the postulated chi-squared density. I will also show you a simple proof of the distributional equivalence at issue.
Empirical simulation: Rather than attempting the density transformation, let's proceed by simulation, by simulating $N=10^5$ sets of $n=4$ standard normal random variables.
#Generate matrix of simulations
set.seed(1)
N <- 10^5
n <- 4
SIMS <- matrix(rnorm(N*n), nrow = N, ncol = n)
#Compute statistic of interest and its kernel density
STAT <- rep(0, N)
for (i in 1:N) { STAT[i] <- sum(SIMS[i,]^2) }
DENS <- density(STAT)
#Set chi-squared density function
CCC <- function(x) { dchisq(x, df = n) }
#Plot the kernel density against the postulated chi-squared density
plot(DENS, xlim = c(0, 20), lwd = 2, main = 'Simulation of Density')
plot(CCC, xlim = c(0, 20), lwd = 2, lty = 2, col = 'red', add = TRUE)
As you can see from the plot, the simulated values closely follow the postulated chi-squared density. You can easily repeat this simulation analysis for different values of $n$ if you would like to demonstrate the distributional equivalence for other values.
Proving equivalence in distribution: For completeness, I supply you here with a proof of the distributional result you are trying to show by simulation. The simplest way to prove this result is via moment generating functions (or characteristic functions). Let $Z_1, ..., Z_n \sim \text{IID N}(0,1)$ be a set of IID standard normal random variables and let $G = \sum_{i=1}^n Z_i^2$. Using the law of the unconscious statistician and the substitution $y = z \sqrt{1/2-t}$, for all $t < \tfrac{1}{2}$ we have:
$$\begin{align}
\mathbb{E}(\exp(t Z_i^2))
&= \int \limits_{-\infty}^\infty
\exp(t z^2) \cdot \frac{1}{\sqrt{2 \pi}} \exp \bigg( -\frac{1}{2} z^2 \bigg) \ dz \\[6pt]
&= \frac{1}{\sqrt{2 \pi}} \int \limits_{-\infty}^\infty
\exp \bigg( - \Big( \frac{1}{2} - t \Big) z^2 \bigg) \ dz \\[6pt]
&= \frac{1}{\sqrt{(1-2t) \pi}} \int \limits_{-\infty}^\infty
\exp ( -y^2 ) \ dy \\[6pt]
&= \frac{1}{\sqrt{(1-2t)}}, \\[6pt]
\end{align}$$
Thus, for all $t < \tfrac{1}{2}$ the moment generating function for $G$ is:
$$\begin{align}
m_G(t)
&\equiv \mathbb{E}(\exp(tG)) \\[10pt]
&= \prod_{i=1}^n \mathbb{E}(\exp(t Z_i^2)) \\[6pt]
&= \prod_{i=1}^n \frac{1}{\sqrt{(1-2t)}} \\[10pt]
&= (1-2t)^{-n/2}. \\[6pt]
\end{align}$$
This is the moment generating function for the chi-squared distribution, which demonstrates that $G$ is a chi-squared random variable. (See here for proof that the moment generating function determines the distribution.) | Deriving $\chi^2$ density from the standard normal $Z$ density | As pointed out in the comments, your error here is that your density transformation does not take account of the nonlinearity of the transformation. I will show you a better empirical demonstration o | Deriving $\chi^2$ density from the standard normal $Z$ density
As pointed out in the comments, your error here is that your density transformation does not take account of the nonlinearity of the transformation. I will show you a better empirical demonstration of the distributional equivalence, where we don't attempt the transform at all, but simply compare the kernel density of simulated values of the sum with the postulated chi-squared density. I will also show you a simple proof of the distributional equivalence at issue.
Empirical simulation: Rather than attempting the density transformation, let's proceed by simulation, by simulating $N=10^5$ sets of $n=4$ standard normal random variables.
#Generate matrix of simulations
set.seed(1)
N <- 10^5
n <- 4
SIMS <- matrix(rnorm(N*n), nrow = N, ncol = n)
#Compute statistic of interest and its kernel density
STAT <- rep(0, N)
for (i in 1:N) { STAT[i] <- sum(SIMS[i,]^2) }
DENS <- density(STAT)
#Set chi-squared density function
CCC <- function(x) { dchisq(x, df = n) }
#Plot the kernel density against the postulated chi-squared density
plot(DENS, xlim = c(0, 20), lwd = 2, main = 'Simulation of Density')
plot(CCC, xlim = c(0, 20), lwd = 2, lty = 2, col = 'red', add = TRUE)
As you can see from the plot, the simulated values closely follow the postulated chi-squared density. You can easily repeat this simulation analysis for different values of $n$ if you would like to demonstrate the distributional equivalence for other values.
Proving equivalence in distribution: For completeness, I supply you here with a proof of the distributional result you are trying to show by simulation. The simplest way to prove this result is via moment generating functions (or characteristic functions). Let $Z_1, ..., Z_n \sim \text{IID N}(0,1)$ be a set of IID standard normal random variables and let $G = \sum_{i=1}^n Z_i^2$. Using the law of the unconscious statistician and the substitution $y = z \sqrt{1/2-t}$, for all $t < \tfrac{1}{2}$ we have:
$$\begin{align}
\mathbb{E}(\exp(t Z_i^2))
&= \int \limits_{-\infty}^\infty
\exp(t z^2) \cdot \frac{1}{\sqrt{2 \pi}} \exp \bigg( -\frac{1}{2} z^2 \bigg) \ dz \\[6pt]
&= \frac{1}{\sqrt{2 \pi}} \int \limits_{-\infty}^\infty
\exp \bigg( - \Big( \frac{1}{2} - t \Big) z^2 \bigg) \ dz \\[6pt]
&= \frac{1}{\sqrt{(1-2t) \pi}} \int \limits_{-\infty}^\infty
\exp ( -y^2 ) \ dy \\[6pt]
&= \frac{1}{\sqrt{(1-2t)}}, \\[6pt]
\end{align}$$
Thus, for all $t < \tfrac{1}{2}$ the moment generating function for $G$ is:
$$\begin{align}
m_G(t)
&\equiv \mathbb{E}(\exp(tG)) \\[10pt]
&= \prod_{i=1}^n \mathbb{E}(\exp(t Z_i^2)) \\[6pt]
&= \prod_{i=1}^n \frac{1}{\sqrt{(1-2t)}} \\[10pt]
&= (1-2t)^{-n/2}. \\[6pt]
\end{align}$$
This is the moment generating function for the chi-squared distribution, which demonstrates that $G$ is a chi-squared random variable. (See here for proof that the moment generating function determines the distribution.) | Deriving $\chi^2$ density from the standard normal $Z$ density
As pointed out in the comments, your error here is that your density transformation does not take account of the nonlinearity of the transformation. I will show you a better empirical demonstration o |
55,778 | Coordinate descent with constraints | The bland answer is that to solve constrained optimization problems, you need an algorithm that knows and uses the constraints - simply applying unconstrained optimization algorithms will not work.
Box constraints
Taking the algorithm you described in your answer, it will ''not work'' for box constraints, depending on the value of the step-size $\alpha$;
Take $f(x) = x$ with the constraint $x \geq -0.9$, and start at $x_0 = 0$ with a step-size $\alpha=1$.
The only step available (to $x = -1$) does not satisfy the constraint, so the scheme cannot move, even though the constrained minimum is at $x = -0.9$.
You can decrease $\alpha$ to make it ''work'', but you can always find a function/starting point for which it does not work; the correct step-size depends on the interaction between the function and the constraints, and thus requires solving a constrained optimization problem to set it.
If you are able to solve the subproblem of finding the minimum along the dimension you are optimizing while respecting the constraint, i.e. taking steps
$$x_i^{k+1} = \arg \min_{x_i \in C} f(x_1^{k}, ..., x_{i-1}^{k}, x_i, x_{i+1}^{k}, ..., x_D^{k}),$$
you can solve box constraints, as in your first reference.
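For box constraints that one-dimensional subproblem can be solved with base R's optimize() over the feasible interval; here is a small sketch on a convex quadratic with a cross term (my own toy example, not the one from the references):
f <- function(x) x[1]^2 + 3 * x[2]^2 + x[1] * x[2]   # convex, with a cross term
lo <- c(-1, -2); hi <- c(-0.1, 5)                    # box: lo[i] <= x[i] <= hi[i]
x <- c(-0.5, 4)                                      # feasible starting point
for (k in 1:50) {
  for (i in 1:2) {
    g <- function(t) { x[i] <- t; f(x) }             # f as a function of coordinate i only
    x[i] <- optimize(g, lower = lo[i], upper = hi[i])$minimum
  }
}
x                                                    # converges to the box-constrained minimum for this example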
Linear constraints
However, this approach will choke on linear constraints.
Take $f(x,y) = x^2 + y^2$, with $x + y < -1$. The minimum is at $(-0.5,-0.5)$, but if you start at $(-1,0)$ you can not make progress on $x$ (not allowed to be bigger than $-1$) nor $y$ (is at the minimum given $x = -1$).
Your second ref. is able to get around that issue by considering block coordinate updates; changing $k$ coordinates at each iteration makes it possible to get around linear constraints involving fewer than $k$ variables at a time.
Non-linear constraints
Your non-linear constraint is also non-convex; start at $(0,-3)$ and the algorithm would converge to $(0, -2.9...)$, which is not the minimum.
What to do
You can either use a constrained optimization algorithm (Frank-Wolfe comes to mind) or re-parametrize your problem so that it incorporates the constraints; if you want to find the minimum of $f(x), x \geq 0$, try to find the minimum of $g(y) = f(y^2), y \in \mathbb{R}$ | Coordinate descent with constraints | The bland answer is that to solve constrained optimization problems, you need an algorithm that knows and uses the constraints - simply applying unconstrained optimization algorithms will not work.
Bo | Coordinate descent with constraints
The bland answer is that to solve constrained optimization problems, you need an algorithm that knows and uses the constraints - simply applying unconstrained optimization algorithms will not work.
Box constraints
Taking the algorithm you described in your answer, it will ''not work'' for box constraints, depending on the value of the step-size $\alpha$;
Take $f(x) = x$, $x \leq 0.9$, start at $x_0 = 0$ with a step-size $\alpha=1$.
The only step available does not satisfy the constraint.
You can decrease $\alpha$ to make it ''work'', but you can always find a function/starting point for which it does not work; the correct step-size depends on the interaction between the function and the constraints and thus require solving a constrained optimization problem to set it.
If you are able to solve the subproblem of finding the minimum along the dimension you are optimizing while respecting the constraint, i.e. taking steps
$$x_i^{k+1} = \arg \min_{x_i \in C} f(x_1^{k}, ..., x_{i-1}^{k}, x_i, x_{i+1}^{k}, ..., x_D^{k}),$$
you can solve box constraints, as in your first reference.
Linear constraints
However, this approach will choke on linear constraints.
Take $f(x,y) = x^2 + y^2$, with $x + y < -1$. The minimum is at $(-0.5,-0.5)$, but if you start at $(-1,0)$ you can not make progress on $x$ (not allowed to be bigger than $-1$) nor $y$ (is at the minimum given $x = -1$).
Your second ref. is able to get around that issue by considering block coordinate updates; changing $k$ coordinates at each iteration makes it possible to get around linear constraints involving less than $k$ variable at a time.
Non-linear constraints
Your non-linear constraint is also non convex; start at $(0,-3)$ and the algorithm would converge to $(0, -2.9...)$, not at the minimum.
What to do
You can either use a constrained optimization algorithm (Frank-Wolfe comes to mind) or re-parametrize your problem so that it incorporates the constraints; if you want to find the minimum of $f(x), x \geq 0$, try to find the minimum of $g(y) = f(y^2), y \in \mathbb{R}$ | Coordinate descent with constraints
The bland answer is that to solve constrained optimization problems, you need an algorithm that knows and uses the constraints - simply applying unconstrained optimization algorithms will not work.
Bo |
55,779 | Coordinate descent with constraints | Perhaps this question is more difficult than I thought, or would be better asked on math.stackexchange.com. But since it arose in the context of SVM I will leave it here.
There are some papers discussing coordinate descent and constrained optimization with the following key words.
Linear and box constraints and blocks of primal variables here
Coupled constraints, Compact convex sets and Polyhedra here
A cryptic MIT paper
While the mathematics are beyond me, I will proceed to show visually and experimentally that, for the following toy convex function, coordinate descent works for box, linear and highly nonlinear constraints.
Consider the function to be minimized:
$$f(x_1,x_2) = .01x_1^2 + 0.03x_2^2$$
$$\nabla f = [2 \times 0.01 x_1, 2 \times .03 x_2]$$
Since the gradient has only separate terms in $x_1$ and $x_2$, we can evaluate it one coordinate at a time at each step $k$, holding the other variable constant. Checking the constraint at each step allows us to either (see the sketch after this list):
Do nothing if $x_i^{(k+1)}$ has a value outside the constraint boundary
Update $x_i^{(k+1)} = x_i^{(k)} - \alpha \ \nabla_i \ f(x_1, x_2)$ otherwise
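A direct R sketch of that update rule (my own translation of the scheme just described, here using the linear constraint $x_1 + x_2 \leq -2$ as the feasibility check):
f_grad <- function(x) c(2 * 0.01 * x[1], 2 * 0.03 * x[2])   # gradient of .01*x1^2 + .03*x2^2
feasible <- function(x) x[1] + x[2] <= -2
x <- c(-3, 1); alpha <- 5
for (k in 1:2000) {
  for (i in 1:2) {
    trial <- x
    trial[i] <- x[i] - alpha * f_grad(x)[i]
    if (feasible(trial)) x <- trial                          # otherwise do nothing for x_i
  }
}
x   # where the scheme stops; as the other answer notes, this need not be the constrained minimum for every start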
A few examples
1) Box and 2) linear constraints
$c1: \ x_1 \leq -1 \ \ c2: \ x_2 \leq -1.5 $
$c: \ x_1 + x_2 \leq -2$
3) Non linear constraint
$c: \ (x_1 + 1.5)^3 + (x_2 + 1.5)^3 \leq 2$
Next steps
Perhaps a function which is convex, but whose gradient has cross terms $x_1x_2$ will prove more difficult or even impossible to optimize using CD | Coordinate descent with constraints | Perhaps this question is more difficult than I thought, or would be better asked on math.stackexchange.com. But since it arose in the context of SVM I will leave it here.
There are some papers discus | Coordinate descent with constraints
Perhaps this question is more difficult than I thought, or would be better asked on math.stackexchange.com. But since it arose in the context of SVM I will leave it here.
There are some papers discussing coordinate descent and constrained optimization with the following key words.
Linear and box constraints and blocks of primal variables here
Coupled constraints, Compact convex sets and Polyhedra here
A cryptic MIT paper
While the mathematics are beyond me, I will proceed to show visually and experimentally that for the following toy, convex function, coordinate descent works for box, linear and highly non linear constraints.
Consider the function to be minimized:
$$f(x_1,x_2) = .01x_1^2 + 0.03x_2^2$$
$$\nabla f = [2 \times 0.01 x_1, 2 \times .03 x_2]$$
Since the gradient has only terms in $x_1$ and $x_2$ respectively, we can evaluate it at each step $k$ holding the other variable constant. Checking the constraint at each step allows to
Do nothing if $x_i^{(k+1)}$ has a value outside the constraint boundary
Update $x_i^{(k+1)} = x_i^{(k)} - \alpha \ \nabla_i \ f(x_1, x_2)$ otherwise
A few examples
1) Box and 2) linear constraints
$c1: \ x_1 \leq -1 \ \ c2: \ x_2 \leq -1.5 $
$c: \ x_1 + x_2 \leq -2$
3) Non linear constraint
$c: \ (x_1 + 1.5)^3 + (x_2 + 1.5)^3 \leq 2$
Next steps
Perhaps a function which is convex, but whose gradient has cross terms $x_1x_2$ will prove more difficult or even impossible to optimize using CD | Coordinate descent with constraints
Perhaps this question is more difficult than I thought, or would be better asked on math.stackexchange.com. But since it arose in the context of SVM I will leave it here.
There are some papers discus |
55,780 | Regularized parameter overfitting the data (example) | Regularization is not guaranteed to reduce overfitting.
Regularization reduces overfitting in many cases because, in those cases, the true data-generating models (e.g., physics models) have small weights. Regularization is a way to inject this knowledge into our model. It weeds out models that have large weights, which tend to be models that overfit.
However, in simulation, you can definitely construct a model that has large weights and generate data from it. Regularization may not work well with this kind of data; it will create a large bias in this case, I guess.
But this kind of data is rare in the real world.
The intuition what I think of is that not much weight is given to a particular feature. But isn't it sometimes necessary to focus on one feature (like in the above example x1)?
Edit: You can make the classifier give more weight to $x1$ by scaling $x1$ back by a factor of, say, 10. Then the black boundary will get very close to the magenta boundary. This is equivalent to discounting the component on the $x1$ dimension in the calculation of the Euclidean distance, but what makes the red cross on the left an outlier is largely because it is distant from the rest of the datapoints in the same class, in $x1$ dimension. By scaling $x1$ down, we are reducing the contribution of the outlier in the total loss.
Actually, we cannot tell from this example whether the magenta boundary is an overfit. This is the same as saying that we cannot be sure if the red cross on the bottom left is an outlier. We have to see other samples (like a test set) to be more certain.
Just for the sake of discussion, suppose we are sure that the red cross on the left is indeed an outlier. The problem, I believe, if not that we are not giving enough weight on one particular feature, it is that with a hard margin classifier, the decision boundary is sensitive to outliers, because most of the cost comes from one single datapoint. Regularization does not really help in this situation. A soft-margin classifier, which involves more datapoints in determining the decision boundary, will result in a boundary close to the one marked in black. | Regularized parameter overfitting the data (example) | Regularization does not guarantee to reduce overfit.
Regularization reduces overfit in many cases because in these cases the real data model (e.g., physics models) have small weights. Reguarlization | Regularized parameter overfitting the data (example)
Regularization does not guarantee to reduce overfit.
Regularization reduces overfit in many cases because in these cases the real data model (e.g., physics models) have small weights. Reguarlization is a way to inject this knowledge in our model. It weeds out those models that have large weights, which tend to be models that overfit.
However, in simulation, you can definitely construct a model that have large weights and generate data from it. Regularization may not work well with this kind of data. Regularization will create a large bias in this case, I guess.
But this kind of data is rare in real world.
The intuition what I think of is that not much weight is given to a particular feature. But isn't it sometimes necessary to focus on one feature (like in the above example x1)?
Edit: You can make the classifier give more weight to $x1$ by scaling $x1$ back by a factor of, say, 10. Then the black boundary will get very close to the magenta boundary. This is equivalent to discounting the component on the $x1$ dimension in the calculation of the Euclidean distance, but what makes the red cross on the left an outlier is largely because it is distant from the rest of the datapoints in the same class, in $x1$ dimension. By scaling $x1$ down, we are reducing the contribution of the outlier in the total loss.
Actually, we cannot tell from this example whether the magenta boundary is an overfit. This is the same as saying that we cannot be sure if the red cross on the bottom left is an outlier. We have to see other samples (like a test set) to be more certain.
Just for the sake of discussion, suppose we are sure that the red cross on the left is indeed an outlier. The problem, I believe, if not that we are not giving enough weight on one particular feature, it is that with a hard margin classifier, the decision boundary is sensitive to outliers, because most of the cost comes from one single datapoint. Regularization does not really help in this situation. A soft-margin classifier, which involves more datapoints in determining the decision boundary, will result in a boundary close to the one marked in black. | Regularized parameter overfitting the data (example)
Regularization does not guarantee to reduce overfit.
Regularization reduces overfit in many cases because in these cases the real data model (e.g., physics models) have small weights. Reguarlization |
55,781 | Regularized parameter overfitting the data (example) | Regularization cost for magenta classifier is low but still it seems to overfit the data and vice-versa for the black classifier. What's going on? L2 regularization tend to make the coefficients close to zero. But how does that helps in reducing overfitting?
Overfitting isn't about what happens to the training data alone. It's about the comparison of the training data and out-of-sample. If your training loss is low, but your out-of-sample loss is large, you're overfitting. If your training loss is low, and your out-of-sample loss is low, congrats! You have some evidence that your model generalizes well.
So if we apply that definition here, it’s obvious that we can’t say anything about whether the model is over- or under-fit because there’s no comparison to out of sample data.
Regularization can help with overfitting by discouraging the model from estimating too-complex a decision boundary. The diagonal/magenta decision boundary could be "too-complex" (as measured by $L^2$ regularization omitting the intercept) if the X far away from the other Xs and near the Os is not representative of the process overall (i.e. it is a quirk).
The intuition what I think of is that not much weight is given to a particular feature. But isn't it sometimes necessary to focus on one feature (like in the above example x1)?
Preventing "too much weight given to a particular feature" isn't what $L^2$ regularization does, but does sound more like $L^\infty$ regularization (which tends to result in weights that are more evenly distributed).
$L^2$ regularization penalizes large coefficients (or encourages coefficients to be nearer to zero).
The distinction I'm making is subtle, but the point is that in $L^2$ regularization, a model can "put its eggs all in one basket" and have a small number of large coefficients and many near-zero coefficients. This is desirable when there are only a few highly relevant features.
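A tiny base-R illustration of that shrinkage (ridge regression via its closed form, on made-up data — my own sketch, not tied to the question's figure):
set.seed(5)
n <- 100
X <- cbind(x1 = rnorm(n), x2 = rnorm(n))
y <- 3 * X[, "x1"] + 0.2 * X[, "x2"] + rnorm(n)
ridge <- function(lambda) drop(solve(t(X) %*% X + lambda * diag(2), t(X) %*% y))
rbind(lambda_0 = ridge(0), lambda_10 = ridge(10), lambda_100 = ridge(100))
# both coefficients shrink toward zero as lambda grows; neither is forced to exactly zero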
It's unusual to apply regularization to the intercept. If you omit the intercept regularization, the black classifier has lower regularization penalty. This isn't sufficient to permit us to draw any conclusions about which model, though, since we do not have information about out-of-sample generalization (or even about in-sample loss). | Regularized parameter overfitting the data (example) | Regularization cost for magenta classifier is low but still it seems to overfit the data and vice-versa for the black classifier. What's going on? L2 regularization tend to make the coefficients close | Regularized parameter overfitting the data (example)
Regularization cost for magenta classifier is low but still it seems to overfit the data and vice-versa for the black classifier. What's going on? L2 regularization tend to make the coefficients close to zero. But how does that helps in reducing overfitting?
Overfitting isn't about what happens to the training data alone. It's about the comparison of the training data and out-of-sample. If your training loss is low, but your out-of-sample loss is large, you're overfitting. If your training loss is low, and your out-of-sample loss is low, congrats! You have some evidence that your model generalizes well.
So if we apply that definition here, it’s obvious that we can’t say anything about whether the model is over- or under-fit because there’s no comparison to out of sample data.
Regularization can help with overfitting by discouraging the model from estimating too-complex a decision boundary. The diagonal/magenta decision boundary could be "too-complex" (as measured by $L^2$ regularization omitting the intercept) if the X far away from the other Xs and near the Os is not representative of the process overall (i.e. it is a quirk).
The intuition what I think of is that not much weight is given to a particular feature. But isn't it sometimes necessary to focus on one feature (like in the above example x1)?
Preventing "too much weight given to a particular feature" isn't what $L^2$ regularization does, but does sound more like $L^\infty$ regularization (which tends to result in weights that are more evenly distributed).
$L^2$ regularization penalizes large coefficients (or encourages coefficients to be nearer to zero).
The distinction I'm making is subtle, but the point is that in $L^2$ regularization, a model can "put its eggs all in one basket" and have a small number of large coefficients and many near-zero coefficients. This is desirable when there are only a few highly relevant features.
It's unusual to apply regularization to the intercept. If you omit the intercept regularization, the black classifier has lower regularization penalty. This isn't sufficient to permit us to draw any conclusions about which model, though, since we do not have information about out-of-sample generalization (or even about in-sample loss). | Regularized parameter overfitting the data (example)
Regularization cost for magenta classifier is low but still it seems to overfit the data and vice-versa for the black classifier. What's going on? L2 regularization tend to make the coefficients close |
55,782 | In Bayesian statistics, what does this notation formally mean? | You are correct to say that the statement is not mathematically precise; it is a shorthand that is sometimes used to set out a hierarchical model. I'm not a fan of this notation personally, since it is an unnecessary abuse of notation. (I prefer to specify independence separately.)
Presumably the intention of such statements is to set up mutually independent random variables with specified conditional distributions. Implicitly, the argument variable is taken to be independent of any other iteration of the conditioning variable not mentioned in the conditioning statement. So I would think that the shorthand statement,
$$Y_i|v_i \overset{ind}{\sim} f_i(y_i|v_i),$$
would formally mean that $\{ Y_k \}$ are mutually independent conditional on $\{ v_k \}$, and each element of the former has conditional distribution $Y_i | \{ v_k \} \sim f_i(v_i)$. So your second interpretation is the one I would use. The reason I would think that this is the interpretation is simply that you cannot proceed with your analysis with weaker assumptions. If what was intended by the notation was certain kinds of pairwise independence, or independence conditional only on subsets of $\{ v_k \}$ then there is not enough specification to set up a hierarchical model with a fixed likelihood function. | In Bayesian statistics, what does this notation formally mean? | You are correct to say that the statement is not mathematically precise; it is a shorthand that is sometimes used to set out a hierarchical model. I'm not a fan of this notation personally, since it | In Bayesian statistics, what does this notation formally mean?
You are correct to say that the statement is not mathematically precise; it is a shorthand that is sometimes used to set out a hierarchical model. I'm not a fan of this notation personally, since it is an unnecessary abuse of notation. (I prefer to specify independence separately.)
Presumably the intention of such statements is to set up mutually independent random variables with specified conditional distributions. Implicitly, the argument variable is taken to be independent of any other iteration of the conditioning variable not mentioned in the conditioning statement. So I would think that the shorthand statement,
$$Y_i|v_i \overset{ind}{\sim} f_i(y_i|v_i),$$
would formally mean that $\{ Y_k \}$ are mutually independent conditional on $\{ v_k \}$, and each element of the former has conditional distribution $Y_i | \{ v_k \} \sim f_i(v_i)$. So your second interpretation is the one I would use. The reason I would think that this is the interpretation is simply that you cannot proceed with your analysis with weaker assumptions. If what was intended by the notation was certain kinds of pairwise independence, or independence conditional only on subsets of $\{ v_k \}$ then there is not enough specification to set up a hierarchical model with a fixed likelihood function. | In Bayesian statistics, what does this notation formally mean?
You are correct to say that the statement is not mathematically precise; it is a shorthand that is sometimes used to set out a hierarchical model. I'm not a fan of this notation personally, since it |
55,783 | In Bayesian statistics, what does this notation formally mean? | Taking the simpler case
$$
v_i \overset{ind}{\sim} g_i(v_i)
$$
presumably means that "$v_i$'s are independently distributed according to $g_i$ distributions each". The notation is a shortcut for describing each $i$-th variable as a separate case. There's often a trade-off between simplicity and formality.
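In symbols (my addition for concreteness), the independence shorthand amounts to saying that the joint density factorizes,
$$p(v_1, \ldots, v_n) = \prod_{i=1}^{n} g_i(v_i),$$
with each factor allowed to be a different distribution $g_i$.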
But the notation you quote is strange, e.g. it puts $v_i$ on both sides of $\sim$, while usually you'd see variable on left hand side and distribution with parameters on right hand side, so it is redundant if used like this. | In Bayesian statistics, what does this notation formally mean? | Taking the simpler case
$$
v_i \overset{ind}{\sim} g_i(v_i)
$$
presumably means that "$v_i$'s are independently distributed according to $g_i$ distributions each". The notation is a shortcut for descr | In Bayesian statistics, what does this notation formally mean?
Taking the simpler case
$$
v_i \overset{ind}{\sim} g_i(v_i)
$$
presumably means that "$v_i$'s are independently distributed according to $g_i$ distributions each". The notation is a shortcut for describing each $i$-th variable as a separate case. There's often a trade-off between simplicity and formality.
But the notation you quote is strange, e.g. it puts $v_i$ on both sides of $\sim$, while usually you'd see variable on left hand side and distribution with parameters on right hand side, so it is redundant if used like this. | In Bayesian statistics, what does this notation formally mean?
Taking the simpler case
$$
v_i \overset{ind}{\sim} g_i(v_i)
$$
presumably means that "$v_i$'s are independently distributed according to $g_i$ distributions each". The notation is a shortcut for descr |
55,784 | Modelling nonnegative integer time series | Use a model for integer time series, not ARIMA or Holt-Winters' method. You could try an INAR for example, or an autoregressive conditional Poisson model, or a GLARMA model. See Fokianos (2012) for a comprehensive review. Some possibilities in R are described in Liboschik et al (2017). | Modelling nonnegative integer time series | Use a model for integer time series, not ARIMA or Holt-Winters' method. You could try an INAR for example, or an autoregressive conditional Poisson model, or a GLARMA model. See Fokianos (2012) for a | Modelling nonnegative integer time series
Use a model for integer time series, not ARIMA or Holt-Winters' method. You could try an INAR for example, or an autoregressive conditional Poisson model, or a GLARMA model. See Fokianos (2012) for a comprehensive review. Some possibilities in R are described in Liboschik et al (2017). | Modelling nonnegative integer time series
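A minimal sketch of what this can look like in R (my addition; I am assuming the reference is to the tscount package described in Liboschik et al (2017), and the settings below are purely illustrative):
library(tscount)
y <- ts(rpois(200, lambda = 5))                         # toy nonnegative integer series
fit <- tsglm(y,
             model = list(past_obs = 1, past_mean = 1), # INGARCH(1,1)-type dependence
             distr = "nbinom",                          # or "poisson"
             link = "log")
summary(fit)
predict(fit, n.ahead = 5)                               # forecasts 5 steps ahead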
Use a model for integer time series, not ARIMA or Holt-Winters' method. You could try an INAR for example, or an autoregressive conditional Poisson model, or a GLARMA model. See Fokianos (2012) for a |
55,785 | Teaching (very elementary) statistical modelling | Not sure whether this is worth an answer or just a comment, but I want the room this forum gives only to answers.
Have a look at this golf putting example that Andrew Gelman gave in a webinar about Stan. Forget about the Bayes aspect of it. It just shows a standard model compared with an informed model and how the result improves when knowledge about the subject (not prior data!) is brought into model choice: https://youtu.be/T1gYvX5c2sM?t=2844
That should be worth noting, regardless of the level of mathematical knowledge.
The second source is a YouTube talk by Andrew Gelman as well. In his talk "crimes on data" he describes a number of published -yet very problematic- models. The examples are "real world" in the sense of "actually published", problems that you may be confronted with as a mathematician. You can learn a lot from that without lots of mathematics. I, for one, knew that small studies are unlikely to give significant results. It was not until this talk that I understood how a significant result in a small study is even more of a problem. Start with this example, and if you like it, listen to the whole talk:
https://youtu.be/fc1hkFC2c1E?t=735 | Teaching (very elementary) statistical modelling | Not shure, whether this is worth an answer or just a comment, but I want the room this forum gives for answers only.
Have look at this golf putting example, that Andrew Gelman gave in a webinar about | Teaching (very elementary) statistical modelling
Not sure whether this is worth an answer or just a comment, but I want the room this forum gives only to answers.
Have look at this golf putting example, that Andrew Gelman gave in a webinar about stan. Forget about the Bayes-aspect of it. It just shows a standard model compared with an informed model and how the result improves when knowledge about the subject (not prior data!) is brought into model choice: https://youtu.be/T1gYvX5c2sM?t=2844
That should be worth noting, regardless of the level of mathematical knowledge.
The second source is a YouTube talk by Andrew Gelman as well. In his talk "crimes on data" he describes a number of published -yet very problematic- models. The examples are "real world" in the sense of "actually published", problems that you may be confronted with as a mathematician. You can learn a lot from that without lots of mathematics. I, for one, knew that small studies are unlikely to give significant results. It was not until this talk that I understood how a significant result in a small study is even more of a problem. Start with this example, and if you like it, listen to the whole talk:
https://youtu.be/fc1hkFC2c1E?t=735 | Teaching (very elementary) statistical modelling
Not sure whether this is worth an answer or just a comment, but I want the room this forum gives only to answers.
Have look at this golf putting example, that Andrew Gelman gave in a webinar about |
55,786 | Teaching (very elementary) statistical modelling | But missing is seemingly why we might choose the Poisson distribution over other choices
This is indeed a frequent trait of probability textbooks – and even of research works in statistics. Somewhat of a taboo it seems. It was pointed out by the statistician A. P. Dawid (1982, § 4, p. 220):
Where do probability models come from? To judge by the resounding silence over this question on the part of most statisticians, it seems highly embarrassing. In general, the theoretician is happy to accept that his abstract probability triple $(\Omega, \mathcal{A}, \mathcal{P})$ was found under a gooseberry bush, while the applied statistician's model "just growed".
From a physicist's (or biologist's etc.) point of view this is a pity and a sin, because it doesn't help explain why and how things happen. I even fear that today there's a tendency in the opposite direction: to see the probability model itself as an explanation. For example I've seen a couple of research articles stating that the explanation behind particular phenomena in economics was "luck" – which of course isn't an explanation at all, but just a declaration that we don't know the causes or mechanisms.
Well, sorry for this initial rant. It's really comforting to see someone asking why a Poisson model is used, rather than saying "because Poisson"!
Back to your question. It's difficult to find the kinds of examples you're asking about. You most likely will find them in physicists' writings; but their use of probability is often rather poor, and the maths may be too much for your purpose. The works of Jaynes are an exception, so you might find examples there. Skim through his Probability Theory: The Logic of Science. In particular check §§ 6.10–6.11 for an example of how the Poisson distribution appears from physical reasoning.
An absolutely fantastic work where each probability model used is motivated by reasoning about the particular real context is Mosteller & Wallace's Inference in an authorship problem (1963). Really recommended. Maybe you can build mathematically simplified examples from it.
If you've found any useful examples in the literature, please do answer your own question and share them with us!
References
A. P. Dawid (1982): Intersubjective statistical models, in G. Koch, F. Spizzichino: Exchangeability in Probability and Statistics (North-Holland), pp. 217–232
E. T. Jaynes (2003): Probability Theory: The Logic of Science (Cambridge). Check out this link and this link
F. Mosteller, D. L. Wallace (1963): Inference in an authorship problem: A comparative study of discrimination methods applied to the authorship of the disputed Federalist papers, J. Am. Stat. Assoc. 58/302, pp. 275–309. Also at this link. | Teaching (very elementary) statistical modelling | But missing is seemingly why we might choose the Poisson distribution over other choices
This is indeed a frequent trait of probability textbooks – and even of research works in statistics. Somewhat | Teaching (very elementary) statistical modelling
But missing is seemingly why we might choose the Poisson distribution over other choices
This is indeed a frequent trait of probability textbooks – and even of research works in statistics. Somewhat of a taboo it seems. It was pointed out by the statistician A. P. Dawid (1982, § 4, p. 220):
Where do probability models come from? To judge by the resounding silence over this question on the part of most statisticians, it seems highly embarrassing. In general, the theoretician is happy to accept that his abstract probability triple $(\Omega, \mathcal{A}, \mathcal{P})$ was found under a gooseberry bush, while the applied statistician's model "just growed".
From a physicist's (or biologist's etc.) point of view this is a pity and a sin, because it doesn't help explaining why and how things happen. I even fear that today there's a tendency in the opposite direction: to see the probability model itself as an explanation. For example I've seen a couple of research articles stating that the explanation behind particular phenomena in economics was "luck" – which of course isn't an explanation at all, but just a declaration that we don't know the causes or mechanisms.
Well, sorry for this initial rant. It's really comforting to see someone asking why a Poisson model is used, rather than saying "because Poisson"!
Back to your question. It's difficult to find the kinds of examples you're asking about. You most likely will find them in physicists' writings; but their use of probability is often rather poor, and the maths may be too much for your purpose. The works of Jaynes's are an exception, so you might find examples there. Skim through his Probability Theory: The Logic of Science. In particular check §§ 6.10–6.11 for an example of how the Poisson distribution appears from physical reasoning.
An absolutely fantastic work where each probability model used is motivated by reasoning about the particular real context is Mosteller & Wallace's Inference in an authorship problem (1963). Really recommended. Maybe you can build mathematically simplified examples from it.
If you've found any useful examples in the literature, please do answer your own question and share them with us!
References
A. P. Dawid (1982): Intersubjective statistical models, in G. Koch, F. Spizzichino: Exchangeability in Probability and Statistics (North-Holland), pp. 217--232
E. T. Jaynes (2003): Probability Theory: The Logic of Science (Cambridge). Check out this link and this link
F. Mosteller, D. L. Wallace (1963): Inference in an authorship problem: A comparative study of discrimination methods applied to the authorship of the disputed Federalist papers, J. Am. Stat. Assoc. 58/302, pp. 275–309. Also at this link. | Teaching (very elementary) statistical modelling
But missing is seemingly why we might choose the Poisson distribution over other choices
This is indeed a frequent trait of probability textbooks – and even of research works in statistics. Somewhat |
55,787 | Find the PDF from quantiles | Your approach is valid, if you use the integral of a cubic B-spline to interpolate the quantiles with the condition that all B-spline coefficients are nonnegative.
https://en.m.wikipedia.org/wiki/B-spline
This preserves the monotonicity of the quantiles in the interpolating function, while normal cubic splines do not.
There is then no need to numerically differentiate the spline; you get the PDF from a normal evaluation of the B-spline. The integral of the B-spline curve is also available without numerical computation. You get it from any serious library that implements splines, for example, in Python's scipy module.
The remaining question is how to find the B-spline coefficients in this case. I don't have a perfect answer right now, although an efficient algorithm should be possible. A simple brute-force way would be to minimize the squared residuals between the integrated spline and the quantiles with a numerical minimizer, while constraining the coefficients to be nonnegative.
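A rough R sketch of the general idea (my addition; the answer above proposes a constrained B-spline fit, while this simpler variant interpolates the CDF monotonically and takes the interpolant's derivative):
p <- c(0.1, 0.25, 0.5, 0.75, 0.9)             # probability levels
q <- qnorm(p)                                 # example quantiles (standard normal)
cdf_hat <- splinefun(q, p, method = "hyman")  # monotone cubic interpolant of the CDF
xs <- seq(min(q), max(q), length.out = 200)
pdf_hat <- cdf_hat(xs, deriv = 1)             # derivative of the interpolant = PDF estimate
plot(xs, pdf_hat, type = "l")
Because the interpolant is monotone, the resulting density estimate is nonnegative over the interpolated range.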
I can add a numerical example in a few days if desired. | Find the PDF from quantiles | Your approach is valid, if you use the integral of a cubic B-spline to interpolate the quantiles with the condition that all B-spline coefficients are nonnegative.
https://en.m.wikipedia.org/wiki/B-sp | Find the PDF from quantiles
Your approach is valid, if you use the integral of a cubic B-spline to interpolate the quantiles with the condition that all B-spline coefficients are nonnegative.
https://en.m.wikipedia.org/wiki/B-spline
This preserves the monotonicity of the quantiles in the interpolating function, while normal cubic splines do not.
There is then no need to numerically differentiate the spline; you get the PDF from a normal evaluation of the B-spline. The integral of the B-spline curve is also available without numerical computation. You get it from any serious library that implements splines, for example, in Python's scipy module.
The remaining question is how to find the B-spline coefficients in this case. I don't have a perfect answer right now, although an efficient algorithm should be possible. A simple brute-force way would be to minimize the squared residuals between the integrated spline and the quantiles with a numerical minimizer, while constraining the coefficients to be nonnegative.
I can add a numerical example in a few days if desired. | Find the PDF from quantiles
Your approach is valid, if you use the integral of a cubic B-spline to interpolate the quantiles with the condition that all B-spline coefficients are nonnegative.
https://en.m.wikipedia.org/wiki/B-sp |
55,788 | Prove that $ \mathbb{E}[XY] - \mathbb{E}[X] \mathbb{E}[Y] = \int_{- \infty}^\infty \int_{- \infty}^\infty (F(x,y)-F_X(x) F_Y(y)) \,dx\,dy,$ | For the second equality, think about the integral $$\int_{- \infty}^{\infty} \int_{- \infty}^{\infty} [I(u,X_1)-I(u,X_2)][I(v,Y_1)-I(v,Y_2)]du dv $$ which is a random variable. The structure of the integrand allows the integral to be rewritten as $$ \int_{- \infty}^{\infty} [I(u,X_1)-I(u,X_2)]du \int_{- \infty}^{\infty}[I(v,Y_1)-I(v,Y_2)]dv $$
Consider the integrand involving $u$. Its value is either $1$, $0$ or $-1$ depending on how $u$ relates to $X_1$ and $X_2$. By considering when these three possibilities occur, you should be able to convince yourself that the $u$ integral is just $X_1-X_2$. Similarly for the $v$ integral.
The later section of the proof hinges on evaluating $$\mathbb{E}[[I(u,X_1)-I(u,X_2)][I(v,Y_1)-I(v,Y_2)]]$$ This expectation can be written as a sum of four terms. One such term is $\mathbb{E}[I(u,X_1)I(v,Y_1)]$. Using standard indicator function notation, this is simply $$\mathbb{E}[1_{u\leq X_1}1_{v\leq Y_1}]=P(u\leq X_1,v\leq Y_1)=1-F_Y (v)-F_X(u)+F(u,v)$$ (best seen by drawing a diagram of $u$,$v$ space). Similar calculations with the other terms should give you the desired result.
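To spell out the four-term expansion (added here for completeness): with $I(u,x) = 1_{u \le x}$, continuous distribution functions, and $(X_1, Y_1)$, $(X_2, Y_2)$ independent copies of $(X, Y)$, the two cross terms factor, so $$\mathbb{E}\big[[I(u,X_1)-I(u,X_2)][I(v,Y_1)-I(v,Y_2)]\big] = 2\big(1-F_X(u)-F_Y(v)+F(u,v)\big) - 2\big(1-F_X(u)\big)\big(1-F_Y(v)\big) = 2\big(F(u,v)-F_X(u)F_Y(v)\big).$$ Since $\mathbb{E}[(X_1-X_2)(Y_1-Y_2)] = 2\big(\mathbb{E}[XY]-\mathbb{E}[X]\mathbb{E}[Y]\big)$, integrating over $u$ and $v$ and dividing by $2$ gives the stated identity.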
Note: the final equation assumes that all the distribution functions involved are continuous. The equation may not be true at discontinuity points of $F_Y$, $F_X$. However, the set of such points is countable so the value of the double integral will not be affected. | Prove that $ \mathbb{E}[XY] - \mathbb{E}[X] \mathbb{E}[Y] = \int_{- \infty}^\infty \int_{- \infty}^\ | For the second equality, think about the integral $$\int_{- \infty}^{\infty} \int_{- \infty}^{\infty} [I(u,X_1)-I(u,X_2)][I(v,Y_1)-I(v,Y_2)]du dv $$ which is a random variable. The structure of the in | Prove that $ \mathbb{E}[XY] - \mathbb{E}[X] \mathbb{E}[Y] = \int_{- \infty}^\infty \int_{- \infty}^\infty (F(x,y)-F_X(x) F_Y(y)) \,dx\,dy,$
For the second equality, think about the integral $$\int_{- \infty}^{\infty} \int_{- \infty}^{\infty} [I(u,X_1)-I(u,X_2)][I(v,Y_1)-I(v,Y_2)]du dv $$ which is a random variable. The structure of the integrand allows the integral to be rewritten as $$ \int_{- \infty}^{\infty} [I(u,X_1)-I(u,X_2)]du \int_{- \infty}^{\infty}[I(v,Y_1)-I(v,Y_2)]dv $$
Consider the integrand involving $u$. Its value is either $1$, $0$ or $-1$ depending on how $u$ relates to $X_1$ and $X_2$. By considering when these three possibilities occur, you should be able to convince yourself that the $u$ integral is just $X_1-X_2$. Similarly for the $v$ integral.
The later section of the proof hinges on evaluating $$\mathbb{E}[[I(u,X_1)-I(u,X_2)][I(v,Y_1)-I(v,Y_2)]]$$ This expectation can be written as a sum of four terms. One such term is $\mathbb{E}[I(u,X_1)I(v,Y_1)]$. Using standard indicator function notation, this is simply $$\mathbb{E}[1_{u\leq X_1}1_{v\leq Y_1}]=P(u\leq X_1,v\leq Y_1)=1-F_Y (v)-F_X(u)+F(u,v)$$ (best seen by drawing a diagram of $u$,$v$ space). Similar calculations with the other terms should give you the desired result.
Note: the final equation assumes that all the distribution functions involved are continuous. The equation may not be true at discontinuity points of $F_Y$, $F_X$. However, the set of such points is countable so the value of the double integral will not be affected. | Prove that $ \mathbb{E}[XY] - \mathbb{E}[X] \mathbb{E}[Y] = \int_{- \infty}^\infty \int_{- \infty}^\
For the second equality, think about the integral $$\int_{- \infty}^{\infty} \int_{- \infty}^{\infty} [I(u,X_1)-I(u,X_2)][I(v,Y_1)-I(v,Y_2)]du dv $$ which is a random variable. The structure of the in |
55,789 | Do "splits" in scatterplots indicate anything? | When I see a plot like that, my thinking immediately goes toward sample subgroups. It seems like you have two different sample groups, each with a visually apparent trend line. Are all your samples similar, or are there some categorical differences between them? Suppose one of your divergent graphs represents price vs. capacity of hard drives. The two sample groups might represent different manufacturers, one with cheap products and a low dollar/GB slope, and one with more expensive products and a high dollar/GB slope.
To characterize this further, you can label the samples on one of the divergent lines and plot out those sample labels on the other graphs as well. If you consistently see the same samples falling onto the separate trend lines across several variables, that suggests there's some sort of categorical difference that's driving the behavior across several of your observed variables. | Do "splits" in scatterplots indicate anything? | When I see a plot like that, my thinking immediately goes toward sample subgroups. It seems like you have two different sample groups, each with a visually apparent trend line. Are all your samples si | Do "splits" in scatterplots indicate anything?
When I see a plot like that, my thinking immediately goes toward sample subgroups. It seems like you have two different sample groups, each with a visually apparent trend line. Are all your samples similar, or are there some categorical differences between them? Suppose one of your divergent graphs represents price vs. capacity of hard drives. The two sample groups might represent different manufacturers, one with cheap products and a low dollar/GB slope, and one with more expensive products and a high dollar/GB slope.
To characterize this further, you can label the samples on one of the divergent lines and plot out those sample labels on the other graphs as well. If you consistently see the same samples falling onto the separate trend lines across several variables, that suggests there's some sort of categorical difference that's driving the behavior across several of your observed variables. | Do "splits" in scatterplots indicate anything?
When I see a plot like that, my thinking immediately goes toward sample subgroups. It seems like you have two different sample groups, each with a visually apparent trend line. Are all your samples si |
55,790 | Do "splits" in scatterplots indicate anything? | There is at least one discrete variable, parm3, and it's possible that there are other un-labeled groupings. I'd start by redoing that graphic while labeling the parm3 values with color coding. Then you can quickly see whether one or two colors fall into the "splits" that are apparent.
That appears to be a "pairs" plot. In R you could do something like:
pairs( data , bg= rainbow(10)[parm3], other parameters)
That call to the rainbow function would give you 10 colors across the visible spectrum, and the values of parm3 would get individually "painted".
There is at least one discrete variable, parm3, and it's possible that there are other un-labeled groupings. I'd start by redoing that graphic while labeling the parm3 values with color coding. Then you can quickly see whether one or two colors fall into the "splits" that are apparent.
That appears to be a "pairs" plot. In R you could do something like:
pairs( data , bg= rainbow(10)[parm3], other parameters)
That call to the rainbow function would give you 10 colors across the visible spectrum, and the values of parm3 would get individually "painted".
There is at least one discrete variable, parm3 and it's possible that there are other un-labeled groupings. I'd start by redo that graphic while labeling the parm3 values with color coding. Ten you c |
55,791 | One-sided McNemar's test | I have described the gist of McNemar's test rather extensively here and here, it may help you to read those. Briefly, McNemar's test assesses the balance of the off-diagonal counts. If people were as likely to transition from approval to disapproval as from disapproval to approval, then the off-diagonal values should be approximately the same. The question then is how to test that they are. Assuming a 2x2 table with the cells labeled "a", "b", "c", "d" (from left to right, from top to bottom), the actual test McNemar came up with is:
$$
Q_{\chi^2} = \frac{(b-c)^2}{(b+c)}
$$
The test statistic, which I've called $Q_{\chi^2}$ here, is approximately distributed as $\chi^2_1$, but not quite, especially with smaller counts. The approximation can be improved using a 'continuity correction':
$$
Q_{\chi^2c} = \frac{(|b-c|-1)^2}{(b+c)}
$$
This will work better, and realistically, it should be considered fine, but it can't be quite right. That's because the test statistic will necessarily have a discrete sampling distribution, as counts are necessarily discrete, but the chi-squared distribution is continuous (cf., Comparing and contrasting, p-values, significance levels and type I error).
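As a quick illustration (my addition), both chi-squared forms are easy to compute directly; the counts below are the ones used in the code later in this answer (b = 150, c = 86):
b <- 150; c <- 86
Q <- (b - c)^2 / (b + c)                # McNemar's statistic, no correction
Qc <- (abs(b - c) - 1)^2 / (b + c)      # with the continuity correction
pchisq(Q, df = 1, lower.tail = FALSE)   # two-sided p-value
pchisq(Qc, df = 1, lower.tail = FALSE)  # two-sided p-value, corrected
mcnemar.test() in base R reproduces these with correct = FALSE and correct = TRUE, respectively.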
Presumably, McNemar went with the above version due to the computational limitations of his time. Tables of critical chi-squared values were to be had, but computers weren't. Nonetheless, the actual relationship at issue can be perfectly modeled as a binomial:
$$
Q_b = \frac{b}{b+c}
$$
This can be tested via a two-tailed test, a one-tailed 'greater than' version, or a one-tailed 'less than' version in a very straightforward way. Each of those will be an exact test.
With smaller counts, the two-tailed binomial version and McNemar's version that compares the quotient to a chi-squared distribution will differ slightly. 'At infinity', they should be the same.
The reason R cannot really offer a one-tailed version of the standard implementation of McNemar's test is that by its nature, chi-squared is essentially always a one-tailed test (cf., Is chi-squared always a one-sided test?).
If you really want the one-tailed version, you don't need any special package, it's straightforward to code from scratch:
Performance
# 2nd Survey
# 1st Survey Approve Disapprove
# Approve 794 150
# Disapprove 86 570
pbinom(q=(150-1), size=(86+150), prob=.5, lower.tail=FALSE)
# [1] 1.857968e-05
## or:
binom.test(x=150, n=(86+150), p=0.5, alternative="greater")
# Exact binomial test
#
# data: 150 and (86 + 150)
# number of successes = 150, number of trials = 236, p-value = 1.858e-05
# alternative hypothesis: true probability of success is greater than 0.5
# 95 percent confidence interval:
# 0.5808727 1.0000000
# sample estimates:
# probability of success
# 0.6355932
Edit:
@mkla25 pointed out (now deleted) that the original pbinom() call above was incorrect. (It has now been corrected; see revision history for original.) The binomial CDF is defined as the proportion $≤$ the specified value, so the complement is strictly $>$. To use the binomial CDF directly for a "greater than" test, you need to use $(x−1)$ to include the specified value. (To be explicit: this is not necessary to do for a "less than" test.) A simpler approach that wouldn't require you to remember this nuance would be to use binom.test(), which does that for you. | One-sided McNemar's test | I have described the gist of McNemar's test rather extensively here and here, it may help you to read those. Briefly, McNemar's test assesses the balance of the off-diagonal counts. If people were a | One-sided McNemar's test
I have described the gist of McNemar's test rather extensively here and here, it may help you to read those. Briefly, McNemar's test assesses the balance of the off-diagonal counts. If people were as likely to transition from approval to disapproval as from disapproval to approval, then the off-diagonal values should be approximately the same. The question then is how to test that they are. Assuming a 2x2 table with the cells labeled "a", "b", "c", "d" (from left to right, from top to bottom), the actual test McNemar came up with is:
$$
Q_{\chi^2} = \frac{(b-c)^2}{(b+c)}
$$
The test statistic, which I've called $Q_{\chi^2}$ here, is approximately distributed as $\chi^2_1$, but not quite, especially with smaller counts. The approximation can be improved using a 'continuity correction':
$$
Q_{\chi^2c} = \frac{(|b-c|-1)^2}{(b+c)}
$$
This will work better, and realistically, it should be considered fine, but it can't be quite right. That's because the test statistic will necessarily have a discrete sampling distribution, as counts are necessarily discrete, but the chi-squared distribution is continuous (cf., Comparing and contrasting, p-values, significance levels and type I error).
Presumably, McNemar went with the above version due to the computational limitations of his time. Tables of critical chi-squared values were to be had, but computers weren't. Nonetheless, the actual relationship at issue can be perfectly modeled as a binomial:
$$
Q_b = \frac{b}{b+c}
$$
This can be tested via a two-tailed test, a one-tailed 'greater than' version, or a one-tailed 'less than' version in a very straightforward way. Each of those will be an exact test.
With smaller counts, the two-tailed binomial version and McNemar's version that compares the quotient to a chi-squared distribution will differ slightly. 'At infinity', they should be the same.
The reason R cannot really offer a one-tailed version of the standard implementation of McNemar's test is that by its nature, chi-squared is essentially always a one-tailed test (cf., Is chi-squared always a one-sided test?).
If you really want the one-tailed version, you don't need any special package, it's straightforward to code from scratch:
Performance
# 2nd Survey
# 1st Survey Approve Disapprove
# Approve 794 150
# Disapprove 86 570
pbinom(q=(150-1), size=(86+150), prob=.5, lower.tail=FALSE)
# [1] 1.857968e-05
## or:
binom.test(x=150, n=(86+150), p=0.5, alternative="greater")
# Exact binomial test
#
# data: 150 and (86 + 150)
# number of successes = 150, number of trials = 236, p-value = 1.858e-05
# alternative hypothesis: true probability of success is greater than 0.5
# 95 percent confidence interval:
# 0.5808727 1.0000000
# sample estimates:
# probability of success
# 0.6355932
Edit:
@mkla25 pointed out (now deleted) that the original pbinom() call above was incorrect. (It has now been corrected; see revision history for original.) The binomial CDF is defined as the proportion $≤$ the specified value, so the complement is strictly $>$. To use the binomial CDF directly for a "greater than" test, you need to use $(x−1)$ to include the specified value. (To be explicit: this is not necessary to do for a "less than" test.) A simpler approach that wouldn't require you to remember this nuance would be to use binom.test(), which does that for you. | One-sided McNemar's test
I have described the gist of McNemar's test rather extensively here and here, it may help you to read those. Briefly, McNemar's test assesses the balance of the off-diagonal counts. If people were a |
55,792 | Residual diagnostics for negative binomial regression | Check out the DHARMa package in R. It uses a simulation based approach with quantile residuals to generate the type of residuals you may be interested in. And it works with glm.nb from MASS.
The essential idea is explained here and goes in three steps:
Simulate plausible responses for each case. You can use the distribution of each regression coefficient (coefficient together with standard error) to generate several sets of coefficients. You can multiply each set of coefficients by the observed predictors for each case to obtain multiple simulated response values for each case.
From the multiple response values for each case, generate the empirical cumulative distribution function (cdf).
Find the value of the empirical cdf that corresponds to the observed response for each case. This is your residual, and it ranges from 0 to 1.
If there is no systematic misspecification in your model, all values from the empirical cdf are equally likely. And a histogram of these residuals should be uniformly distributed between 0 and 1.
The package contains additional checks.
Edit:
The steps above are not exactly correct. The biggest difference between my description and what DHARMa does is DHARMa uses the simulate() function in base R, which ignores the variability in the estimated regression coefficients. The Gelman and Hill regression text recommends taking the variability in regression coefficients into account.
A crucial step I forgot to include is: once one has generated the responses, we should place them on the response scale. For example, the predicted variables from logistic regression are logits, so one should place them on the probability scale. Next step would be to randomly generate observed scores using the predicted probabilities. Continuing with the logistic regression example, one can use rbinom() to generate 0-1 outcomes given predicted probabilities.
Additionally, when the responses are integers, such as with binary outcomes or count data models, DHARMa adds random noise to the simulated responses prior to computing the empirical cdf. This step ensures the residuals behave as one would expect in OLS if the model was not misspecified. Without it, you could have a pile-up of residuals at 1 if your outcome is binary.
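A bare-bones sketch of the idea in R (my addition, not the DHARMa source; the helper below only mirrors the logic described above):
set.seed(1)
x <- rnorm(200)
y <- rpois(200, exp(0.5 + 0.8 * x))
fit <- glm(y ~ x, family = poisson)
sims <- as.matrix(simulate(fit, nsim = 250))   # plausible responses for each case
res <- sapply(seq_along(y), function(i) {
  s <- sims[i, ]
  # randomized position of the observed response within its simulated distribution
  (sum(s < y[i]) + runif(1) * (1 + sum(s == y[i]))) / (length(s) + 1)
})
hist(res)                                      # roughly uniform if nothing is misspecified
The packaged equivalent is along the lines of plot(DHARMa::simulateResiduals(fit)), which adds the additional checks mentioned above.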
The code for the simulate residuals function in DHARMa is relatively easy to follow for anyone looking to roll their own implementation. | Residual diagnostics for negative binomial regression | Check out the DHARMa package in R. It uses a simulation based approach with quantile residuals to generate the type of residuals you may be interested in. And it works with glm.nb from MASS.
The essen | Residual diagnostics for negative binomial regression
Check out the DHARMa package in R. It uses a simulation based approach with quantile residuals to generate the type of residuals you may be interested in. And it works with glm.nb from MASS.
The essential idea is explained here and goes in three steps:
Simulate plausible responses for each case. You can use the distribution of each regression coefficient (coefficient together with standard error) to generate several sets of coefficients. You can multiply each set of coefficients by the observed predictors for each case to obtain multiple simulated response values for each case.
From the multiple response values for each case, generate the empirical cumulative distribution function (cdf).
Find the value of the empirical cdf that corresponds to the observed response for each case. This is your residual, and it ranges from 0 to 1.
If there is no systematic misspecification in your model, all values from the empirical cdf are equally likely. And a histogram of these residuals should be uniformly distributed between 0 and 1.
The package contains additional checks.
Edit:
The steps above are not exactly correct. The biggest difference between my description and what DHARMa does is DHARMa uses the simulate() function in base R, which ignores the variability in the estimated regression coefficients. The Gelman and Hill regression text recommends taking the variability in regression coefficients into account.
A crucial step I forgot to include is: once one has generated the responses, we should place them on the response scale. For example, the predicted variables from logistic regression are logits, so one should place them on the probability scale. Next step would be to randomly generate observed scores using the predicted probabilities. Continuing with the logistic regression example, one can use rbinom() to generate 0-1 outcomes given predicted probabilities.
Additionally, when the responses are integers, such as with binary outcomes or count data models, DHARMa adds random noise to the simulated responses prior to computing the empirical cdf. This step ensures the residuals behave as one would expect in OLS if the model was not misspecified. Without it, you could have a pile-up of residuals at 1 if your outcome is binary.
The code for the simulate residuals function in DHARMa is relatively easy to follow for anyone looking to roll their own implementation. | Residual diagnostics for negative binomial regression
Check out the DHARMa package in R. It uses a simulation based approach with quantile residuals to generate the type of residuals you may be interested in. And it works with glm.nb from MASS.
The essen |
55,793 | Why doesn't the TOST equivalence testing procedure use non-central $t$ distribution to determine the $p$ value? | You are correct, in the sense that the uniformly most powerful (UMP) test for equivalence for the one-sample, paired, and two-sample $T$-test does involve a non-central $t$-distribution.
The two one-sided tests (TOST) procedure with both rejection regions of size $\alpha$, or equivalently the $1 - 2 \alpha$ confidence-interval inclusion procedure, has size $\alpha$ when the null hypothesis is true. However, it is not as powerful as the UMP test for small sample sizes. One can show that, as the sample size grows, the TOST procedure is asymptotically equivalent to the UMP test. But a great deal of power can be lost for small sample sizes.
This code simulates from a standard Normal distribution and performs the test for equivalence in the one-sample setting using both the TOST test (via Daniel Lakens TOSTER package) and the UMP test, where the P-value for the UMP test is given by
$$P(|T| < |t_{\text{obs}}|; \texttt{df} = n - 1, \texttt{ncp} = \sqrt{n}\cdot\delta/\sigma)$$
where $T$ is a non-central $t$-random variable and $t_{\text{obs}} = \frac{\bar{x}}{s/\sqrt{n}}$.
n <- 10
alpha <- 0.05
theta <- 0.5 # delta / sigma
ncp <- theta*sqrt(n) # delta/sigma*sqrt(n)
# Simulate 10000 samples, and compute power.
S <- 10000
# theta.true <- theta # Null is true
theta.true <- 0 # Alternative is true
Pvals.tost <- rep(NA, S)
Pvals.ump <- rep(NA, S)
for (s in 1:S){
Xs <- rnorm(n, mean = theta.true)
xbar <- mean(Xs)
tobs <- xbar/(sd(Xs)/sqrt(n))
Pvals.ump[s] <- pt(abs(tobs), df = n-1, ncp = ncp) - pt(-abs(tobs), df = n-1, ncp = ncp)
out <- TOSTER::TOSTone(m = xbar,
mu = 0,
sd = 1,
n = n,
low_eqbound_d = -theta,
high_eqbound_d = theta,
alpha = 0.05,
plot = FALSE,
verbose = FALSE)
Pvals.tost[s] <- max(out$TOST_p1, out$TOST_p2)
}
mean(Pvals.tost <= 0.05)
mean(Pvals.ump <= 0.05)
With $n = 10$, the TOST procedure has 0 power to detect the discrepancy from the null, while the UMP test has power ~ 0.17 to detect it.
Reference: Stefan Wellek, Testing Statistical Hypotheses of Equivalence and Noninferiority, pages 64 and 92. | Why doesn't the TOST equivalence testing procedure use non-central $t$ distribution to determine the | You are correct, in the sense that the uniformly most powerful (UMP) test for equivalence for the one-sample, paired, and two-sample $T$-test does involve a non-central $t$-distribution.
The two one-s | Why doesn't the TOST equivalence testing procedure use non-central $t$ distribution to determine the $p$ value?
You are correct, in the sense that the uniformly most powerful (UMP) test for equivalence for the one-sample, paired, and two-sample $T$-test does involve a non-central $t$-distribution.
The two one-sided tests (TOST) procedure with both rejection regions with size $\alpha$, or equivalently the $1 - 2 \alpha$ confidence-interval inclusion procedure, has size $\alpha$ when the null hypothesis is true. However, it is not as powerful as the UMP test for small sample sizes. One can show that, as the sample size grows, the TOST procedure is asymptotically equivalent to the UMP test. But a great deal of power can be lost for small sample sizes.
This code simulates from a standard Normal distribution and performs the test for equivalence in the one-sample setting using both the TOST test (via Daniel Lakens TOSTER package) and the UMP test, where the P-value for the UMP test is given by
$$P(|T| < |t_{\text{obs}}|; \texttt{df} = n - 1, \texttt{ncp} = \sqrt{n}\cdot\delta/\sigma)$$
where $T$ is a non-central $t$-random variable and $t_{\text{obs}} = \frac{\bar{x}}{s/\sqrt{n}}$.
n <- 10
alpha <- 0.05
theta <- 0.5 # delta / sigma
ncp <- theta*sqrt(n) # delta/sigma*sqrt(n)
# Simulate 10000 samples, and compute power.
S <- 10000
# theta.true <- theta # Null is true
theta.true <- 0 # Alternative is true
Pvals.tost <- rep(NA, S)
Pvals.ump <- rep(NA, S)
for (s in 1:S){
Xs <- rnorm(n, mean = theta.true)
xbar <- mean(Xs)
tobs <- xbar/(sd(Xs)/sqrt(n))
Pvals.ump[s] <- pt(abs(tobs), df = n-1, ncp = ncp) - pt(-abs(tobs), df = n-1, ncp = ncp)
out <- TOSTER::TOSTone(m = xbar,
mu = 0,
sd = 1,
n = n,
low_eqbound_d = -theta,
high_eqbound_d = theta,
alpha = 0.05,
plot = FALSE,
verbose = FALSE)
Pvals.tost[s] <- max(out$TOST_p1, out$TOST_p2)
}
mean(Pvals.tost <= 0.05)
mean(Pvals.ump <= 0.05)
With $n = 10$, the TOST procedure has 0 power to detect the discrepancy from the null, while the UMP test has power ~ 0.17 to detect it.
Reference: Stefan Wellek, Testing Statistical Hypotheses of Equivalence and Noninferiority, pages 64 and 92. | Why doesn't the TOST equivalence testing procedure use non-central $t$ distribution to determine the
You are correct, in the sense that the uniformly most powerful (UMP) test for equivalence for the one-sample, paired, and two-sample $T$-test does involve a non-central $t$-distribution.
The two one-s |
55,794 | Why doesn't the TOST equivalence testing procedure use non-central $t$ distribution to determine the $p$ value? | What you are proposing does not appear to be any different from what actually occurs in this test. Remember that in a classical hypothesis test, the p-value is calculated using the null distribution of the test statistic. This calculation assumes that the null hypothesis is true. You suggest that the calculation of the p-value should use the non-central T-distribution with non-centrality parameter $\Delta$. However, the null hypothesis for this test is that $\Delta = 0$, so under that condition the non-central T-distribution simplifies down to the standard (centralised) T-distribution. | Why doesn't the TOST equivalence testing procedure use non-central $t$ distribution to determine the | What you are proposing does not appear to be any different from what actually occurs in this test. Remember that in a classical hypothesis test, the p-value is calculated using the null distribution | Why doesn't the TOST equivalence testing procedure use non-central $t$ distribution to determine the $p$ value?
What you are proposing does not appear to be any different from what actually occurs in this test. Remember that in a classical hypothesis test, the p-value is calculated using the null distribution of the test statistic. This calculation assumes that the null hypothesis is true. You suggest that the calculation of the p-value should use the non-central T-distribution with non-centrality parameter $\Delta$. However, the null hypothesis for this test is that $\Delta = 0$, so under that condition the non-central T-distribution simplifies down to the standard (centralised) T-distribution. | Why doesn't the TOST equivalence testing procedure use non-central $t$ distribution to determine the
What you are proposing does not appear to be any different from what actually occurs in this test. Remember that in a classical hypothesis test, the p-value is calculated using the null distribution |
55,795 | Why doesn't the TOST equivalence testing procedure use non-central $t$ distribution to determine the $p$ value? | I want to expand upon / comment on the other answers.
First, note that if you are using the noncentral t-distribution, you need to specify your equivalence bounds as a standardized mean difference $\delta$, since the noncentrality parameter will be $\delta\sqrt{n}$. If you are using TOST (i.e. central t-distribution), you need to specify it on the raw scale. In either case, if the equivalence bound is not on the right scale, you'll have to estimate it using the sample standard deviation, which will affect the error rates of your test.
Second, I do not follow David's point that using the noncentral t-distribution yields more power than TOST. It seems to me that (for a given sample size) this depends on the equivalence bounds (which the shape of the noncentral t depends on).
Here is the result of a simulation:
x-axis shows the population effect size, y-axis is power
solid curves are for TOST, dashed curves for noncentral t
the colors correspond to different equivalence bounds of 0.5 (navy), 1 (purple), 1.5 (red), 2 (orange)
vertical dotted lines show the corresponding equivalence bounds
horizontal dotted line is at $\alpha=0.05$
note that you can see David's result in the plot: the dashed navy line crosses the point (0,0.17) and the solid navy line (0,0)
In this simulation, equivalence bounds are specified as a standardized mean difference. For the TOST, the raw bounds are estimated using the sample SD. Hence, the max false positive error rate is not actually equal to $\alpha$ (solid curves do not necessarily go through where the dotted lines cross). However, if I didn't make an error somewhere, power for the noncentral t is not necessarily higher than for TOST.
Code:
# sample size
n <- 10
# point null on raw scale
mu0 <- 0
# population sd
sigma <- 1.5
# equivalence bounds (magnitudes)
delta_eq <- c(0.5, 1, 1.5, 2)
# population ES = (mu-mu0)/sigma
delta <- seq(-max(delta_eq)*1.1, max(delta_eq)*1.1, length.out=200)
colors <- c("navy", "purple", "red", "orange")
par(mfrow=c(1,1))
nreps <- 10000
power.ump <- matrix(nrow = length(delta), ncol = length(delta_eq))
power.tost <- matrix(nrow = length(delta), ncol = length(delta_eq))
for (i in seq_along(delta)) {
# population mean
mu <- mu0 + delta[i]*sigma
# each col holds p-values for a given delta_eq
p.tost <- matrix(nrow = nreps, ncol = length(delta_eq))
p.ump <- matrix(nrow = nreps, ncol = length(delta_eq))
for (j in 1:nreps) {
# sample
x <- rnorm(n, mu, sigma)
m <- mean(x)
s <- sd(x)
# UMP
# non-centrality parameters (note: do not depend on s)
ncp.lower <- -delta_eq * sqrt(n)
ncp.upper <- +delta_eq * sqrt(n)
# observed t-statistic (two-sided test)
t <- abs((m-mu0) * sqrt(n) / s)
# p-value UMP
p.ump.lower <- pt(-t, n-1, ncp.lower, lower.tail=FALSE) - pt(t, n-1, ncp.lower, lower.tail=FALSE)
p.ump.upper <- pt(+t, n-1, ncp.upper, lower.tail=TRUE) - pt(-t, n-1, ncp.upper, lower.tail=TRUE)
p.ump[j, ] <- pmax(p.ump.lower, p.ump.upper)
# TOST
# lower and upper equivalence bounds TOST (raw scale, estimated using s)
mu0.tost.lower <- mu0 - delta_eq * s
mu0.tost.upper <- mu0 + delta_eq * s
# observed t-statistic TOST
t.tost.lower <- (m - mu0.tost.lower) * sqrt(n) / s
t.tost.upper <- (m - mu0.tost.upper) * sqrt(n) / s
# p-value tost
p.tost.lower <- pt(t.tost.lower, n-1, ncp=0, lower.tail=FALSE)
p.tost.upper <- pt(t.tost.upper, n-1, ncp=0, lower.tail=TRUE)
p.tost[j, ] <- pmax(p.tost.lower, p.tost.upper)
}
power.ump[i, ] <- colMeans(p.ump < 0.05)
power.tost[i, ] <- colMeans(p.tost < 0.05)
}
plot(0, 0, type="n", xlim=range(delta), ylim=c(0,1), xlab=expression(delta), ylab="Power")
for (i in seq_along(delta_eq)) {
lines(delta, power.tost[ ,i], lty="solid", col=colors[i])
lines(delta, power.ump[ ,i], lty="dashed", col=colors[i])
abline(v = -delta_eq[i], col=colors[i], lty="dotted")
abline(v = delta_eq[i], col=colors[i], lty="dotted")
}
abline(h=0.05, lty="dotted") | Why doesn't the TOST equivalence testing procedure use non-central $t$ distribution to determine the | I want to expand upon / comment on the other answers.
First, note that if you are using the noncentral t-distribution, you need to specify your equivalence bounds as a standardized mean difference $\d | Why doesn't the TOST equivalence testing procedure use non-central $t$ distribution to determine the $p$ value?
I want to expand upon / comment on the other answers.
First, note that if you are using the noncentral t-distribution, you need to specify your equivalence bounds as a standardized mean difference $\delta$, since the noncentrality parameter will be $\delta\sqrt{n}$. If you are using TOST (i.e. central t-distribution), you need to specify it on the raw scale. In either case, if the equivalence bound is not on the right scale, you'll have to estimate it using the sample standard deviation, which will affect the error rates of your test.
Second, I do not follow David how using the noncentral t-distribution yields more power than TOST. It seems to me that (for a given sample size), this depends on the equivalence bounds (which the shape of the noncentral t depends on).
Here is the result of a simulation:
x-axis shows the population effect size, y-axis is power
solid curves are for TOST, dashed curves for noncentral t
the colors correspond to different equivalence bounds of 0.5 (navy), 1 (purple), 1.5 (red), 2 (orange)
vertical dotted lines show the corresponding equivalence bounds
horizontal dotted line is at $\alpha=0.05$
note that you can see David's result in the plot: the dashed navy line crosses the point (0,0.17) and the solid navy line (0,0)
In this simulation, equivalence bounds are specified as a standardized mean difference. For the TOST, the raw bounds are estimated using the sample SD. Hence, the max false positive error rate is not actually equal to $\alpha$ (solid curves do not necessarily go through where the dotted lines cross). However, if I didn't make an error somewhere, power for the noncentral t is not necessarily higher than for TOST.
Code:
# sample size
n <- 10
# point null on raw scale
mu0 <- 0
# population sd
sigma <- 1.5
# equivalence bounds (magnitudes)
delta_eq <- c(0.5, 1, 1.5, 2)
# population ES = (mu-mu0)/sigma
delta <- seq(-max(delta_eq)*1.1, max(delta_eq)*1.1, length.out=200)
colors <- c("navy", "purple", "red", "orange")
par(mfrow=c(1,1))
nreps <- 10000
power.ump <- matrix(nrow = length(delta), ncol = length(delta_eq))
power.tost <- matrix(nrow = length(delta), ncol = length(delta_eq))
for (i in seq_along(delta)) {
# population mean
mu <- mu0 + delta[i]*sigma
# each col hold p-values for a given delta_eq
p.tost <- matrix(nrow = nreps, ncol = length(delta_eq))
p.ump <- matrix(nrow = nreps, ncol = length(delta_eq))
for (j in 1:nreps) {
# sample
x <- rnorm(n, mu, sigma)
m <- mean(x)
s <- sd(x)
# UMP
# non-centrality parameters (note: do not depend on s)
ncp.lower <- -delta_eq * sqrt(n)
ncp.upper <- +delta_eq * sqrt(n)
# observed t-statistic (two-sided test)
t <- abs((m-mu0) * sqrt(n) / s)
# p-value UMP
p.ump.lower <- pt(-t, n-1, ncp.lower, lower.tail=FALSE) - pt(t, n-1, ncp.lower, lower.tail=FALSE)
p.ump.upper <- pt(+t, n-1, ncp.upper, lower.tail=TRUE) - pt(-t, n-1, ncp.upper, lower.tail=TRUE)
p.ump[j, ] <- pmax(p.ump.lower, p.ump.upper)
# TOST
# lower and upper equivalence bounds TOST (raw scale, estimated using s)
mu0.tost.lower <- mu0 - delta_eq * s
mu0.tost.upper <- mu0 + delta_eq * s
# observed t-statistic TOST
t.tost.lower <- (m - mu0.tost.lower) * sqrt(n) / s
t.tost.upper <- (m - mu0.tost.upper) * sqrt(n) / s
# p-value tost
p.tost.lower <- pt(t.tost.lower, n-1, ncp=0, lower.tail=FALSE)
p.tost.upper <- pt(t.tost.upper, n-1, ncp=0, lower.tail=TRUE)
p.tost[j, ] <- pmax(p.tost.lower, p.tost.upper)
}
power.ump[i, ] <- colMeans(p.ump < 0.05)
power.tost[i, ] <- colMeans(p.tost < 0.05)
}
plot(0, 0, type="n", xlim=range(delta), ylim=c(0,1), xlab=expression(delta), ylab="Power")
for (i in seq_along(delta_eq)) {
lines(delta, power.tost[ ,i], lty="solid", col=colors[i])
lines(delta, power.ump[ ,i], lty="dashed", col=colors[i])
abline(v = -delta_eq[i], col=colors[i], lty="dotted")
abline(v = delta_eq[i], col=colors[i], lty="dotted")
}
abline(h=0.05, lty="dotted") | Why doesn't the TOST equivalence testing procedure use non-central $t$ distribution to determine the
I want to expand upon / comment on the other answers.
First, note that if you are using the noncentral t-distribution, you need to specify your equivalence bounds as a standardized mean difference $\d |
55,796 | Is assuming a binomial distribution appropriate when the number of possible successes is fixed? | In general, it's not appropriate.
The remaining question is whether the practical impact in your application is relevant to your use case. To assess this, you could for example perturb the applications in the training set (let people apply in different groups than they did originally) and check how that impacts your test error. Depending on the impact you can decide if that is acceptable.
If it's not acceptable you should probably restructure the problem. | Is assuming a binomial distribution appropriate when the number of possible successes is fixed? | In general, it's not appropriate.
The remaining question is, wether the practical impact in your application is relevant to your usecase. To assess this, you could for example perturbate the applicati | Is assuming a binomial distribution appropriate when the number of possible successes is fixed?
In general, it's not appropriate.
The remaining question is whether the practical impact in your application is relevant to your use case. To assess this, you could for example perturb the applications in the training set (let people apply in different groups than they did originally) and check how that impacts your test error. Depending on the impact you can decide if that is acceptable.
If it's not acceptable you should probably restructure the problem. | Is assuming a binomial distribution appropriate when the number of possible successes is fixed?
In general, it's not appropriate.
The remaining question is, wether the practical impact in your application is relevant to your usecase. To assess this, you could for example perturbate the applicati |
55,797 | Is assuming a binomial distribution appropriate when the number of possible successes is fixed? | As mentioned by the existing answer, it is in principle not appropriate to use logistic regression in this kind of situation. It could be that you would still get decent results, but I would go for a different approach.
I assume you will find an extensive literature on this topic if you look into statistical models for university/college admission.
One idea would be the following: Assume that there is some kind of implicit "qualification score" that is formed by those selecting people. Let's assume that it's a continuous number (e.g. for a university entrance some kind of academic aptitude score plus bonus points for extracuricular activities) and the score everyone gets is unchanged depending on who else is applying (i.e. this would assume that there's no attempt to get the number of people with certain backgrounds balanced in any other way than giving some groups bonus points on the overall score). In that case you could look at this as observing that the successful applicants had higher qualification scores than those that were not successful. However, you do not know how the successful and unsuccessful ones were ordered - although you might learn a little bit about this if e.g. some successful applicants do accept the offer and the next most suitable ones get an offer. | Is assuming a binomial distribution appropriate when the number of possible successes is fixed? | As mentioned by the existing answer, it is in princple not appropriate to use logistic regression in this kind of situation. It could be that you would still get decent results, but I would go for a d | Is assuming a binomial distribution appropriate when the number of possible successes is fixed?
As mentioned by the existing answer, it is in principle not appropriate to use logistic regression in this kind of situation. It could be that you would still get decent results, but I would go for a different approach.
I assume you will find an extensive literature on this topic, if you look into statistical models for university/college admission.
One idea would be the following: Assume that there is some kind of implicit "qualification score" that is formed by those selecting people. Let's assume that it's a continuous number (e.g. for a university entrance, some kind of academic aptitude score plus bonus points for extracurricular activities) and that the score each applicant gets does not depend on who else is applying (i.e. this would assume that there's no attempt to balance the number of people with certain backgrounds in any other way than giving some groups bonus points on the overall score). In that case you could look at this as observing that the successful applicants had higher qualification scores than those that were not successful. However, you do not know how the successful and unsuccessful ones were ordered - although you might learn a little bit about this if e.g. some successful applicants do not accept the offer and the next most suitable ones get an offer. | Is assuming a binomial distribution appropriate when the number of possible successes is fixed?
As mentioned by the existing answer, it is in principle not appropriate to use logistic regression in this kind of situation. It could be that you would still get decent results, but I would go for a d |
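To make the latent "qualification score" idea concrete, here is a hedged toy simulation (all quantities invented for illustration): within each applicant pool, exactly k people are admitted, namely the k with the highest latent score, which is exactly why the per-applicant outcomes are not independent Bernoulli draws:

```python
# Toy simulation of the latent qualification-score idea: within each pool,
# exactly k applicants are admitted -- the k with the highest latent score.
import numpy as np

rng = np.random.default_rng(1)
n_pools, pool_size, k_admitted = 200, 30, 5

# Latent score = observed covariate effect + noise (all made up).
covariate = rng.normal(size=(n_pools, pool_size))
latent_score = 1.5 * covariate + rng.normal(size=(n_pools, pool_size))

# Rank within each pool (0 = lowest score) and admit the top k.
ranks = latent_score.argsort(axis=1).argsort(axis=1)
admitted = (ranks >= pool_size - k_admitted).astype(int)

# The number of successes per pool is fixed by construction, unlike a
# binomial model in which it would be random.
print(admitted.sum(axis=1)[:10])   # always k_admitted
print(admitted.mean())             # overall admission rate = k / pool_size
```

Because each row sum is fixed at k, the outcomes within a pool are negatively dependent rather than independent, which is the core objection raised in these answers.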
55,798 | Why a large gamma in the RBF kernel of SVM leads to a wiggly decision boundary and causes over-fitting? | Using a kernelized SVM is equivalent to mapping the data into feature space, then using a linear SVM in feature space. The feature space mapping is defined implicitly by the kernel function, which computes the inner product between data points in feature space. That is:
$$\kappa(x_i, x_j) = \langle \Phi(x_i), \Phi(x_j) \rangle$$
where $\kappa$ is the kernel function, $x_i$ and $x_j$ are data points, and $\Phi$ is the feature space mapping. The RBF kernel maps points nonlinearly into an infinite dimensional feature space.
Larger RBF kernel bandwidths (i.e. smaller $\gamma$) produce smoother decision boundaries because they produce smoother feature space mappings. Forgetting about RBF kernels for the moment, here's a cartoon showing why smoother mappings produce simpler decision boundaries:
In this example, one-dimensional data points are mapped nonlinearly into a higher dimensional (2d) feature space, and a linear classifier is fit in feature space. The decision boundary in feature space is a plane, but is nonlinear when viewed in the original input space. When the feature space mapping is less smooth, the data can 'poke through' the plane in feature space in more complicated ways, yielding more intricate decision boundaries in input space. | Why a large gamma in the RBF kernel of SVM leads to a wiggly decision boundary and causes over-fitti | Using a kernelized SVM is equivalent to mapping the data into feature space, then using a linear SVM in feature space. The feature space mapping is defined implicitly by the kernel function, which com | Why a large gamma in the RBF kernel of SVM leads to a wiggly decision boundary and causes over-fitting?
Using a kernelized SVM is equivalent to mapping the data into feature space, then using a linear SVM in feature space. The feature space mapping is defined implicitly by the kernel function, which computes the inner product between data points in feature space. That is:
$$\kappa(x_i, x_j) = \langle \Phi(x_i), \Phi(x_j) \rangle$$
where $\kappa$ is the kernel function, $x_i$ and $x_j$ are data points, and $\Phi$ is the feature space mapping. The RBF kernel maps points nonlinearly into an infinite dimensional feature space.
Larger RBF kernel bandwidths (i.e. smaller $\gamma$) produce smoother decision boundaries because they produce smoother feature space mappings. Forgetting about RBF kernels for the moment, here's a cartoon showing why smoother mappings produce simpler decision boundaries:
In this example, one-dimensional data points are mapped nonlinearly into a higher dimensional (2d) feature space, and a linear classifier is fit in feature space. The decision boundary in feature space is a plane, but is nonlinear when viewed in the original input space. When the feature space mapping is less smooth, the data can 'poke through' the plane in feature space in more complicated ways, yielding more intricate decision boundaries in input space. | Why a large gamma in the RBF kernel of SVM leads to a wiggly decision boundary and causes over-fitti
Using a kernelized SVM is equivalent to mapping the data into feature space, then using a linear SVM in feature space. The feature space mapping is defined implicitly by the kernel function, which com |
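As a hedged, minimal illustration of the effect described in this answer, the sketch below fits scikit-learn's `SVC` with an RBF kernel for several gamma values on a toy two-moons dataset (the dataset and hyperparameter values are arbitrary choices, not part of the original answer). In scikit-learn's parameterization the RBF kernel is $\kappa(x_i, x_j) = \exp(-\gamma \lVert x_i - x_j \rVert^2)$, so a larger gamma means a smaller bandwidth and a less smooth feature-space mapping:

```python
# Sketch: the train/test accuracy gap of an RBF-kernel SVM as gamma grows.
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_moons(n_samples=400, noise=0.3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for gamma in [0.01, 0.1, 1, 10, 100, 1000]:
    clf = SVC(kernel="rbf", C=1.0, gamma=gamma).fit(X_train, y_train)
    print(f"gamma={gamma:>7}: "
          f"train acc={clf.score(X_train, y_train):.2f}, "
          f"test acc={clf.score(X_test, y_test):.2f}, "
          f"support vectors={len(clf.support_vectors_)}")
```

Typically the training accuracy climbs toward 1 while the test accuracy degrades at very large gamma, which is the wiggly, overfit regime the answer describes.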
55,799 | Why a large gamma in the RBF kernel of SVM leads to a wiggly decision boundary and causes over-fitting? | Varying the gamma value changes how the squared distance between any two observations is weighted. In radial kernel functions, nearer observations have more effect on a test observation. So upon increasing the gamma value, even moderately distant observations are effectively treated as far away. We are then neglecting the effects of most of the observations and relying on only a few nearby points, so we tend to get a more flexible fit. This is similar to the K hyperparameter in KNN: a larger value of gamma is analogous to a lower value of K. | Why a large gamma in the RBF kernel of SVM leads to a wiggly decision boundary and causes over-fitti | Varying the gamma value changes how the squared distance between any two observations is weighted. In radial kernel functions, nearer observations have more effect on a test observation. So upon | Why a large gamma in the RBF kernel of SVM leads to a wiggly decision boundary and causes over-fitting?
Varying the gamma value changes how the squared distance between any two observations is weighted. In radial kernel functions, nearer observations have more effect on a test observation. So upon increasing the gamma value, even moderately distant observations are effectively treated as far away. We are then neglecting the effects of most of the observations and relying on only a few nearby points, so we tend to get a more flexible fit. This is similar to the K hyperparameter in KNN: a larger value of gamma is analogous to a lower value of K. | Why a large gamma in the RBF kernel of SVM leads to a wiggly decision boundary and causes over-fitti
Varying the gamma value changes how the squared distance between any two observations is weighted. In radial kernel functions, nearer observations have more effect on a test observation. So upon |
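A hedged sketch of the KNN analogy drawn in this answer, on the same kind of toy data as in the previous example (the exact numbers are not meaningful, only the trend): a small K in KNN and a large gamma in an RBF SVM both rely on only a few nearby points, so both tend toward flexible, overfit-prone boundaries:

```python
# Sketch of the analogy: small K in KNN behaves like large gamma in an RBF SVM.
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

X, y = make_moons(n_samples=400, noise=0.3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for k in [1, 5, 25, 100]:
    knn = KNeighborsClassifier(n_neighbors=k).fit(X_train, y_train)
    print(f"KNN k={k:>3}:       train={knn.score(X_train, y_train):.2f}, "
          f"test={knn.score(X_test, y_test):.2f}")

for gamma in [100, 10, 1, 0.1]:
    svm = SVC(kernel="rbf", gamma=gamma).fit(X_train, y_train)
    print(f"SVM gamma={gamma:>5}: train={svm.score(X_train, y_train):.2f}, "
          f"test={svm.score(X_test, y_test):.2f}")
```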
55,800 | Is there a specific name for this plot? | This is ultimately just a scatterplot. I don't think there is a special name for it. I don't see this as meaningfully related to correspondence analysis except in that you can make a scatterplot of the results from a correspondence analysis, and this is also a scatterplot. Notably, this does not have much to do with a biplot.
I do see a couple features here that are somewhat different from the way scatterplots are most commonly used. First, in scientific contexts, scatterplots are typically used to help us think about relationships between two variables. In your case, you seem to be more interested in thinking about the individual observations and how they rank (in a multidimensional sense) relative to each other and the joint distribution. Hence, you have your points prominently labeled. That is, given a default data matrix with variables in columns and observations in rows, usually a scatterplot is made to help you think about the columns; here you seem to be making the plot to help you think about the rows.
Second, there is a subtle distinction in statistics between correlation and agreement (cf., Does Spearman's $r=0.38$ indicate agreement?). Most commonly people look at scatterplots from a correlation-ish frame of mind; your scatterplot seems to connote an agreement-ish perspective. For example, your two variables are naturally on the same scale and you have a prominent diagonal line marking perfect agreement. Your "slope plot", which you think of as analogous, is also presenting a kind of agreement-ish information (i.e., the stability of the rankings over time, coupled with the consistency of the increase).
Under the assumption that this is what you hope to glean from your visualization, the current plot is not optimal. It is harder by nature to compare positions to a diagonal—the human visual system just isn't designed for that. Instead, it would be better to 'turn' the plot so that points can be compared to a horizontal line (and also possibly their right to left horizontal position). That is what Nolan did in another plot you find similar. This line of reasoning leads us to a different plot, which, while also a type of scatterplot, does have a special name, viz., a Bland-Altman plot (also called 'Tukey's mean-difference plot'; see Wikipedia and Creating and interpreting Bland-Altman plot). You can still label your points in a BA plot. In that version of your plot, the vertical position tells you if its use is dominated by vue (above the midline) or react (below). The left to right position will provide a sense of how commonly that technology is used overall. Furthermore, if you wondered about the explicit agreement attributes of vue and react as variables (e.g., is there a bias towards one or the other), those could be incorporated naturally. | Is there a specific name for this plot? | This is ultimately just a scatterplot. I don't think there is a special name for it. I don't see this as meaningfully related to correspondence analysis except in that you can make a scatterplot of | Is there a specific name for this plot?
This is ultimately just a scatterplot. I don't think there is a special name for it. I don't see this as meaningfully related to correspondence analysis except in that you can make a scatterplot of the results from a correspondence analysis, and this is also a scatterplot. Notably, this does not have much to do with a biplot.
I do see a couple features here that are somewhat different from the way scatterplots are most commonly used. First, in scientific contexts, scatterplots are typically used to help us think about relationships between two variables. In your case, you seem to be more interested in thinking about the individual observations and how they rank (in a multidimensional sense) relative to each other and the joint distribution. Hence, you have your points prominently labeled. That is, given a default data matrix with variables in columns and observations in rows, usually a scatterplot is made to help you think about the columns; here you seem to be making the plot to help you think about the rows.
Second, there is a subtle distinction in statistics between correlation and agreement (cf., Does Spearman's $r=0.38$ indicate agreement?). Most commonly people look at scatterplots from a correlation-ish frame of mind; your scatterplot seems to connote an agreement-ish perspective. For example, your two variables are naturally on the same scale and you have a prominent diagonal line marking perfect agreement. Your "slope plot", which you think of as analogous, is also presenting a kind of agreement-ish information (i.e., the stability of the rankings over time, coupled with the consistency of the increase).
Under the assumption that this is what you hope to glean from your visualization, the current plot is not optimal. It is harder by nature to compare positions to a diagonal—the human visual system just isn't designed for that. Instead, it would be better to 'turn' the plot so that points can be compared to a horizontal line (and also possibly their right to left horizontal position). That is what Nolan did in another plot you find similar. This line of reasoning leads us to a different plot, which, while also a type of scatterplot, does have a special name, viz., a Bland-Altman plot (also called 'Tukey's mean-difference plot'; see Wikipedia and Creating and interpreting Bland-Altman plot). You can still label your points in a BA plot. In that version of your plot, the vertical position tells you if its use is dominated by vue (above the midline) or react (below). The left to right position will provide a sense of how commonly that technology is used overall. Furthermore, if you wondered about the explicit agreement attributes of vue and react as variables (e.g., is there a bias towards one or the other), those could be incorporated naturally. | Is there a specific name for this plot?
This is ultimately just a scatterplot. I don't think there is a special name for it. I don't see this as meaningfully related to correspondence analysis except in that you can make a scatterplot of |
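As a minimal sketch of the Bland-Altman (Tukey mean-difference) version of the plot suggested above, assuming matplotlib is available; the labels and usage shares below are invented placeholders, not data from the original question:

```python
# Sketch of a Bland-Altman / Tukey mean-difference plot with labelled points.
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical usage shares (percent) for a handful of technologies.
labels = ["tech A", "tech B", "tech C", "tech D", "tech E"]
vue    = np.array([12.0, 35.0, 8.0, 50.0, 22.0])
react  = np.array([18.0, 30.0, 9.0, 61.0, 20.0])

mean = (vue + react) / 2    # x-axis: overall usage level
diff = vue - react          # y-axis: which framework dominates

fig, ax = plt.subplots()
ax.scatter(mean, diff)
for x, y, lab in zip(mean, diff, labels):
    ax.annotate(lab, (x, y), textcoords="offset points", xytext=(4, 4))

ax.axhline(0, linestyle="--")           # perfect agreement (vue == react)
ax.axhline(diff.mean(), color="gray")   # average bias toward one framework
ax.set_xlabel("mean of vue and react usage (%)")
ax.set_ylabel("vue minus react usage (%)")
plt.show()
```

Points above the dashed zero line are dominated by vue, points below by react, and the horizontal position gives the overall usage level, matching the reading described in the answer.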