idx | question | answer
---|---|---
49,801 | Taxonomy/overview of machine learning techniques [duplicate] | You can find a very good taxonomy of the most important ML methods in the table of contents of the book Machine Learning: a Probabilistic Perspective by Kevin Patrick Murphy.
Given your background in statistics, I'm pretty confident that you will find the book a valuable resource. It has both introductory descriptions and in-depth explanations of almost every kind of ML method.
49,802 | Taxonomy/overview of machine learning techniques [duplicate] | Tree-based methods
A group of regression and classification methods built around decision trees. In a decision tree, data is recursively partitioned based on its predictors, and new predictions are generated by averaging data points at the relevant tips ('leaves') of each tree. Weaknesses of standard decision trees (such as overfitting) have been substantially overcome by bagging (in random forests) and boosting (in gradient boosting machines).
Includes: CART, C4.5, random forests, gradient boosting trees
Advantages: Flexible and relatively easy to interpret (importance scores, partial effects)
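For illustration, a minimal R sketch (not part of the original answer; it assumes the randomForest package and the built-in iris data) showing importance scores and a partial effect:
library(randomForest)
set.seed(1)
fit <- randomForest(Species ~ ., data = iris, importance = TRUE)
importance(fit)                         # per-predictor importance scores
partialPlot(fit, iris, Petal.Length)    # partial effect of one predictor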
Support vector machines (SVMs)
Originally an algorithm for binary classification that identifies the hyperplane best separating two groups of data points. Subsequent extensions include multiclass SVMs (which work by reducing multiple class problems to a series of 2-class problems) and support vector regression (which uses the hyperplane to predict continuous values)
Advantages: Effective in high-dimensional settings; the kernel trick allows flexible non-linear decision boundaries; the fitted solution depends only on the support vectors
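A minimal sketch of a binary SVM in R (again not from the original answer; it assumes the e1071 package):
library(e1071)
set.seed(1)
d <- subset(iris, Species != "setosa")       # keep two classes for a binary problem
d$Species <- droplevels(d$Species)
fit <- svm(Species ~ ., data = d, kernel = "radial")
table(predict(fit, d), d$Species)            # training confusion matrix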
Artificial Neural Networks
Computing systems whose network structure is inspired by (and theoretically allows for the flexibility of) biological systems. Composed of a net of artificial neurons that can transmit signals to each other in a defined, usually hierarchical structure. Each artificial neuron takes multiple inputs, sums them based on their separate weights, and produces output based on some activation function. The weights of each input are learnt during the network's training process. Neurons are organised in layers that tend to abstract different features of the system they are trained upon.
Advantages: Extremely flexible function approximators that can learn complex non-linear relationships and representations directly from raw inputs, and that scale well to very large datasets
Includes: Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), Deep Learning
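And a minimal sketch of a single-hidden-layer network (not from the original answer; it assumes the nnet package that ships with R):
library(nnet)
set.seed(1)
fit <- nnet(Species ~ ., data = iris, size = 5, decay = 0.01, trace = FALSE)
table(predict(fit, iris, type = "class"), iris$Species)   # training confusion matrix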
49,803 | What is pseudomedian in R function `wilcox.test`? | A good clue is to look at the actual code of wilcox.test:
https://github.com/SurajGupta/r-source/blob/master/src/library/stats/R/wilcox.test.R
Specifically, the bit concerning the estimate of the pseudomedian for a one-sample test is in lines 91-122:
x <- x + mu # we want a conf.int for the median
alpha <- 1 - conf.level
diffs <- outer(x, x, "+")
diffs <- sort(diffs[!lower.tri(diffs)]) / 2
...
ESTIMATE <- c("(pseudo)median" = median(diffs))
Your code is missing the pairs of each element with itself when you do all_pairs <- t(combn(distr_data, 2))
Instead try:
set.seed(910401)
distr_data <- rgamma(100, 0.1, 1000)
wilcox.test(distr_data, conf.int = TRUE, exact = TRUE)
# (pseudo)median estimate: 6.788116e-06
# all unordered pairs, plus each element paired with itself
all_pairs <- rbind(t(combn(distr_data, 2)), cbind(distr_data, distr_data))
all_pair_means <- (all_pairs[, 1] + all_pairs[, 2]) / 2
median(all_pair_means)
# 6.788116e-06
Notice that I also set exact = TRUE, which you would not want to do with a large dataset but which matters in this case; otherwise your estimate is slightly off.
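Equivalently, here is a minimal sketch (not in the original answer) that computes the Walsh averages directly with outer(), mirroring the wilcox.test source quoted above:
walsh <- outer(distr_data, distr_data, "+") / 2   # all pairwise averages, including self-pairs
median(walsh[!lower.tri(walsh)])                  # keep each unordered pair once; same value as median(all_pair_means)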
49,804 | How can I calculate the number of degrees of freedom in the Elastic Net regularization, specifically in R? | The answer to your question is in a slide (shown as an image in the original answer) from a talk by Hui Zou. You can find the full talk here.
This should be relatively easy to implement in R. If you need some guidance, you can check the lassovar package by A. Kock and L. Callot; see the command .ridge.df in lassovar-ada.R.
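For reference, a commonly used estimate (I believe this is the formula on the slide, from Zou and Hastie's elastic-net work) is $\widehat{df} = \operatorname{tr}\big(\mathbf{X}_{\mathcal{A}}(\mathbf{X}_{\mathcal{A}}^{\top}\mathbf{X}_{\mathcal{A}} + \lambda_2 \mathbf{I})^{-1}\mathbf{X}_{\mathcal{A}}^{\top}\big)$, where $\mathcal{A}$ is the active set of non-zero coefficients and $\lambda_2$ is the ridge penalty. A minimal R sketch (my own illustrative code, not the lassovar implementation):
# df estimate for the elastic net: trace of the ridge hat matrix on the active set
enet_df <- function(X, beta_hat, lambda2) {
  A <- which(beta_hat != 0)                     # active set
  if (length(A) == 0) return(0)
  XA <- X[, A, drop = FALSE]
  H <- XA %*% solve(crossprod(XA) + lambda2 * diag(length(A)), t(XA))
  sum(diag(H))                                  # trace
}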
Related reference:
Zou, H., Hastie, T., & Tibshirani, R. (2007). On the "degrees of freedom" of the lasso. Ann. Statist., 35(5), 2173–2192. https://doi.org/10.1214/009053607000000127 -- This is the formal proof that df of the lasso = number of non-zero coefficients of the lasso.
49,805 | Bias caused by optional stopping | Some relevant posts here and here.
According to an answer from the second post, if the number of flips is allowed to grow without bound, the significance test will eventually come out positive almost surely; that is, with probability 1 the test rejects after some finite (random) number of flips.
According to the first post, the number of flips it will actually take has a finite median but infinite expectation. The median grows very quickly as a function of the required $z$-score to pass the test, so it may be that this sort of bias can be effectively mitigated by demanding a lower $\alpha$ threshold.
49,806 | Bias caused by optional stopping | I did some simulations (under $H_0$: a fair coin $p=0.5$). I limited the number of flips to $n_\max$ because the raw stopping time $T$ has such a huge tail that sometimes the computer wouldn't stop in a reasonable time. Anyway it's more realistic with a limit.
The experiment is:
do an initial $100$ flips to initialize
do a z-test with $\alpha=5$%. If it is significant or you flipped more than $n_\max$, stop.
otherwise flip one more time and go back to the previous step
The false discovery rate (type I error) differs a lot from $\alpha$:
For $n_\max=1000$: 26%
For $n_\max=10000$: 40%
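A minimal R sketch of this simulation (my own reconstruction of the procedure described above, assuming a two-sided one-sample z-test for $p=0.5$):
run_once <- function(n_max = 1000, alpha = 0.05) {
  flips <- rbinom(100, 1, 0.5)                      # initial 100 flips
  repeat {
    n <- length(flips)
    z <- (mean(flips) - 0.5) / sqrt(0.25 / n)       # z statistic for p = 0.5
    if (abs(z) > qnorm(1 - alpha / 2)) return(TRUE) # "significant": stop
    if (n >= n_max) return(FALSE)                   # give up at n_max
    flips <- c(flips, rbinom(1, 1, 0.5))            # flip one more and retest
  }
}
set.seed(1)
mean(replicate(2000, run_once(1000)))  # empirical type I error; far above 0.05 (the answer reports about 26%)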
However, something happens because of the optional stopping theorem: when "meta-analysing" several of these experiments (simply merging them into one big flipping session and doing a z-test), the bias on the false discovery rate tends to disappear.
It might sound a bit paradoxical: we have many experiments where 26% are falsely significant on average, but the global experiment still has the right type I error of 5%. And the global estimator $\hat p$ is still (asymptotically) unbiased. It can be explained by the fact that the longest experiments, having more weight, are the least favourable to rejecting $H_0$.
In conclusion, optional stopping can cause a strong bias in tests on each single experiment, but the bias tends to disappear when several experiments are combined.
49,807 | How to calculate CV performance? | As you said, the second method is preferred for small hold-out sets (or large values of $k$ relative to the $n$ of your data set, if you will). At the extreme end, you will almost always use the second for leave-one-out. (Think twice before scoring on $R^2$, though.)
The first has the advantage that you can use the multiple performance metrics, one per fold, for t-tests and such. (But you must correct for the pseudo-replication with corrected resampled standard errors.)
Both methods are easy to parallelize if you have enough RAM to run as many instances of the algorithm in parallel as you have CPU cores. The only difference between the two methods is the point at which the performance metrics are computed, and that part is a very small effort for your computer anyway.
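To make the two options concrete, here is a small R sketch (not from the original answer) computing a 10-fold CV RMSE both ways on a toy linear model; with a metric like RMSE (or $R^2$) the two summaries differ slightly:
set.seed(1)
n <- 100; k <- 10
d <- data.frame(x = rnorm(n)); d$y <- 2 * d$x + rnorm(n)
fold <- sample(rep(1:k, length.out = n))
pred <- numeric(n); per_fold <- numeric(k)
for (i in 1:k) {
  test <- fold == i
  fit <- lm(y ~ x, data = d[!test, ])
  pred[test] <- predict(fit, newdata = d[test, ])
  per_fold[i] <- sqrt(mean((d$y[test] - pred[test])^2))  # method 1: score each fold separately
}
mean(per_fold)               # method 1: average the per-fold RMSEs
sqrt(mean((d$y - pred)^2))   # method 2: pool all held-out predictions, score once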
49,808 | In GLMs, why do we solve score(beta)=0 instead of just minimizing the negative log-likelihood? | The solutions to the score equation are critical points of the objective function in your optimisation, so generally the fitted coefficient estimator should solve the score equation. This is not really a "step further" than using numerical techniques; it is just a way of characterising the fitted value, which is the point that numerical solutions should be moving towards.
Optimisation via IRLS: As a practical matter, numerical algorithms to fit GLMs generally use iteratively reweighted least squares (IRLS) optimisation, which is equivalent to Fisher scoring for this class of models. This is the default fitting method in the R function glm (see the comments on the method argument in the documentation). The estimation method can be derived via an argument involving a linear approximation to the score equation, and a linear approximation to the mean parameter in each iteration.
The GLM is characterised by having some exponential family distribution with scale parameter $\phi$ and mean parameter $\mu = h (\eta) = h(\boldsymbol{\text{x}} \beta)$, where the function $h$ is the canonical inverse-link and $\eta$ is a linear function of the coefficients of interest. Using this distribution and taking a linear approximation to the score function yields:
$$s(\beta) = \frac{\partial \ell}{\partial \beta} (\beta) \approx \phi^{-1} \boldsymbol{\text{x}}^{\text{T}}(\boldsymbol{\text{y}} - \mu).$$
For some iteration value $\beta_{(k)}$ we can form the corresponding mean $\mu_{(k)} = h(\eta_{(k)}) = h(\boldsymbol{\text{x}} \beta_{(k)})$ and a linear approximation gives $\mu \approx \mu_{(k)} + \boldsymbol{\text{W}}_{(k)} \boldsymbol{\text{x}} (\beta - \beta_{(k)})$ with weight matrix $\boldsymbol{\text{W}}_{(k)} = \nabla_\eta h (\eta_{(k)})$. Now, if we substitute this latter approximation into the approximation for the score equation we get:
$$\begin{equation} \begin{aligned}
s(\beta) = \frac{\partial \ell}{\partial \beta} (\beta) \approx \phi^{-1} \boldsymbol{\text{x}}^{\text{T}}(\boldsymbol{\text{y}} - \mu)
&= \phi^{-1} \boldsymbol{\text{x}}^{\text{T}}(\boldsymbol{\text{y}} - \mu_{(k)} - (\mu - \mu_{(k)})) \\[8pt]
&\approx \phi^{-1} \boldsymbol{\text{x}}^{\text{T}}(\boldsymbol{\text{y}} - \mu_{(k)} - \boldsymbol{\text{W}}_{(k)} \boldsymbol{\text{x}} (\beta - \beta_{(k)})) \\[8pt]
&= \phi^{-1} \boldsymbol{\text{x}}^{\text{T}} \boldsymbol{\text{W}}_{(k)} ( \boldsymbol{\text{W}}_{(k)} ^{-1} (\boldsymbol{\text{y}} - \mu_{(k)}) + \boldsymbol{\text{x}} \beta_{(k)} - \boldsymbol{\text{x}} \beta ) \\[8pt]
&= \phi^{-1} \boldsymbol{\text{x}}^{\text{T}} \boldsymbol{\text{W}}_{(k)} ( \boldsymbol{\text{z}}_{(k)} - \boldsymbol{\text{x}} \beta ), \\[8pt]
\end{aligned} \end{equation}$$
where $\boldsymbol{\text{z}}_{(k)} = \boldsymbol{\text{W}}_{(k)} ^{-1} (\boldsymbol{\text{y}} - \mu_{(k)}) + \boldsymbol{\text{x}} \beta_{(k)}$ is the adjusted response, which is calculable from the $k$th iteration. This approximation to the score function corresponds to a weighted linear regression. Solving the score equation we obtain the next iteration for the coefficient vector, which has the form of a weighted-least-squares estimator:
$$\hat{\beta}_{(k+1)} = (\boldsymbol{\text{x}}^\text{T} \boldsymbol{\text{W}}_{(k)} \boldsymbol{\text{x}})^{-1} (\boldsymbol{\text{x}}^\text{T} \boldsymbol{\text{W}}_{(k)} \boldsymbol{\text{z}}_{(k)}).$$
This method is a simple variation on the method of least-squares estimation used in multiple linear regression with Gaussian errors. Generally these algorithms start with a base iteration using least-squares estimation and then iterate the IRLS algorithm to within some tolerance of the true fitted value. Note that even once the fitted value is obtained via IRLS, the score function is still useful, since it is related to the variance of the coefficient vector.
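To make the update concrete, here is a minimal R sketch of IRLS for logistic regression (the canonical logit link), written to mirror the notation above; it is an illustration, not the internals of glm():
irls_logit <- function(X, y, tol = 1e-8, maxit = 25) {
  beta <- rep(0, ncol(X))
  for (k in 1:maxit) {
    eta <- drop(X %*% beta)
    mu  <- plogis(eta)                    # h(eta) for the logit link
    w   <- mu * (1 - mu)                  # working weights W_(k)
    z   <- eta + (y - mu) / w             # adjusted response z_(k)
    beta_new <- drop(solve(crossprod(X, w * X), crossprod(X, w * z)))  # weighted least squares
    if (max(abs(beta_new - beta)) < tol) { beta <- beta_new; break }
    beta <- beta_new
  }
  beta
}
# Sanity check against glm():
set.seed(1)
X <- cbind(1, rnorm(50)); y <- rbinom(50, 1, plogis(X %*% c(-0.5, 1)))
cbind(IRLS = irls_logit(X, y), glm = coef(glm(y ~ X[, 2], family = binomial)))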
49,809 | The relation between least-square estimation in two seemingly related problems | In the case of b, the question being asked is "What scalar is closest to my data points (in the least squares sense)?" In the case of c, the question being asked is "How can I scale my data points to make each as close as possible to 1 (in the least squares sense)?" These objective functions are different. Imagine the case where your data had a mean of 1 with extremely high variance. The optimal value of b would be 1 (up to noise), but c would not be 1; it would necessarily be large to counter the data's high variance.
49,810 | How to perform CCA with block design in R | I believe the issue is that the permutation test you are using is too liberal; it assumes a Null hypothesis in which all observations are exchangeable. From what you say, observations within a cage are exchangeable, but are not exchangeable between cages.
To use the more restrictive null in the permutation test we can use a restricted permutation design, in which we ask vegan to only permute observations within cages and never between cages. This is most easily done using a blocking factor.
Create the blocking factor:
library(vegan)   # also attaches permute, which provides how()
m_char <- transform(m_char,
                    cage = factor(substring(rownames(m_char), 1, 1)))
Next, define the restricted permutation design (the defaults for how() mean we get randomisation within the levels of the factor passed to blocks)
ctrl <- how(nperm = 1000, blocks = m_char$cage)
Now fit the model and remove the between-cage variation, which is also required for these analyses with blocks
my.cca <- cca(grooming ~ x1 + x2 + x3 + x4 + x5 + Condition(cage),
data = m_char)
Now do the restricted permutation test
set.seed(10)
anova(my.cca, by = "terms", permutations = ctrl)
This produces:
> anova(my.cca, by = "terms", permutations = ctrl)
Permutation test for cca under reduced model
Terms added sequentially (first to last)
Blocks: m_char$cage
Permutation: free
Number of permutations: 1000
Model: cca(formula = grooming ~ x1 + x2 + x3 + x4 + x5 + Condition(cage), data = m_char)
Df ChiSquare F Pr(>F)
x1 1 0.52470 1.4119 0.15185
x2 1 0.60901 1.6388 0.03397 *
x3 1 0.21703 0.5840 0.96503
x4 1 0.36739 0.9886 0.36563
x5 1 0.83984 2.2600 0.03497 *
Residual 7 2.60129
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
And the triplot shows different behaviour from the one you obtained.
Does this make sense now?
49,811 | How to set the priors for Bayesian estimation of Multivariate Normal Distribution when the correlation matrix has small values? | Looking at the sampling of the $\Sigma$, I think you might have forgotten to replace the mean $\mu$ with the sampled $\mu(t)$ from your posterior distribution of the mean.
So $\eta$ should be determined as
$\eta=\Phi+\sum_{n=1}^{N}(x_n-\mu(t))(x_n-\mu(t))^T$
Gibbs sampling is an iterative procedure: when you sample from the posterior distribution of a single variable, you need to replace the other variables on which the sampled variable depends with their most recently sampled values.
see https://en.wikipedia.org/wiki/Gibbs_sampling
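For concreteness, here is a minimal base-R sketch of such a Gibbs sampler for $(\mu, \Sigma)$ (my own illustration; the flat prior on $\mu$ and inverse-Wishart$(\nu_0, \Phi)$ prior on $\Sigma$ are placeholders, not necessarily the ones in your setup). The key point is that $S_\mu$ is recomputed from the freshly sampled $\mu$ at every iteration:
gibbs_mvn <- function(X, n_iter = 1000, Phi = diag(ncol(X)), nu0 = ncol(X) + 2) {
  n <- nrow(X); d <- ncol(X)
  Sigma <- cov(X)                                   # starting value
  draws <- vector("list", n_iter)
  for (it in 1:n_iter) {
    # mu | Sigma, X  ~  N(xbar, Sigma / n)  (flat prior on mu)
    mu <- colMeans(X) + drop(t(chol(Sigma / n)) %*% rnorm(d))
    # Sigma | mu, X  ~  IW(nu0 + n, Phi + S_mu), with S_mu built from the sampled mu
    S_mu <- crossprod(sweep(X, 2, mu))
    Sigma <- solve(rWishart(1, nu0 + n, solve(Phi + S_mu))[, , 1])
    draws[[it]] <- list(mu = mu, Sigma = Sigma)
  }
  draws
}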
49,812 | Given a bounded number of die rolls following unif{1,6}, produce unif{1,7} | The argument is fine. I think you can distill it to something simpler, though.
A sequence of rolls determines a branching probability tree with six branches at each node. Because the rolls are independent and each outcome has probability $6^{-1}$, the chance of reaching a given node at level $n$ (that is, a particular sequence of $n$ rolls) is $6^{-n}$. Let $N$ be any bound for the number of rolls. Then the chance of any event whatsoever is a sum of numbers of the form $6^{-n}$ where $0 \le n \le N$. Such a sum obviously is a multiple of $6^{-N}$. Since $1/7$ is not such a multiple (that would require $6^N = 7k$ for some integer $k$, which is impossible because $7$ does not divide any power of $6$), it cannot be realized as the probability of any such event, no matter how large $N$ may be.
Here's an alternative exposition of the same idea. The chance of any event after $N$ rolls, when written in base $6$, can be written using at most $N$ digits after the "seximal" point in base $6$. Since $1/7 = 0.050505\ldots_{[6]}$ requires an infinite expansion, it cannot arise as the chance of any such event.
If you're uncomfortable using base $6$, then (by analogy) contemplate a (hypothetical) ten-sided die, each outcome with a chance of $1/10$. After $N$ rolls, all probabilities can be expressed as decimals with exactly $N$ digits. Numbers like $1/3 = 0.333\ldots$, $1/7=0.142857\,142857\ldots$, etc., cannot arise as any such probability.
49,813 | Given a bounded number of die rolls following unif{1,6}, produce unif{1,7} | If I follow your line, we can note that $\frac{\log_2 7}{\log_2 6}=\log_6 7$, which is bound to be irrational, so we'll always lose some information (which is true, as 5 different results are represented by one number and the 36th result is discarded).
I do think, however, that by defining the event of a re-roll and its probability, you can get a bound on the number of re-rolls using Markov's inequality. The probability of needing $k$ re-rolls decays geometrically to 0, so for some large $k$ to your liking, you can start using zero-probability theorems.
49,814 | Are predictive distributions supposed to be distributions of future data? | You almost wholly and correctly answered your blog question. I created a few simulations to show you where the differences are.
Also, I have a personal issue with the term of art, “overdispersed.” Predictive distributions are not overdispersed, they are correctly dispersed, but the true distribution in nature doesn’t match it. Instead, it is contained within it.
The two types of predictions differ in a number of subtle ways. Which type of prediction to use should depend wholly on the actual problem you face and the losses you face from unfortunate sampling or other issues.
The first difference comes from the existence of prior information. In the absence of prior information, Frequentist methods minimize the maximum possible loss you could face from a bad estimator. That is very advantageous when you have no framework to estimate an average loss instead. Bayesian methods minimize the average loss experienced, but the prior does matter. In the presence of prior information, all Frequentist solutions are inadmissible. Furthermore, Frequentist methods do not give rise to coherent probabilities and so should never be used in gambling situations such as estimating inventory needs, portfolio allocations or budgets.
The question of whether you should use a Bayesian density or a Frequentist interval should depend only on the problem you are solving.
I decided to exaggerate your example a little by expanding a metaphor I use a lot to teach the differences in prediction methods. I modified the game as well to fit the Reverend Bayes's original example because it had both a Bayesian and a strong Frequentist interpretation. Although the Reverend Bayes used a billiards table to generate a uniformly distributed random number, I used the R language function runif().
Using seed 9817, I drew a value of the parameter of 0.5171191. Based on this parameter value eight Bernoulli trials were performed. There were four successes. The question is now to predict the number of successes, based on the eight observed trials, over the next 10,000 observations. Knowing the conditions of the generation of the parameter, I used a uniform prior density for the Bayesian solution.
The Frequentist solution is an interval, but it does generate a density from which to create an interval. For a binomial likelihood, the Frequentist prediction interval is constructed from the hypergeometric distribution. For the Bayesian method, the prediction is a one-dimensional Polya distribution. They are very close, but they are importantly different.
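As a small illustration (mine, not the author's code), the Bayesian (Polya, i.e. beta-binomial) predictive for this example, with a uniform Beta(1, 1) prior, 4 successes in 8 trials, and $m = 10000$ future trials:
s <- 4; n <- 8; m <- 10000
a <- 1 + s; b <- 1 + n - s                           # posterior Beta(a, b)
k <- 0:m
log_pmf <- lchoose(m, k) + lbeta(k + a, m - k + b) - lbeta(a, b)
pred <- exp(log_pmf)                                 # beta-binomial predictive pmf over k future successes
sum(pred)                                            # sanity check: sums to 1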
The Frequentist density from which to construct intervals is shown here,
while the Bayesian density is shown here,
if the true value of the parameter were known, then the prediction would be this one instead.
To get a better feel, also consider the case where the true value was 2/3 and there were five successes in eight trials. The Frequentist is,
the Bayesian is,
the true prediction is.
Now as to the differences, imagine that the parameter space was discrete. Let us consider the case where the only three possibilities were a parameter space of $\{1/3,1/2,2/3\}.$ There isn’t a known way to construct predictive intervals on discrete parameter spaces. If the prior were uniform, then the Bayesian predictive distribution would be like this.
Of course, this is not the only difference. There is only one way to construct a Bayesian predictive density, but there is an infinite number of predictive intervals because predictive intervals are built on top of confidence intervals. They depend entirely on the cost function chosen. As such, the Frequentist interval that is usually taught as “the” Frequentist interval is the one that minimizes the average loss under the Kullback-Leibler Divergence. It is not uniquely the Frequentist prediction interval method. Change the loss function, and you will change the boundaries.
The reason this is usually used is that the Bayesian predictive density automatically minimizes the Kullback-Leibler Divergence. Note I did not say it minimized the average divergence, but the actual divergence. To illustrate the differences, I drew two samples from $$\frac{1}{\pi}\frac{1}{1+(x-\mu)^2}.$$
The samples were $\{-1.5,-1,-.5\}$ and $\{-6,-1,3\}.$ Because there are issues with using the maximum likelihood estimator on small samples from this distribution, I used the median instead. At this sample size, the information loss is trivial for the accuracy gained. One of the nice things about the Frequentist interval method is that it can be created from any statistic with a sampling distribution, although if you change the statistic, then you change the predictions. As it is known there is no analytic solution to the above problem, I created the prediction by drawing 10,000,000 samples of size four. I used the first three observations to create the statistic, and the fourth was to be predicted.
Note that because both samples share the same median and because both samples have the same known scale parameter, the distribution from which to construct the Frequentist prediction intervals is the same for both. It is an averaging process — the Frequentist method controls for worst-case samples. It minimizes average loss, but not a specific loss as the Bayesian method would.
There are a few other minor differences. A Frequentist prediction interval is uniformly distributed over the interval. There are no dense areas. Its interpretation would be that upon repetition, an $\alpha$ percent prediction interval will cover the prediction no less than $\alpha$ percent of the time.
A Bayesian interval would be interpreted as having an $\alpha$ percent probability of containing the future sample. Of course, for both, it is contingent upon the prior sample actually seen. Furthermore, there is an infinite number of intervals from a Bayesian density because there is no restriction that you use the highest density region. You could choose the lowest density region if you wished. Any subset of the density that adds to $\alpha$ percent is a valid $\alpha$ prediction interval. The highest density region is usually used because it has other optimality properties.
49,815 | How do I know whether future samples will remain below a threshold? | You want to assess whether the probability $\Pr(X>1)$ is significantly higher than $99\%$.
To do so, you can derive a confidence interval about $\Pr(X>1)$. For example you can get such a confidence interval using the Bayesian approach with the Jeffreys prior.
Another way is to use a lower tolerance limit. If the lower $(1-\alpha, p=99\%)$-tolerance limit is higher than $1$, then the probability $\Pr(X>1)$ is significantly higher than $99\%$ at the $\alpha$-level of significance.
Example:
> # simulated sample
> set.seed(666)
> y <- rnorm(40, mean=5, sd=1)
> # tolerance limit
> library(tolerance)
> normtol.int(y, alpha=0.05, P=0.99, side=1)
alpha P x.bar 1-sided.lower 1-sided.upper
1 0.05 0.99 4.874011 1.353383 8.394639
The lower tolerance limit is $\approx 1.35 > 1$, so $\Pr(X>1)$ is significantly higher than $99\%$ at the significance level $\alpha=5\%$.
Using the Jeffreys Bayesian approach:
> Jeffreys <- function(y, nsims=100000){
+ n <- length(y)
+ sigma <- sqrt(c(crossprod(y-mean(y)))/rchisq(nsims,n))
+ mu <- rnorm(nsims, mean(y), sigma/sqrt(n))
+ list(mu=mu, sigma=sigma)
+ }
> # posterior sampling of Pr(Y>1)
> nsims <- 100000
> sims_musigma <- Jeffreys(y, nsims)
> sims_pr <- numeric(nsims)
> for(i in 1:nsims){
+ sims_pr[i] <- 1 - pnorm(1, mean=sims_musigma$mu[i], sd=sims_musigma$sigma[i])
+ }
> # lower confidence bound of Pr(Y>1)
> quantile(sims_pr, 0.05)
5%
0.9954999
The lower $95\%$-confidence bound of $\Pr(X>1)$ is $\approx 99.5\%$, so $\Pr(X>1)$ is significantly higher than $99\%$ at the significance level $\alpha=5\%$.
If you don't like the Jeffreys approach, you can use these approximate confidence bounds of $\Pr(X>q)$:
lower bound: $1 - \Phi\left[\frac{q-\hat\mu}{\hat\sigma}\left(1-\Phi^{-1}(1-\alpha)\sqrt{\dfrac{1}{n{\left(\frac{q-\hat\mu}{\hat\sigma}\right)}^2}+\dfrac{1}{2(n-1)}}\right) \right]$
upper bound: $1 - \Phi\left[\frac{q-\hat\mu}{\hat\sigma}\left(1+\Phi^{-1}(1-\alpha)\sqrt{\dfrac{1}{n{\left(\frac{q-\hat\mu}{\hat\sigma}\right)}^2}+\dfrac{1}{2(n-1)}}\right) \right]$
sources:
Bissell, A. F. (1990), "How Reliable Is Your Capability Index?" Applied Statistics, 30, 331 - 340.
Kushler, R. H. and Hurley, P. (1992), "Confidence Bounds for Capability Indices," Journal of Quality Technology, 24, 188 - 195.
The lower bound is similar to the previous one:
> alpha <- 5/100
> n <- length(y)
> q <- 1
> 1 - pnorm((q-mean(y))/sd(y) * (1-qnorm(1-alpha)*sqrt(1/n/((q-mean(y))/sd(y))^2 + 1/2/(n-1))))
[1] 0.9950559
49,816 | What's the difference between a TSLM forecast and STL forecast? | I think I got my answer.
A TSLM is literally just that. Essentially a data frame is created with 3+ columns. Column 1 is your y value. Column 2 is the numbers 1 to however many observations you have. Column 3 is a factor variable that corresponds to your seasonal period; in my case, 1 to 12. Then it just fits a linear regression based on that.
STL, on the other hand, will find the trend (using loess) and then break out the seasonal component.
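A minimal sketch of the two approaches with the forecast package (the toy monthly series is only for illustration):
library(forecast)
y <- ts(rnorm(72, mean = 100, sd = 10), frequency = 12)  # toy monthly series

# TSLM: linear regression on a trend column plus seasonal dummy factors
fc_lm <- forecast(tslm(y ~ trend + season), h = 12)

# STL: loess trend + seasonal decomposition, with the seasonal component added back to the forecasts
fc_stl <- stlf(y, h = 12)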
49,817 | Can we absorb the partition function into the natural parameter vector? | The definition of the log partition function,
$$A(\theta) = \log \int h(x) \exp\{\phi(\theta)^T T(x) \}dx $$
makes it clear that given fixed functions $h(x)$ and $T(x)$, then $A(\theta)$ is completely determined by $\phi(\theta)$. In your second parameterization where you've absorbed the log-partition function to the natural parameter vector, the dimension of $\theta$ is one less than the dimension of $\phi'(\theta) = [\phi(\theta), -A(\theta)]$. Thus, the possible space of the natural parameter vector $\phi'(\theta)$ (the natural parameter space) is not an open set: it is a curve. As stated in the comment by @user2939212, the identity $\nabla A(\theta) = \mathbb{E}[T(x)]$ only holds if the natural parameter space is an open set (and so the family is regular).
Cases where the dimension of the parameter vector $\theta$ is less than the dimension of the natural parameter vector are called curved exponential families. For example, a Gaussian$(\mu, \sigma^2)$ where $\sigma = |\mu| = \theta$ is a curved exponential family: the natural parameter vector is $$\eta(\theta) = [\theta^{-1},-0.5\theta^{-2}].$$ The set traced out by $\eta(\theta)$ is a parabola, not an open set.
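As a quick check of the parabola claim: with $\eta_1(\theta) = \theta^{-1}$ and $\eta_2(\theta) = -\tfrac{1}{2}\theta^{-2}$, eliminating $\theta$ gives $$\eta_2 = -\tfrac{1}{2}\eta_1^2,$$ a one-dimensional curve with empty interior inside the two-dimensional natural parameter space, which is exactly why the open-set (regularity) condition fails.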
49,818 | What's the relationship between Laplace approximation and Variational Bayes methods? | As already stated in the comment section, both the Laplace Method and a certain class of Variational Inference Methods (convex-type representations) are based on locally approximating a (non-Gaussian) density.
Chris Bishop's book 'Pattern recognition and machine learning' has a chapter on this (Chapter 10.5. Local Variational Methods).
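For concreteness, a classic example of such a local (convexity-based) bound discussed in that chapter is the Jaakkola-Jordan lower bound on the logistic sigmoid, $$\sigma(x) \geq \sigma(\xi)\exp\left\{\frac{x-\xi}{2} - \lambda(\xi)\left(x^2-\xi^2\right)\right\}, \qquad \lambda(\xi) = \frac{1}{2\xi}\left[\sigma(\xi)-\tfrac{1}{2}\right],$$ which replaces the awkward sigmoid factor by a Gaussian-shaped lower bound that is exact at the variational parameter $\xi$.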
I hope that helps.
49,819 | Are the Boltzmann distributions an exponential family? | Could you elaborate on those three issues in your question? It looks like something completely different from what you are asking there. Regarding the Boltzmann distribution or Maxwell-Boltzmann statistics: the distribution $p(a) = \frac{g_a}{Z(T)} e^{\tfrac{-E(a)}{kT}}$, with $Z(T) = \int_a g_a e^{\tfrac{-E(a)}{kT}}$ (or a sum instead of an integral), shows clearly that the Boltzmann distributions are exponential families.
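To make the exponential-family form explicit, one can write $$p(a) = h(a)\exp\{\eta\, t(a) - A(\eta)\}, \qquad h(a) = g_a,\quad t(a) = -E(a),\quad \eta = \frac{1}{kT},\quad A(\eta) = \log Z(T),$$ so the negative energy plays the role of the sufficient statistic and the inverse temperature is the natural parameter.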
49,820 | Why are larger samples required to estimate higher moments than when estimating the mean? | To answer your second question first - "quality" means "accuracy", and accuracy can be defined in many ways, hence the lack of mathematical precision in the definition.
Higher moments are harder to estimate in many situations, easier in others. If the probability distribution of the data is such that the mean equals 0 and all the data lies in $(-1, 1)$, the higher moments will be more accurately estimated, as they will in general converge to zero as the index of the moment goes to infinity. Estimating the 101st moment of a variate that is distributed uniformly over (-0.5, 0.5) with high accuracy is really easy:
x1 <- x101 <- rep(0, 10000)        # storage for 10,000 replications
for (i in 1:length(x1)) {
  u <- runif(5) - 0.5              # sample of size 5, uniform on (-0.5, 0.5)
  x1[i] <- mean(u)                 # estimate of the first moment
  x101[i] <- mean((u - x1[i])^101) # estimate of the 101st central moment
}
> sqrt(mean(x1*x1))
[1] 0.1289411
> sqrt(mean(x101*x101))
[1] 1.880887e-16
The RMSE of the estimate of the mean based on a sample size of 5 is $0.129$, give or take a little sampling error over our 10,000 samples, but the RMSE of the 101st moment is $1.9\text{x}10^{-16}$, far smaller.
However, in cases where there is substantial probability of values somewhat greater than 1, the story changes. Now, because we are raising the larger sample values (those $>1$) to higher powers, they become bigger, rather than smaller. Consider the same experiment, but with the variate distributed uniformly over (-5, 5) (skipping the trivial rewrite of the code):
> sqrt(mean(x1*x1))
[1] 1.290788
> sqrt(mean(x101*x101))
[1] 4.029381e+85
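For completeness, the "trivial rewrite" only changes the sampling line, rescaling the uniform draws so they cover (-5, 5):
for (i in 1:length(x1)) {
  u <- (runif(5) - 0.5) * 10       # now uniform on (-5, 5)
  x1[i] <- mean(u)
  x101[i] <- mean((u - x1[i])^101)
}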
You can imagine that it will take a LOT of data to get that $4\text{x}10^{85}$ RMSE for the 101st moment down to roughly the same accuracy as the estimate of the first moment (1.3). Here's what happens when we increase the sample size from 5 to 5000:
> sqrt(mean(x101*x101))
[1] 4.596604e+68
A big reduction, to be sure, but still a long way to go.
As alluded to above, the reason for this is that when we calculate the sample estimates of higher level moments (by calculating the corresponding moment of the sample data), we are raising the observed numbers to higher and higher powers. When they are $>1$, this makes them larger and larger. Consequently, the numerator of the moment calculation gets larger and larger, so you need a larger denominator (which is the sample size) to compensate.
Note also that if you make assumptions about the distribution of the data, the Wikipedia statement need not hold. For example, if we assume the data is Normally distributed, our "estimate" of all the odd moments will equal 0 regardless of sample size or how large the moment is.
49,821 | Combining image and scalar inputs into a neural network | There are many ways to combine scalar and image inputs. In this particular paper, a diagram on the top of page 5 should explain everything. At some point in the convolutional network there are 64 feature maps, which matches the 64 scalar values to be input. The 64 scalar values are essentially treated as bias terms so that the $i$th scalar value is added on to the $i$th feature map.
Other popular methods of pulling this off usually inject the scalar inputs after all the convolutional layers: once the last feature map has been flattened and the fully connected layers start, it is easy to concatenate in some auxiliary scalar inputs.
49,822 | Combining image and scalar inputs into a neural network | The features vector can be combined to an image by -
Adjusting the features shape by using tf.reshape and tf.tile
Combining the features and image by performing concatenation, add (as described in Research document) or other merge operators
Here is a code example for creating a custom Keras layer that merges the features and the image by using tile and concatenation -
import tensorflow as tf

class FeatureConcatLayer(tf.keras.layers.Layer):
    def build(self, input_shape):
        # input_shape is a pair: [(batch, H, W, C), (batch, num_features)]
        self.image_shape = input_shape[0][1:]
        self.num_features = input_shape[1][1]

    def call(self, inputs):
        image, features = inputs
        # (batch, num_features) -> (batch, 1, 1, num_features)
        features = tf.reshape(features, (-1, 1, 1, self.num_features))
        # broadcast the feature vector across every spatial position of the image
        features = tf.tile(features, [1, self.image_shape[0], self.image_shape[1], 1])
        # stack the image channels and the tiled features along the channel axis
        return tf.concat([image, features], axis=-1)
49,823 | combining text and non-text features in a classification model | I am not aware of a standard way as such, but here's one thing I'd try. This will contain two models in the pipeline.
Train on the textual data to predict a class (like Finance, Hardware) and get the model's prediction as a categorical variable.
Append that categorical variable to the existing metadata features, and train a new model.
I could also replace step 1 with the following: rather than outputting a single class (the one with the highest probability), use the whole set of probabilities that the first model predicted for each class and append those numeric features to the metadata features used in step 2.
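A rough sketch of that two-stage pipeline in R; the package choice and the objects text_dtm_train, meta_train and class_train (a document-term matrix, a metadata data frame and the class labels) are illustrative assumptions, and in practice the stage-1 probabilities fed into stage 2 should be generated out-of-fold to avoid leakage:
library(randomForest)
m1 <- randomForest(x = text_dtm_train, y = class_train)          # stage 1: text-only model
p1 <- predict(m1, text_dtm_train, type = "prob")                 # per-class probabilities
m2 <- randomForest(x = cbind(meta_train, p1), y = class_train)   # stage 2: metadata + stage-1 probabilities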
49,824 | A point of view on Central Limit Theorem | Formally speaking, the CLT is, in principle, indeed concerned with the limiting behavior of standardized sums (here, averages) of random variables, say $S_n=(1/n)\sum X_i$:
$$\frac{S_n - \mathbb{E}\left[S_n\right]}{\sqrt{\text{Var}(S_n)}}$$
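Concretely, for i.i.d. $X_i$ with mean $\mu$ and variance $\sigma^2$ we have $\text{Var}(S_n)=\sigma^2/n$, so the standardized quantity is $$\frac{S_n - \mathbb{E}\left[S_n\right]}{\sqrt{\text{Var}(S_n)}} = \frac{S_n - \mu}{\sigma/\sqrt{n}} = \sqrt{n}\,\frac{S_n - \mu}{\sigma},$$ which is exactly where the $\sqrt{n}$ enters.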
So indeed the $\sqrt n$ term comes from the standard deviation of the sum.
Still, thinking of it as an "inflater", as the OP says, is not without conceptual merits (although it is a "derived" inflater, not one we determine separately).
49,825 | ARIMA model has trouble forecasting next month | Whenever possible, it is best to develop one equation that effectively characterizes the data; see “Joint estimation of all parameters is preferred.” from lecture 3 at http://faculty.chicagobooth.edu/ruey.tsay/teaching/bs41202/sp2011/.
As you suggested, there are fairly strong deterministic factors in your data. We have seen this “problem/opportunity” while studying the demand for cash. Different days of the month, weeks-of-the-month, holiday effects (both lead and lag) etc. can have an important role. I took your 558 historical values and used AUTOBOX, my tool of choice. This is the model that was automatically formed.
[The original answer displayed, as images: the automatically formed model and its statistics, the actuals with forecasts for the next 81 days, the actual/fit/forecast overlay, a separate plot of the forecasts, and a table of the forecast values.]
In summary there is a strong dependency on month-of-the-year, day-of-the-week (weekend effect) and week-of-the-month. In addition, AUTOBOX detected three time trends in the data and some unusual values. The timing of these pulses should/might be examined in order to suggest additional/omitted variables.
I consider this as an exercise in EDA (with apologies to Tukey) where the data is examined to flush out suggested assignable causes and a potentially useful model. In my long experience in dealing with daily data ARIMA structure is often an imperfect solution due to the fact that we are creatures of habit that often perform repeated functions based upon the hour, the day, the week and the month.
Finally, all models are wrong BUT some are useful (G.E.P. Box, the “BOX” in AUTOBOX). Hope this helps. If you wish to chat about the model/approach, set up a chat room or contact me any way that you wish.
49,826 | ARIMA model has trouble forecasting next month | For this you will want to split the date into 3 parts (day, month, year) and then do a seasonal time series potentially.
Chicago Booth has some good university lecture notes available here (try week 3): http://faculty.chicagobooth.edu/ruey.tsay/teaching/bs41202/sp2011/
R-bloggers also has a brief section on this:
https://www.r-bloggers.com/seasonal-or-periodic-time-series/
49,827 | How to get approximative confidence interval for Gini and AUC? | Assumptions addressed
The paper that proposed your formula (Hanley and McNeil 1982) explicitly states that a key assumption is that the ratings are derived from a continuous scale that does not produce ‘ties’.
A typical explanation of "AUC is the probability that a sample randomly taken from the positive cases will rank higher than a randomly chosen negative case". If you have a tie then the positive case is not ranked higher and so S should = 0 to fit this explanation. If you include 0.5 for ties then the definition should be "AUC is the probability that a sample randomly taken from the positive cases will rank equal or higher than a randomly chosen negative case"
This means that 1-AUC is the probability that a sample randomly taken from the positive cases will rank lower than a randomly chosen negative case (for 0.5 when tied) or equal or lower for 0 when tied case.
If you compare tie behaviour of S= 0 with S = 0.5 then AUC and 1-AUC will swap values everytime a tie occurs, so if there are a large proportion of ties the SE will diverge significantly between the two definitions. Since using S=0.5 when tied increases AUC it will therefore lead to a decrease in the calculated SE.
I can see why Hanley and McNeil ignored ties; maybe other readers will know of another source that explicitly details how ties impact the SE calculation and can fill in that gap better.
Meaning of Elements in the equation
I'll include all elements for completeness, even the obvious ones.
AUC(1-AUC) is self-explanatory: the AUC times its complement. It is at a maximum when AUC = 0.5 and becomes smaller as AUC moves away from 0.5 in either direction. Since AUC = 0.5 is equivalent to random chance, you would expect there to be large uncertainty at this value and for it to diminish as the AUC increases.
M is the number of positive cases
Hanley and McNeil had compared Gaussian, gamma and negative exponential distribution assumptions and chose the latter as it was the most conservative of that set and also provided the easiest terms to use in the equation. This assumption is the basis of using AUC/(2-AUC) and 2AUC^2 / (1+AUC) in the formula.
AUC/(2-AUC) is the probability of ranking two randomly chosen positive samples higher than a negative one, based on an underlying assumption of a negative exponential distribution
Thus $$ \frac{AUC}{2-AUC}- AUC^2 $$ is the difference between the probability of two positive samples being ranked higher and the square of the probability of one sample being ranked higher than a random negative one.
N is the number of negative cases
2AUC^2 / (1+AUC) is the probability of ranking one randomly chosen positive sample higher than two randomly chosen negative ones, based on an underlying assumption of a negative exponential distribution
Thus $$ \frac{2AUC^2}{1+AUC}-AUC^2\ $$ is the difference between the probability of one positive samples being ranked higher than two negative ones and the square of the probability of one positive sample being ranked higher than one random negative sample.
Basically the equation is taking second order effects into account as well as first order effects (the SE for a Bernoulli only uses first order effects)
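Collecting those pieces, the standard error under discussion can be written as a small R function (a sketch; the name auc_se is just illustrative, with m = M positive and n = N negative cases as above):
auc_se <- function(auc, m, n) {
  q1 <- auc / (2 - auc)        # Pr(two random positives both outrank a random negative)
  q2 <- 2 * auc^2 / (1 + auc)  # Pr(a random positive outranks two random negatives)
  sqrt((auc * (1 - auc) + (m - 1) * (q1 - auc^2) + (n - 1) * (q2 - auc^2)) / (m * n))
}
auc_se(0.8, m = 50, n = 100)   # example call with made-up counts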
The SE equation you quote assumes a negative exponential rather than a standard normal distribution. I’ll add a comment to the question to get more details on what you mean and what sources. Without understanding the background to this I can’t really address it.
49,828 | Weighting for stratified random sample with non-proportionally allocated sample | A stratified design effectively means that separate surveys are designed within each stratum - units selected within one stratum are independent of all selections within other strata. Estimates of total are made within each stratum, and then combined to come up with the estimate of total across the population:
$$
\hat{Y} = \sum_{\text{strata}}\hat{Y}_{\text{stratum}}\\
= \sum_{\text{strata}}\sum_{\text{units}}y_iw_i
$$
The design weight for a unit in the survey should only weight the unit within the stratum that the unit belongs to. There is no need to modify design weights, assuming you have them.
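For a unit $i$ selected by simple random sampling within stratum $h$, that design weight is simply the inverse inclusion probability, $$w_i = \frac{N_h}{n_h},$$ where $N_h$ and $n_h$ are the stratum's population and sample sizes.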
I recommend reading Model Assisted Survey Sampling (Särndal, Swensson, Wretman) or Practical Sampling Techniques (R. K. Som).
49,829 | What is cross-validation for? | Cross-validation is, in my opinion, a method to estimate performance of your model AND its parameters. It is also a good measure of how robust your model with its parameters is.
Let's say you decided two methods are appropriate for your data: ordinary least squares(OLS) and ridge regression.
For the ridge regression case, there is a parameter called lambda used in regularization, which penalizes the sum of squared coefficients. How can you decide what value of lambda provides the best model? This is where the CV comes in. You can now apply cross-validation to calculate the MSE with different lambdas and select the lambda value where increasing it further doesn't improve your model. Thus, CV can be used for parameter optimization.
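A minimal sketch of that lambda search with the glmnet package (assuming a numeric predictor matrix X and response vector y, which are not part of the original example):
library(glmnet)
cvfit <- cv.glmnet(X, y, alpha = 0)   # alpha = 0 gives the ridge penalty; k-fold CV over a lambda grid
cvfit$lambda.min                      # the lambda with the smallest cross-validated MSE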
The other use is to compare models, i.e. OLS and ridge, so after optimizing parameters you can compare the models by their CV errors. This is, however, quite risky and I wouldn't recommend it, because even though CV provides some insight about the model's success, there is no way to be sure how exactly good your model is. An example of a possible risk is selecting a model parameter that overfits to your training set and fails on new data.
I didn't fully understand your pseudocode, so here is an example of leave-one-out cross-validation (k-fold CV where k = number of samples), sketched in R for a generic lm fit on a data frame dat with response y:
n <- nrow(dat)
err <- numeric(n)
for (i in 1:n) {                            # for i = 1 : number of samples
  fit <- lm(y ~ ., data = dat[-i, ])        # leave the ith sample out; build the model with the remaining n-1 samples
  pred <- predict(fit, newdata = dat[i, ])  # predict the ith (left-out) sample
  err[i] <- dat$y[i] - pred                 # calculate its error
}                                           # the ith sample is back in for the next iteration
sqrt(mean(err^2))                           # total error (RMSE, or whichever measure you think is appropriate)
Edit: The answer to the comment is no, the CV models shouldn't be used. Instead, one should build the final model with the whole training set.
49,830 | What is cross-validation for? | Cross-validation is, obviously, for validation.
It results in some measure of how good your model is (by the way, I'd use MeanSquaredError instead of MinimumSquaredError, whatever it is). So it enables you to assess the model's future performance and to compare models.
Of course, you can assess MSE without cross-validation, but this would be biased, since you'd have to use the same sample to train and to validate your model.
As the model is re-trained everytime this is only usefull to check if
the model is well chosen and not the parameters of the model, right?
Right. Almost.
Imagine your train procedure contains some feature selection method. Then you cannot compare models with pre-selected features via cross-validation (e.g. you cannot choose between Y~X+Z and Y~X+V), because your train method will select different features for each fold.
On the other hand, imagine your train procedure estimates all the parameters except one (say, $\beta_0$), which has to be supplied to it. Then you can compare models with $\beta_0=1$ and with $\beta_0=2$ via cross-validation.
So, to sum up: you can compare models that differ in parameters that are not estimated inside the train method (they have to be inputted to it).
49,831 | What is cross-validation for? | Cross-validation is a method to validate a model, which is used mostly in cases when you have a very limited amount of data available.
You never want to train on data on which you are validating. On the other hand, sometimes it is costly to totally remove part of the training set (for validation). Cross-validation is a middle ground here. We use some part of the training set for validation, but never train and validate on the same data at the same time. Cross-validation is an elegant solution to achieve that.
49,832 | Interpretation of p-value histogram for differential methylation analysis: what can explain prevalence of large p-values? | Edit: As Amoeba pointed out there was an error in my code, and the first plot is from an unpaired t-test. I re-ran with paired t-tests and different alpha and beta, and the results are at the bottom.
I simulated it in Matlab with the code below. I generated 1000 random alphas (1x3 vectors); corresponding betas are made by adding a smaller-magnitude, random 1x3 vector to each alpha. The result is that alpha and beta are correlated. The resulting p-value distribution is given here (unpaired t-test). As you can see it's skewed to near-1 values.
Unpaired t-test
pp = zeros(1000,1);
for iter = 1:1000
    alpha = rand(1,3);
    beta = alpha + rand(1,3)*.1;
    [~,pp(iter)] = ttest2(alpha,beta);
end
Edit: Paired t-test
pp = zeros(10000,1);
for iter = 1:10000
    alpha = randn(1,3);
    beta = alpha + [rand 0.1 -1*rand];
    [~,pp(iter)] = ttest(alpha,beta);
end
To achieve the right-skewed distribution with a paired t-test I had to make different assumptions about beta: beta = alpha + [r1 0.1 r2], where $r1$ and $r2$ are uniform random numbers, and $r1$ is in the range [0 1] and $r2$ is in the range [-1 0]. That is $r1$ is always positive, $r2$ is always negative, and their combined effect on alpha-beta is, on average, 0. The effect is to fairly reproduce your original histogram.
If this actually describes your data, it means that if e.g. gene methylation is high in one replicate, it's low in another replicate. Which is weird. Although it could happen if your data is a ratio and you've swapped numerator and denominator in one replicate. To test that, can you make scatter plots / calculate correlation between each pair of replicates, i.e. between df[1,] and df[3,], df[1,] and df[5,], etc. If your replicates are good they should all have high positive correlation - if any are negatively correlated then that's evidence for swapping the numerator and denominator.
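In R, assuming df holds one replicate per row as in that notation, those pairwise correlations can be checked with, e.g.:
cor(t(as.matrix(df[c(1, 3, 5), ])))   # correlation matrix between replicates 1, 3 and 5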
49,833 | lm.ridge returns different results that are from manual calculation | To deal with the scaling problem, I suggest you replace lambda with lambda*(n/(n-1)). This will resolve the discrepancies.
In your example, it would be lam*(100/99).
library(MASS)                      # lm.ridge() lives in MASS
set.seed(1)
x <- rnorm(1000, 1, 2)
x <- matrix(x, ncol=10, nrow=100)
y <- rnorm(100, 2, 5)
xs <- scale(x, TRUE, TRUE)
ys <- scale(y, TRUE, TRUE)
p <- dim(x)[2]
lam <- 2
# manual calculation
bh <- solve(t(xs) %*% xs + lam * diag(p), t(xs) %*% ys)
# lm.ridge
fit <- lm.ridge(ys ~ xs - 1, lambda=lam*(100/99))
coef_fit <- as.matrix(coef(fit))
cbind(bh, coef_fit)
This produces exactly the same two columns as below.
[,1] [,2]
xs1 -0.144767582 -0.144767582
xs2 -0.114627840 -0.114627840
xs3 -0.019612430 -0.019612430
xs4 0.007292303 0.007292303
xs5 0.044335298 0.044335298
xs6 -0.034135483 -0.034135483
xs7 0.020260806 0.020260806
xs8 0.058511001 0.058511001
xs9 -0.124643955 -0.124643955
xs10 0.060076729 0.060076729
Edit 1: How does it work?
Figuring out what lm.ridge() does here helps us to understand it.
It is assumed that x has been scaled and centered
The manual calculation of coefficients is
$\hat \beta = ( X'X+\lambda I)^{-1} X'Y$.
lm.ridge() uses scaled X $X_s=\sqrt{\frac{n}{n-1}}X$
fit$coef returns the vector :
$$\begin{aligned} \hat \beta &= ( X_s'X_s+\lambda I)^{-1} X_s'Y\\
&= \left( X'X+\frac{n-1}{n}\lambda I\right)^{-1} X'Y \sqrt{\frac{n-1}{n}}
\end{aligned}$$
fit$scale returns the vector whose elements are all $\sqrt{\frac{n-1}{n}}$ and which has the same length as $\hat \beta$
And coef(fit) returns the value of fit$coef/fit$scale .
So to get the same value, you should replace lambda with lambda*(n/(n-1)).
49,834 | Bayesian A/B testing with uniform prior | You seem to have answered the new formulation of your question yourself, in the comments. A primary appeal of Bayesian analysis of an experiment (which is what marketers call an A/B test) is that it lets you answer probabilistic questions about population values, such as "What is the probability that treatment A is better than treatment B?". This works even if you don't have rich prior information about the effects of A and B. (You still need to choose some prior distribution, though, and your choice between various so-called uninformative priors can be surprisingly consequential.) In a frequentist approach, by contrast, the true effects of the treatments are fixed, not random, so it makes no sense to ask about the probability that one is greater than the other.
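As a concrete illustration of that kind of probabilistic statement, here is a Monte Carlo sketch under uniform Beta(1,1) priors; the counts xA/nA and xB/nB (successes and trials per variant) are made-up numbers, not from the question:
xA <- 120; nA <- 1000; xB <- 100; nB <- 1000
draws_A <- rbeta(1e5, 1 + xA, 1 + nA - xA)   # posterior of theta_A under a uniform prior
draws_B <- rbeta(1e5, 1 + xB, 1 + nB - xB)   # posterior of theta_B under a uniform prior
mean(draws_A > draws_B)                      # Monte Carlo estimate of Pr(theta_A > theta_B)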
49,835 | Bayesian A/B testing with uniform prior | I can think of two reasons that you might want to do this:
The posterior distribution $\mathbb{P}(\theta|D)$ gives us a detailed view of our knowledge about the conversion rate, $\theta$. It allows us to visualize our knowledge about the parameter's value, compare the similarity of two test variants, and use decision theory to estimate the expected value under each test.
If we are interested in maximizing our returns while running the test, we can use methods like Thompson sampling to acquire data about which test variant is better without the test becoming too expensive. This is not always the case, but for some expensive tests it may be preferable.
The posterior distribution $\mathbb{P}(\theta|D)$ gives us a detailed view of our knowledge about the conversation rate, $\theta$. It allows | Bayesian A/B testing with uniform prior
I can think of two reasons that you might want to do this:
The posterior distribution $\mathbb{P}(\theta|D)$ gives us a detailed view of our knowledge about the conversation rate, $\theta$. It allows us to visualize our knowledge about the parameter's value, compare the similarity of two test variants, and use decision theory to estimate the expected value under each test.
If we are interested in maximizing our returns while running the test, we can use methods like Thompson sampling to acquire data about which test variant is better without the test becoming too expensive. This is not always the case, but for some expensive tests it may be preferable. | Bayesian A/B testing with uniform prior
I can think of two reasons that you might want to do this:
The posterior distribution $\mathbb{P}(\theta|D)$ gives us a detailed view of our knowledge about the conversation rate, $\theta$. It allows |
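For illustration, a rough sketch in R of the Thompson sampling idea with uniform Beta(1, 1) priors; the true rates and the number of visitors below are invented:
set.seed(2)
true_rate <- c(A = 0.10, B = 0.12)   # unknown in practice; invented here
succ <- fail <- c(A = 0, B = 0)
for (visitor in 1:5000) {
  # one draw from each arm's current Beta posterior
  draw <- rbeta(2, 1 + succ, 1 + fail)
  arm  <- which.max(draw)            # show the variant with the larger draw
  converted <- rbinom(1, 1, true_rate[arm])
  succ[arm] <- succ[arm] + converted
  fail[arm] <- fail[arm] + 1 - converted
}
succ + fail          # most traffic ends up on the better variant
succ / (succ + fail) # posterior-mean-ish estimates of the two rates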
49,836 | choosing lambda for multi-response lasso in glmnet | Short answer for the simplest case (no intercept, no standardization)
library(glmnet)
set.seed(125)
n <- 50
p <- 5
k <- 2
X <- matrix(rnorm(n * p), ncol=p)
y <- matrix(rnorm(n * k), ncol=k)
max(glmnet(X, y, family="mgaussian",
standardize = FALSE,
standardize.response = FALSE,
intercept=FALSE)$lambda)
max(sqrt(rowSums(crossprod(X, y)^2))/n)
If you want to add intercept handling / standardization of $X$ and/or $y$, see discussion elsewhere on this site.
So where does this come from?
Let's review the standard lasso case first:
$$\text{arg min}_{\beta} \frac{1}{2n} \|y - X\beta\|_2^2 + \lambda \|\beta\|_1$$
The stationarity condition from the KKT conditions says that we must have
$$ 0 \in \frac{-X^T}{n}(y-X\beta) + \lambda \partial \|\beta\|_1 $$
The first term on the RHS is just the gradient of the smooth $\ell_2$-loss; the second term is the so-called subdifferential of the $\ell_1$-norm. It arises from an interesting alternate characterization of derivatives for convex functions which can be extended to non-smooth case quite easily. (See below for some details) For now, it's sufficient to know that it's a set of numbers and we need 0 to be in that set at the solution.
We are interested in the case where $\beta = 0$. In this case, the subdifferential of the absolute value function $\partial |\beta|_{\beta = 0}$ is the set $[-1, 1]$, so the subdifferential of the vector $\ell_1$-norm is $\partial \|\beta\|_1 = [-1, 1]^p$. That is, the set of all vectors in the $p$-dimensional hypercube.
Hence, we need to be able to find some $s \in [-1, 1]^p$ satisfying
$$0 = -\frac{X^Ty}{n} + \lambda s$$
for $\beta = 0$ to be a solution. Rearranging, we get
$$\frac{X^Ty}{n} = \lambda s$$
This is a set of vector equations with $p$ elements on each side, so let's look at just the first one:
$$\frac{(X^Ty)_1}{n} = \frac{X^T_1 y}{n} = \lambda s_1$$
We know that $s_1$ is in $[-1, 1]$, so $\lambda s_1$ is in $[-\lambda, \lambda]$. Hence, for this equation to hold, we must have $|X^T_1y/n| \leq \lambda$.
By symmetry, this must hold for all $i$, so we have $\max |X^T_iy/n| = \|X^Ty/n\|_{\infty} \leq \lambda$. Taking the smallest $\lambda$ that satisfies this inequality, we get
$$\lambda_{\max} = \left\|\frac{X^Ty}{n}\right\|_{\infty}$$
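For the single-response case this is easy to check numerically, reusing the X and y simulated in the code at the top of this answer (taking just the first response column):
y1 <- y[, 1]                    # single response
max(abs(crossprod(X, y1)) / n)  # ||X^T y / n||_infinity
max(glmnet(X, y1, family = "gaussian",
           standardize = FALSE, intercept = FALSE)$lambda)
# these two values should agree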
Now, let's consider the group lasso penalty for the multi-response Gaussian, defined as
$$\text{arg min}_{B} \frac{1}{2n}\|Y - XB\|_2^2 + \lambda \|B\|_{1, 2}$$
where $Y \in \mathbb{R}^{n \times k}, X \in \mathbb{R}^{n \times p}$, and $B \in \mathbb{R}^{p \times k}$ and the penalty $\|B\|_{1, 2}$ is the $\ell_1/\ell_2$-mixed norm given by
$$ \|B\|_{1, 2} = \sum_{i=1}^p \|B_i\|_2 = \sum_{i=1}^p \sqrt{\sum_{j=1}^k B_{ij}^2} $$
We can do an analysis like before, but here we need the subdifferential of the $\ell_1/\ell_2$-mixed norm. It can be shown (below) that it is given by the $p$-fold Cartesian product of $k$-vectors with $\ell_2$-norm at most 1.
As before, we get:
$$\frac{(X^TY)_i}{n} = \lambda s_i$$
except here the LHS is a $k$-vector and $s_i$ is a $k$-vector with norm at most 1. Since $\|s_i\|_2 \leq 1$, for there to be any solution we must have $\|(X^TY)_i/n\|_2 \leq \lambda$. Again, taking the max over all $i$, we get
$$\max_i \|(X^TY)_i/n\|_2 = \|X^TY/n\|_{\infty, 2} \leq \lambda$$ so
$$ \lambda_{\max} = \|X^TY/n\|_{\infty, 2}$$
This is what we calculated above as max(sqrt(rowSums(crossprod(X, y)^2))/n).
So where does this all come from? Let's start by noting an important fact about convex functions: their first-order Taylor expansions under-estimate them. That is, if $f$ is sufficiently smooth and convex, then the first-order Taylor expansion of $f$ around $x$:
$$\tilde{f}_x(y) = f(x) + f'(x)(y-x)$$
will underestimate $f$:
$$ \tilde{f}_x(y) \leq f(y), \quad \forall x, y $$
This follows from Taylor's remainder theorem, which says that the error is of the form $f''(\xi)(y-x)^2/2$ for some $\xi$ between $x$ and $y$; if $f$ is convex, then $f'' \geq 0$ everywhere, hence the first-order expansion underestimates.
If we turn this around, we can say that for a convex function $f$ (even one that is not differentiable), we can find $c$ such that the affine function
$$\tilde{f}_x(y) = f(x) + c(y-x)$$
still underestimates $f$, that is, $\tilde{f}_x(y) \leq f(y)$ for all $y$.
Any $c$ satisfying this is called a subgradient of $f$ at $x$ and the set of all such $c$ is called the subdifferential of $f$ at $x$. If $f$ is differentiable, then there is only one subgradient which is just the gradient (and the subdifferential is just a set with one element), but if $f$ is not differentiable, then there are multiple possible subgradients (and hence a large subdifferential).
The classic example is $f(x) = |x|$, which is clearly non-differentiable at $0$. It is easy to see that any $c \in [-1, 1]$ is a subgradient:
In the plot that accompanies this example (not reproduced here), any of the blue lines (Taylor-type approximations using a subgradient) lie everywhere below the red line (the function).
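A minimal base-R sketch that redraws that picture:
# f(x) = |x| in red, with several subgradient lines through the origin in blue
curve(abs(x), from = -2, to = 2, col = "red", lwd = 2, ylab = "f(x)")
for (s in seq(-1, 1, by = 0.25)) abline(a = 0, b = s, col = "blue", lty = 2)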
For the general $\ell_1$ norm on $\mathbb{R}^p$, it's not hard to see that the subdifferential is just the $p$-fold Cartesian product of the univariate subdifferential since the $\ell_1$-norm is separable across entries.
If we want to consider more general norms (e.g., the mixed $\ell_{1, 2}$ norm), we can invoke a more general theorem characterizing the subdifferential of norms.
Let $\|\cdot\|$ be a general norm. Then its subdifferential is given by
$$\partial \|x\| = \{v : v^Tx = \|x\|, \|v\|_{*} \leq 1\}$$
where $\|v\|_{*} = \max_{z: \|z\| \leq 1} v^Tz$ is a different norm called the dual norm to $\|\cdot\|$. Evaluated at $x = 0$, we see that the subdifferential is the set of all vectors with dual norm at most 1.
One direction of the proof is easy: suppose $v$ is an element of the RHS above. Then consider the Taylor type expansion of $f(\cdot) = \|\cdot\|$ around $x$ evaluated at $y$ with (potential) subgradient $v$:
$$\begin{align*}
\tilde{f}_x(y) &= \|x\| + v^T(y - x) \\ &= \|x\| + v^Ty - v^Tx \\ & = v^Ty + \underbrace{\|x\| - v^Tx}_{=0 \text{ by assumption on $v$}} \\ &= v^Ty \\ & \leq \|v\|_*\|y\| \quad \text{(Holder's Inequality)} \\ &= \|y\|\end{align*}$$
so $v$ is indeed a subgradient of $f$.
Hence, finding the subdifferential reduces to calculating the dual norm. Fortunately, dual norms are exceptionally useful and hence well-studied. The simplest case is the standard $\ell_p$ norms where it can be shown that the dual of the $\ell_p$-norm is the $\ell_{p^*}$-norm where $p^*$ is the so-called Holder conjugate of $p$ and satisfies $1/p + 1/p^* = 1$.
For the $\ell_1$-case, $p=1$ and so $1/p + 1/p^* = 1 \implies 1/p^* = 1 - 1/1 = 0 \implies p^* = \infty$. Hence the subdifferential at zero is just the set of vectors with $\|v\|_{\infty} \leq 1$, which is exactly the set $[-1, 1]^p$ we claimed above.
For mixed-norms, the result is almost as easy. See Lemma 3 of [Sra12] for a proof of the fact that the dual of the $\|\cdot\|_{p, q}$ mixed-norm is simply $\|\cdot\|_{p^*,q^*}$ where $p^*,q^*$ are the Holder conjugates of $p, q$ respectively.
We hence note that the dual of the $\|\cdot\|_{1, 2}$ norm used for the group lasso is $\|\cdot\|_{\infty, 2}$ (because $1/2 + 1/2 = 1$) which has an associated unit ball which is the $p$-fold Cartesian product of vectors with Euclidean ($\ell_2$) norm at most 1, as claimed above.
[Sra12] "Fast projections onto mixed-norm balls with applications"
Suvrit Sra. ArXiv 1205.1437
49,837 | choosing lambda for multi-response lasso in glmnet | sorry I'm new to the community, still trying to get the hang of it! Thanks for editing my question @F. Tusell. The link to the original article came from: https://web.stanford.edu/~hastie/Papers/glmnet.pdf.
But the paper did not mention much about what was done regarding the multivariate response. The vignette for the R package glmnet (link: https://web.stanford.edu/~hastie/Papers/Glmnet_Vignette.pdf), on page 17, explains what the objective function looks like, incorporating the group LASSO penalty. But I can't seem to be able to reproduce the value of lambda_max used to form the grids.
49,838 | Causal model assumptions - regression adjustment to experiments | There are some things that need clarification here.
Is $Y = \beta_1X + \beta_2R + \epsilon$ a structural equation? That is, do you believe the structural relationship between the variables you listed and the outcome is truly linear?
If this is the case, that is, if you truly believe the regression represents the structural equation of the model, then the answer is trivial --- if $R\perp \epsilon$ then you can identify $\beta_2$ regardless of the relationship between $X$ and $\epsilon$ (since you randomized $R$, we also have that $R \perp X$, assuming $X$ is not a collider or mediator --- more on that below).
However, if $Y = \beta_1X + \beta_2R + \epsilon$ is not a structural equation then things are more nuanced.
First you have to define what it is that you want to estimate, since $\beta_2$ is not a structural parameter per se. Usually you want to estimate the average treatment effect (ATE).
The first thing to keep in mind is that, since you performed an experiment, you can obtain the ATE by a simple difference in means, with no need to perform a regression.
Sometimes you want to control for other factors outside the experiment though, in order to reduce the variance of your estimate. When doing regression with experiments, you can still get a consistent estimate of ATE, even if the true relationship is not linear.
But you have to keep some things in mind. As Freedman (2008) has shown, using a finite sample potential outcomes model:
Regression estimates are biased (though the bias gets small with large samples);
The effect on asymptotic precision is not unambiguous: it may improve or make it worse, depending mainly on the balance between treatment and control (if it’s not balanced, it depends on other things which are hard or impossible to measure);
Usual (homoskedastic) estimated standard errors can overstate precision.
However, as Lin (2013) points out, with sufficiently large samples, these problems can be fixed. OLS adjustment cannot hurt asymptotic precision when a full set of treatment-covariate interactions is used. Also, asymptotically valid confidence intervals can be obtained using heteroskedasticity-consistent estimators.
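For illustration, a minimal sketch of that recipe in R, on simulated data with an invented treatment effect, using the sandwich and lmtest packages (assumed to be installed) for heteroskedasticity-consistent standard errors:
library(sandwich); library(lmtest)
set.seed(3)
n <- 500
x <- rnorm(n)                        # pre-treatment covariate
R <- rbinom(n, 1, 0.5)               # randomized treatment
y <- 1 + 0.5 * x + 1 * R + rnorm(n)  # invented data-generating process
xc  <- x - mean(x)                   # center the covariate
fit <- lm(y ~ R * xc)                # full treatment-covariate interaction
coeftest(fit, vcov. = vcovHC(fit, type = "HC2"))["R", ]  # adjusted ATE with HC errors
mean(y[R == 1]) - mean(y[R == 0])    # unadjusted difference in means, for comparison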
Another big problem is that once you start including covariates and trying different specifications of your model, you are doing specification searches. As soon as the researcher tries different sets of covariates looking for a “preferred” specification, the nominal type I error does not hold anymore. So if you are using frequentist statistics (like p-values) to make judgements about your data, you have to keep this in mind.
In short, you can perform multiple regression adjustment to your experiment but: (i) make sure you have enough samples and include full sets of interactions; (ii) use the appropriate standard errors; (iii) always show your reader the unadjusted simple difference in means, which is the more "trustable", model-free estimate.
One final note: even if your $R$ is randomized, you should be careful about which variables you are controlling for. You should not control for colliders, and you should not control for mediators if you're interested in the total effect, for instance.
49,839 | Is bias a frequentist concept or a Bayesian concept? | Suppose there is a model for the data $Y$ that depends on a parameter $\theta$ and, for a particular experiment, there is a true value of the parameter, $\theta_0$.
You develop an estimator $\hat\theta = \hat\theta(Y)$, i.e. the estimator is a function of the data $Y$. Then the bias is
$$ bias(\hat\theta) = E_{Y|\theta_0}[\hat\theta(Y) - \theta_0] $$
where the expectation is taken with respect to the randomness of the data $Y$ for the given true value of the parameter $\theta_0$ (and the subscript on the expectation attempts to make this explicit). As we are talking about an expectation over possible realizations of data, this is a frequentist concept.
In the description above, I have not mentioned how the estimator arises. This estimator could be a method of moments, maximum likelihood, Bayes, or some other estimator. Thus, the concept of bias of an estimator is frequentist, but the estimator itself could arise from a Bayesian analysis.
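To make the distinction concrete, here is a small simulation sketch in R (the prior, sample size and true value are invented): the estimator is Bayesian, a posterior mean under a Beta prior, but its bias is computed the frequentist way, by averaging over repeated datasets generated at a fixed true value.
set.seed(4)
theta0 <- 0.3                 # fixed true parameter value
n      <- 20
a <- b <- 3                   # Beta(3, 3) prior, chosen arbitrarily
est <- replicate(1e5, {
  y <- rbinom(1, n, theta0)   # one realization of the data
  (y + a) / (n + a + b)       # posterior mean = Bayes estimator
})
mean(est) - theta0            # estimated (frequentist) bias of this estimator at theta0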
49,840 | Word2vec that can distinguish words with different meanings | You're right that word2vec can't distinguish between 'palm' the tree and 'palm' the part of a hand, and related problems. More broadly, it struggles to handle polysemy and homonymy.
The typical way to address this is to learn word sense embeddings instead of word embeddings. In general, this requires assigning word tokens to sense categories to learn embeddings for each meaning ("sense") that the word may have. (Basically, you learn separate vectors for 'palm₁' and 'palm₂'.) An excellent survey of word sense embedding techniques is "From Word To Sense Embeddings: A Survey on Vector Representations of Meaning" (Camacho-Collados and Pilehvar, 2018).
49,841 | Word2vec that can distinguish words with different meanings | If you wanted to naively encode a word as a vector, the easiest way would be one-hot encoding, where each separate word becomes a bit entry in a vector. The only thing that word2vec does is that it compresses this one-hot vector into a lower dimension. So the word 'book' will have a vector representation with fewer dimensions than the size of your vocabulary.
The nice thing about word2vec is that when a word changes context based on its surroundings, you can get the difference through vector addition. For instance vec(king) - vec(man) + vec(woman) = vec(queen). So if you wanted to handle your case, you'd express context as the following:
vec('book') + vec('play') vs. vec('book') + vec('ticket')
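For illustration only, a toy sketch in R of combining such vectors and comparing the results by cosine similarity; the three-dimensional 'embeddings' below are invented and carry no linguistic meaning:
cosine <- function(u, v) sum(u * v) / sqrt(sum(u^2) * sum(v^2))
# invented toy vectors
book   <- c(0.9, 0.5, 0.1)
play   <- c(0.1, 0.9, 0.0)   # theatre-ish context
ticket <- c(0.8, 0.1, 0.6)   # purchase-ish context
cosine(book + play, book + ticket)  # the two 'book' contexts are no longer identical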
49,842 | EM algorithm is always used for mixture copula | There are many varieties of EM algorithms. In some of the problems I have worked on, fancy EM algorithms are absolutely state of the art.
That being said, in regards to mixture models, the "vanilla EM" algorithm, which I am pretty sure you are referring to, has two major advantages: it is extremely easy to implement and extremely numerically stable. As such, if you are exploring new models, this makes the vanilla EM algorithm extremely attractive. If using this vanilla EM algorithm allows you to fit your model, and you've shown that your new model is much better than previous models, then it makes sense to start designing a fancier algorithm that computes your estimate faster if the basic algorithm is prohibitively slow.
Basically, the simplest EM algorithm is often used when fitting new models because "premature optimization is the root of all evil". My guess is that in the copula mixture model papers you've looked at, the authors are trying to demonstrate the superiority of their model, not the computational efficiency of their algorithm. As such, they provide the simplest algorithm to do the job.
49,843 | Feed Forward Neural Network - How to Visualize the Weight Matrix? | You're right, and there should be 50 images. You could easily verify this by:
[coef.shape for coef in mlp.coefs_[0]]
where mlp is the trained MLP classifier in the example.
So here are the two things that caused confusion:
Clearly, the author of the example did not mention anything about why only 16 images are shown.
In Python, when you zip two things in the for loop that don't have equal length (number of items), as with zip(mlp.coefs_[0].T, axes.ravel()) in this example, Python will automatically ignore the extra items in the bigger one. Here axes.ravel() has only 16 items; therefore, the loop iterates over only the first 16 vectors in mlp.coefs_[0].T.
49,844 | Using Rule of Three to obtain confidence interval for a binomial population | The procedure described in the question is intuitive, clear, and accurate.
Problem Formulation
Formally, this is a hypergeometric sampling problem: in a population of $N=1000$ subjects, of which $K$ are in Class 1 and $N-K$ are in Class 2, a sample of size $n=50$ is taken without replacement and it is observed that all $n$ of them are in Class 1. A $95\%$ lower confidence limit $K_{0.95}$ for $K$ is the smallest value that is consistent with these data in the sense that if $K$ were any less than $K_{0.95}$, then the chance that every member of the sample is in Class 1 (as it turned out to be) would be less than $1 - 0.95 = 0.05 = \alpha$, which would be implausible.
Solution
This chance, as a function of the unknown $K$, is easy to compute. Because the sample of $n$ can be taken one at a time, and each time the values of both $K$ and $N$ decrease by $1$, it is equal to the product of the individual chances of observing a subject in Class 1:
$$P(K,n,N) = \frac{K}{N} \times \frac{K-1}{N-1} \times \cdots \times \frac{K-n+1}{N-n+1}.$$
This is a product of a sequence of decreasing fractions. Since $n\ll N$, the obvious bounds (based on replacing each term by the first fraction $K/N$ on the one hand and the first fraction that has been omitted, $(K-n)/(N-n)$, on the other hand) give an excellent approximation:
$$\left(\frac{K-n}{N-n}\right)^n \lt P(K,n,N) \lt \left(\frac{K}{N}\right)^n.$$
The value of $K_{0.95}$ will therefore lie between the solutions $K$ to
$$n\log\left(\frac{K-n}{N-n}\right) \lt \log(\alpha) \lt n\log\left(\frac{K}{N}\right),$$
given by
$$n + (N-n)(1 - 3/n) \approx n + (N-n)(1 + \log(\alpha)/n) \gt K;\\K \gt N \exp(\log(\alpha)/n) \approx N \exp(-3/n).$$
(The appearance of $3$ as the approximation to $-\log(0.05)= 2.9957\ldots$ is the basis for this "Rule of Three".) With $N=1000$ and $n=50$ we have
$$941.764 \lt K_{0.95} \lt 943.082$$
(and these bounds are not appreciably changed by using $3$ instead of $-\log(0.05)$).
The right hand value (upper bound) is the value proposed in the question. In fact, the precise solution is $K_{0.95} = 943$ because
$$P(943, 50, 1000) = 0.04924 \lt 0.05 \le 0.051099 = P(944, 50, 1000).$$
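The quantities quoted above are easy to reproduce in R:
P <- function(K, n, N) prod((K - 0:(n - 1)) / (N - 0:(n - 1)))
P(943, 50, 1000)   # approximately 0.04924
P(944, 50, 1000)   # approximately 0.05110
# equivalently, via the hypergeometric distribution:
dhyper(50, m = 943, n = 1000 - 943, k = 50)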
49,845 | What's the difference between Leave-One-Out and K-Fold Cross validation? | Leave-one-out fits the model with k-1 observations and classifies the remaining observation left out. It differs from your description because this process is repeated another k-1 times with a different observation left out. You can learn about this from the original paper by Lachenbruch and Mickey in 1968. In my answer I am treating k as the full sample size. In k-fold cross-validation it has a different meaning.
49,846 | What's the difference between Leave-One-Out and K-Fold Cross validation? | In the LOOCV method we divide the dataset so that one data point is the test data while all the remaining data points are our training data. We then validate our model by using these n-1 training points against the 1 test point. We perform n iterations like this, with the single test point being moved forward each time and the remaining n-1 points being our new training data. This is suitable in time series analysis. We then find the average of the n RMSE values obtained. In the k-fold method, on the other hand, we divide the entire dataset into k folds; one fold will be the test fold and the other k-1 folds will be the training folds. We then validate our model by training on the k-1 training folds and testing against the 1 test fold. We do k such iterations and average the k RMSE values. The test fold here moves backward and forward, hence it cannot be used in time series analysis since it messes up the time order. Please somebody correct me if I am wrong somewhere.
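For illustration, a bare-bones sketch in R of the two procedures being described, on simulated data and using the RMSE of a simple linear model:
set.seed(5)
d <- data.frame(x = rnorm(60)); d$y <- 1 + 2 * d$x + rnorm(60)
n <- nrow(d)
# leave-one-out: n fits, each leaving out a single observation
loo_err <- sapply(1:n, function(i) {
  fit <- lm(y ~ x, data = d[-i, ])
  d$y[i] - predict(fit, newdata = d[i, ])
})
sqrt(mean(loo_err^2))   # LOOCV RMSE
# k-fold: k fits, each leaving out one fold
k <- 5
fold <- sample(rep(1:k, length.out = n))
cv_rmse <- sapply(1:k, function(j) {
  fit <- lm(y ~ x, data = d[fold != j, ])
  sqrt(mean((d$y[fold == j] - predict(fit, newdata = d[fold == j, ]))^2))
})
mean(cv_rmse)           # k-fold CV RMSE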
49,847 | Relation between Wiener and Kalman filtering | Dan Simon, in his book Optimal State Estimation, discusses this quite comprehensively.
Until 1960, Wiener filtering was the state of the art in signal estimation. The paradigm of signal estimation was shattered with the publication of Rudolph Kalman’s work and related papers in the early 1960s, but it is still worthwhile understanding Wiener filtering because of its historical place in the history of signal estimation. Furthermore, Wiener filtering is still very useful in signal processing and communication theory.
And, in a later chapter:
Wiener filtering addresses the problem of designing a linear, time invariant filter to extract a signal from noise, approaching the problem from the frequency domain perspective. Norbert Wiener invented his filter as part of the World War II effort for the United States. He published his work on the problem in 1942, but it was not available to the public until 1949 [Wie64]. His book was known as the “yellow peril” because of its mathematical difficulty and its yellow cover [Deu65, page 176]. Andrey Kolmogorov actually solved a more general problem earlier (1941), and Mark Krein also worked on the same problem (1945). Kolmogorov’s and Krein’s work was independent of Wiener’s work, and Wiener acknowledges that Kolmogorov’s work predated his own work [Wie56]. However, Kolmogorov’s and Krein’s work did not become well known in the Western world until later, since it was published in Russian [Kol41]. A nontechnical account of Wiener’s work is given in his autobiography [Wie56]. [...] The Wiener filter is based on frequency domain analyses, whereas the Kalman filter that we derive later is based on time domain analyses. Nevertheless, both filters are optimal under their own assumptions. Some problems are solvable by both the Wiener and Kalman filter methods, in which case both methods give the same result.
And finally, in the appendix:
Wiener and Kolmogorov’s work in the 1940s was similar to the Kalman filter (see Section 3.4). However, their work did not arise within the context of state-space theory. It is more statistical in nature than Kalman filtering, and requires knowledge of covariances such as $E(z_iz_j^T)$ and $E(y_iz_j^T)$. In order to implement a Wiener filter in a closed form, the theory assumes that the state and measurements are stationary random processes. Furthermore, Wiener filtering is a steady-state process; that is, it assumes that the measurements have been generated from the infinite past. The 1950s saw a lot of work on relaxing the assumptions of the Wiener filter [Zad50, Boo52]. NASA spent several years investigating Wiener theory in the 1950s, but could not see any practical way to implement it in space navigation problems [Sch81].
Later in the 1950s, work began on replacing the covariance knowledge required by the Wiener filter with state-space descriptions. The results of this work were algorithms that are very close to the Kalman filter as we know it today. Work in this direction at Johns Hopkins University was motivated by missile tracking and appeared in unpublished work as early as 1956 [Spa88]. Peter Swerling’s work at the RAND Corporation in the late 1950s was motivated by satellite orbit estimation [Swe59]. Swerling essentially developed (and published in 1959) the Kalman filter for the case of noise-free system dynamics. Furthermore, he considered nonlinear system dynamics and measurement equations (because of his application). Similar to the dispute between Gauss and Legendre regarding credit for the development of least squares, there has been a smaller dispute regarding credit for the development of the Kalman filter.
...and then he goes on to discuss all the people who essentially developed the Kalman filter before Kalman.
49,848 | Can someone explain the Fisher transformation and why it is used in layman's terms? | The Fisher transformation https://en.wikipedia.org/wiki/Fisher_transformation of an estimated correlation coefficient $r$ is
$$
z= \frac12 \ln\left(\frac{1+r}{1-r}\right).
$$
It is an approximate variance-stabilizing transform, so that its variance, which is about $\frac{1}{N-3}$ where $N$ is the sample size, does not depend on the true underlying value of the correlation coefficient. This can be used to construct a confidence interval for the correlation coefficient $\rho$.
A modern alternative would be to use the bootstrap. One of the advantages of the bootstrap, according to Efron, is that it can "find" a variance-stabilizing transform like that above "automatically".
To construct the confidence interval, use the approximation, for sufficiently large $N$, that
$$
Z \stackrel{\text{a}}{\sim} \text{N}\left(\frac12\ln\left(\frac{1+\rho}{1-\rho}\right),\frac{1}{N-3}\right),
$$
to find a confidence interval (on the $z$ scale) of the form $(z-q\frac1{\sqrt{N-3}},z+q\frac1{\sqrt{N-3}})$ where $q$ is the appropriate normal quantile, and invert it by using the inverse function $g$ of the Fisher transform,
$$ g(z)=\frac{e^{2z}-1}{e^{2z}+1},
$$
thus obtaining the confidence interval for the correlation coefficient.
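In R this is straightforward, since atanh and tanh are exactly the Fisher transform and its inverse; for illustration (the sample correlation and sample size below are invented):
r <- 0.6; N <- 50                       # invented sample correlation and sample size
z  <- atanh(r)                          # Fisher transform: 0.5 * log((1 + r) / (1 - r))
se <- 1 / sqrt(N - 3)
tanh(z + c(-1, 1) * qnorm(0.975) * se)  # approximate 95% CI for the correlation coefficient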
49,849 | Why Adam and batch normalization are considered approximating second-order behavior? | It's kind of an imprecise statement, so it's hard to give a firm answer. Momentum and normalisation methods such as Adam, (diagonal-)AdaGrad and batch-normalization are (effectively) using diagonal approximations to the Hessian. Obviously, that's a very crude approximation, but it is approximating second-order (Hessian) information.
I would associate second order methods with estimation of curvature, which is not something that can be done with diagonal approximations. IMHO it's too strong a statement to say they are approximating second order information. | Why Adam and batch normalization are considered approximating second-order behavior? | It's kind of an imprecise statement, so It's hard to give a firm answer. Momentum and normalisation methods such as Adam, (diagonal-)AdaGrad and batch-normalization are (effectively) using diagonal ap | Why Adam and batch normalization are considered approximating second-order behavior?
It's kind of an imprecise statement, so it's hard to give a firm answer. Momentum and normalisation methods such as Adam, (diagonal-)AdaGrad and batch-normalization are (effectively) using diagonal approximations to the Hessian. Obviously, that's a very crude approximation, but it is approximating second-order (Hessian) information.
I would associate second order methods with estimation of curvature, which is not something that can be done with diagonal approximations. IMHO it's too strong a statement to say they are approximating second order information. | Why Adam and batch normalization are considered approximating second-order behavior?
It's kind of an imprecise statement, so It's hard to give a firm answer. Momentum and normalisation methods such as Adam, (diagonal-)AdaGrad and batch-normalization are (effectively) using diagonal ap |
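For concreteness, here is a minimal sketch (my own, following the standard Adam update of Kingma & Ba) showing where the diagonal rescaling mentioned above enters; nothing here is specific to any particular framework:

    import numpy as np

    def adam_step(theta, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
        # Elementwise (i.e. diagonal) running estimates of the first and second gradient moments.
        m = b1 * m + (1 - b1) * grad
        v = b2 * v + (1 - b2) * grad ** 2
        m_hat = m / (1 - b1 ** t)
        v_hat = v / (1 - b2 ** t)
        # Dividing by sqrt(v_hat) rescales each coordinate separately -- the sense in which
        # the answer above describes Adam as a crude, diagonal, second-order-like preconditioner.
        theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
        return theta, m, v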
49,850 | MCMC: invalid covariance matrix due to numerical error | You can add a tiny epsilon to the diagonal, say $1e-8$ or similar, to the covariance matrix.
So, let's say your covariance matrix is $\mathbf{\Sigma}$, and when you do anything that involves $\text{inv}(\mathbf{\Sigma})$, or similar, it doesn't work very well, because your $\mathbf{\Sigma}$ is not (numerically) positive definite. You can therefore update your $\mathbf{\Sigma}$ as follows:
$$
\mathbf{\Sigma}' = \mathbf{\Sigma} + 10^{-8} \mathbf{I}
$$
Now your new $\mathbf{\Sigma}'$ matrix is positive definite, and your numerical calculations will work much more smoothly :-)
I've used this technique in the past, and it works very well for me. | MCMC: invalid covariance matrix due to numerical error | You can add a tiny epsilon to the diagonal, say $1e-8$ or similar, to the covariance matrix.
So, let's say your covariance matrix is $\mathbf{\Sigma}$. And when you do stuff that involves $\text{inv}( | MCMC: invalid covariance matrix due to numerical error
You can add a tiny epsilon to the diagonal, say $1e-8$ or similar, to the covariance matrix.
So, let's say your covariance matrix is $\mathbf{\Sigma}$, and when you do anything that involves $\text{inv}(\mathbf{\Sigma})$, or similar, it doesn't work very well, because your $\mathbf{\Sigma}$ is not (numerically) positive definite. You can therefore update your $\mathbf{\Sigma}$ as follows:
$$
\mathbf{\Sigma}' = \mathbf{\Sigma} + 10^{-8} \mathbf{I}
$$
Now your new $\mathbf{\Sigma}'$ matrix is positive definite, and your numerical calculations will work much more smoothly :-)
I've used this technique in the past, and it works very well for me. | MCMC: invalid covariance matrix due to numerical error
You can add a tiny epsilon to the diagonal, say $1e-8$ or similar, to the covariance matrix.
So, let's say your covariance matrix is $\mathbf{\Sigma}$. And when you do stuff that involves $\text{inv}( |
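A short sketch of the suggestion above (Python/numpy assumed); the example matrix is made rank-deficient on purpose so that an unregularized Cholesky factorization would fail:

    import numpy as np

    def add_jitter(cov, eps=1e-8):
        # Sigma' = Sigma + eps * I, as in the update above.
        return cov + eps * np.eye(cov.shape[0])

    A = np.random.randn(5, 3)
    cov = A @ A.T                              # rank 3, so not positive definite in 5 dimensions
    L = np.linalg.cholesky(add_jitter(cov))    # succeeds once the diagonal is nudged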
49,851 | Negative autocorrelation in linear regressions : examples and consequences | So I have wandered online and found some examples of negative autocorrelation :
If you've ever seen a row of cabbages growing in a garden, you'll frequently notice an alternating pattern--big cabbage, little cabbage, big cabbage, little cabbage, etc. This happens because one cabbage might have a slight edge in growth. It extends into its neighbor's space, stealing water and nutrition for itself. Because of this slight competitive edge, the one cabbage grows even bigger at the expense of the neighboring cabbage.
If you are looking at the amount of time a doctor spends with successive patients, if the first patient finished faster than expected, you are more likely to adopt a leisurely approach with the second patient. If the first patient takes longer than expected, you are more likely to rush with the second patient, trying to get back on schedule.
In an assembly line process where small pieces are cut from a single large piece, if the first piece is a bit long, the next piece is likely to be a bit short and vice versa.
Still nothing on theoretical results, and I am still willing to have new answers. | Negative autocorrelation in linear regressions : examples and consequences | So I have wandered online and found some examples of negative autocorrelation :
If you've ever seen a row of cabbages growing in a garden, you'll frequently notice an alternating pattern--big cabbag | Negative autocorrelation in linear regressions : examples and consequences
So I have wandered online and found some examples of negative autocorrelation :
If you've ever seen a row of cabbages growing in a garden, you'll frequently notice an alternating pattern--big cabbage, little cabbage, big cabbage, little cabbage, etc. This happens because one cabbage might have a slight edge in growth. It extends into its neighbor's space, stealing water and nutrition for itself. Because of this slight competitive edge, the one cabbage grows even bigger at the expense of the neighboring cabbage.
If you are looking at the amount of time a doctor spends with successive patients, if the first patient finished faster than expected, you are more likely to adopt a leisurely approach with the second patient. If the first patient takes longer than expected, you are more likely to rush with the second patient, trying to get back on schedule.
In an assembly line process where small pieces are cut from a single large piece, if the first piece is a bit long, the next piece is likely to be a bit short and vice versa.
Still nothing on theoretical results, and I am still willing to have new answers. | Negative autocorrelation in linear regressions : examples and consequences
So I have wandered online and found some examples of negative autocorrelation :
If you've ever seen a row of cabbages growing in a garden, you'll frequently notice an alternating pattern--big cabbag |
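As a purely illustrative addition (not from the quoted examples), a short simulation of a series with negative lag-1 autocorrelation, which produces exactly the alternating big/small pattern described above; an MA(1) process with a negative coefficient is assumed:

    import numpy as np

    rng = np.random.default_rng(0)
    e = rng.normal(size=500)
    x = e[1:] - 0.8 * e[:-1]                   # MA(1): x_t = e_t - 0.8 * e_{t-1}

    lag1 = np.corrcoef(x[:-1], x[1:])[0, 1]
    print(round(lag1, 2))                      # about -0.49, i.e. -0.8 / (1 + 0.8**2)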
49,852 | Difference between mean square residual and mean square error | The answer to this question depends on how you define mean squared error (MSE).
In the context of regression, some define it to be
$MSE = \sum (y-\hat{y})^2/(n-p)$
where p is the number of parameters in the regression (including the intercept). Note that since residuals are $y-\hat{y}$, this is equivalent to mean squared residuals (MSR). Note that this formula is generally used because it provides an unbiased estimate of the variance of the errors.
It is important to note that in the context of regression, the residuals are not the actual errors $(\epsilon)$, which are random variables. However, the residuals are estimates of the errors under the assumed model,
$y-\hat{y} = \hat{\epsilon} $
where $y$ is the observed value and $\hat{y}$ is the predicted value. | Difference between mean square residual and mean square error | The answer to this question depends on how you define mean squared error (MSE).
In the context of regression, some define it to be
$MSE = \sum (y-\hat{y})^2/(n-p)$
where p is the number of parameter | Difference between mean square residual and mean square error
The answer to this question depends on how you define mean squared error (MSE).
In the context of regression, some define it to be
$MSE = \sum (y-\hat{y})^2/(n-p)$
where p is the number of parameters in the regression (including the intercept). Note that since residuals are $y-\hat{y}$, this is equivalent to mean squared residuals (MSR). Note that this formula is generally used because it provides an unbiased estimate of the variance of the errors.
It is important to note that in the context of regression, the residuals are not the actual errors $(\epsilon)$, which are random variables. However, the residuals are estimates of the errors under the assumed model,
$y-\hat{y} = \hat{\epsilon} $
where $y$ is the observed value and $\hat{y}$ is the predicted value. | Difference between mean square residual and mean square error
The answer to this question depends on how you define mean squared error (MSE).
In the context of regression, some define it to be
$MSE = \sum (y-\hat{y})^2/(n-p)$
where p is the number of parameter |
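A small numerical sketch of the definition given above (Python/numpy assumed; the function names are mine):

    import numpy as np

    def mse(y, y_hat, p):
        # Degrees-of-freedom-corrected definition from above: sum of squared residuals / (n - p).
        resid = y - y_hat
        return np.sum(resid ** 2) / (len(y) - p)

    def mean_squared_residual(y, y_hat):
        # Plain average of the squared residuals, with no df correction.
        return np.mean((y - y_hat) ** 2)

The two differ only by the factor n/(n - p), which is why the distinction matters mainly in small samples.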
49,853 | Difference between mean square residual and mean square error | I'll try to explain how I see it from statistical point of view.
I don't think MSE and MSR are the same thing (however most people don't differentiate between those two I guess).
Let's say that you do a simulation of data that can be described using a regression model. Let's say that you generate the data "randomly" around a parabola curve, and you have the exact regression function, since you did generate the data around that function.
Then the "random" part, which "secures" that the data are (most likely) not directly on the parabola curve, is actually the errors.
However, in reality, you usually don't know the theoretical regression function. Then you can only estimate such a regression model. The differences between the observed values and your estimated model are then called residuals. So it can be said that the residuals are, in a sense, estimates of the theoretical errors.
Hope this helps. | Difference between mean square residual and mean square error | I'll try to explain how I see it from statistical point of view.
I don't think MSE and MSR are the same thing (however most people don't differentiate between those two I guess).
Let's say that you do | Difference between mean square residual and mean square error
I'll try to explain how I see it from statistical point of view.
I don't think MSE and MSR are the same thing (however most people don't differentiate between those two I guess).
Let's say that you do a simulation of data that can be described using a regression model. Let's say that you generate the data "randomly" around a parabola curve, and you have the exact regression function, since you did generate the data around that function.
Then the "random" part, which "secures" that the data are (most likely) not directly on the parabola curve, is actually the errors.
However, in reality, you usually don't know the theoretical regression function. Then you can only estimate such a regression model. The differences between the observed values and your estimated model are then called residuals. So it can be said that the residuals are, in a sense, estimates of the theoretical errors.
Hope this helps. | Difference between mean square residual and mean square error
I'll try to explain how I see it from statistical point of view.
I don't think MSE and MSR are the same thing (however most people don't differentiate between those two I guess).
Let's say that you do |
49,854 | Difference between mean square residual and mean square error | In short, the mean squared error (MSE) is the square of the RMSE. For the standard linear regression equation Y=a+bX, the MSE equals the sum of squared differences between the actual sample values of X´ and Y´ that are used to fit the linear model, divided by the number of paired samples (n).
For the mean squared residuals (MSR), one should first know the least squares method in linear regression. Simply put, this method minimizes the sum of squared differences between the actual Y and the estimated Y (the sum of squared residuals, SSr), corresponding to ∑(Y-Y´)^2. SSr divided by n-2 equals the MSR. This is mainly useful for assessing the overall significance of the linear regression, and it is also an essential component of the coefficient of determination. By checking which model has the smaller MSR, the performance of different linear regression models can be compared.
From my point of understanding, the MSR accounts for the dispersion of the actual Y and the estimated Y derived from the linear regression (thus considering the Y mean), whereas the MSE is a direct comparison of prediction errors between predictions and observations. | Difference between mean square residual and mean square error | In short, Mean squared error (MSE) is the square of RMSE. For linear regression standard equation: Y=a+bX, considering MSE equals to the sum of squared differences between actual sample values of X´ a | Difference between mean square residual and mean square error
In short, the mean squared error (MSE) is the square of the RMSE. For the standard linear regression equation Y=a+bX, the MSE equals the sum of squared differences between the actual sample values of X´ and Y´ that are used to fit the linear model, divided by the number of paired samples (n).
For the mean squared residuals (MSR), one should first know the least squares method in linear regression. Simply put, this method minimizes the sum of squared differences between the actual Y and the estimated Y (the sum of squared residuals, SSr), corresponding to ∑(Y-Y´)^2. SSr divided by n-2 equals the MSR. This is mainly useful for assessing the overall significance of the linear regression, and it is also an essential component of the coefficient of determination. By checking which model has the smaller MSR, the performance of different linear regression models can be compared.
From my point of understanding, the MSR accounts for the dispersion of the actual Y and the estimated Y derived from the linear regression (thus considering the Y mean), whereas the MSE is a direct comparison of prediction errors between predictions and observations. | Difference between mean square residual and mean square error
In short, Mean squared error (MSE) is the square of RMSE. For linear regression standard equation: Y=a+bX, considering MSE equals to the sum of squared differences between actual sample values of X´ a |
49,855 | Difference between mean square residual and mean square error | There is no difference between the mean square residual and mean square error. | Difference between mean square residual and mean square error | There is no difference between the mean square residual and mean square error. | Difference between mean square residual and mean square error
There is no difference between the mean square residual and mean square error. | Difference between mean square residual and mean square error
There is no difference between the mean square residual and mean square error. |
49,856 | Python package that allows to train a CRF on two datasets | You can use Wapiti (mirror):
Wapiti is a very fast toolkit for segmenting and labeling sequences with discriminative models. It is based on maxent models, maximum entropy Markov models and linear-chain CRF and proposes various optimization and regularization methods to improve both the computational complexity and the prediction performance of standard models. Wapiti is ranked first on the sequence tagging task for more than a year on MLcomp web site.
Wapiti is developed by LIMSI-CNRS and was partially funded by ANR projects CroTaL (ANR-07-MDCO-003) and MGA (ANR-07-BLAN-0311-02).
It is written in standard C99+POSIX and is open source (BSD Licence) (mirror).
It allows a model file to load and to train again. From the manual (mirror)
:
-m | --model <file>
Specify a model file to load and to train again. This allow you
either to continue an interrupted training or to use an old
model as a starting point for a new training. Beware that no new
labels can be inserted in the model. As the training parameters
are not saved in the model file, you have to specify them again,
or specify new one if, for example, you want to continue train-
ing with another algorithm or a different penalty.
There seems to exist some python wrapper for wapiti such as python-wapiti (mirror). | Python package that allows to train a CRF on two datasets | You can use Wapiti (mirror):
Wapiti is a very fast toolkit for segmenting and labeling sequences with discriminative models. It is based on maxent models, maximum entropy Markov models and linear-ch | Python package that allows to train a CRF on two datasets
You can use Wapiti (mirror):
Wapiti is a very fast toolkit for segmenting and labeling sequences with discriminative models. It is based on maxent models, maximum entropy Markov models and linear-chain CRF and proposes various optimization and regularization methods to improve both the computational complexity and the prediction performance of standard models. Wapiti is ranked first on the sequence tagging task for more than a year on MLcomp web site.
Wapiti is developed by LIMSI-CNRS and was partially funded by ANR projects CroTaL (ANR-07-MDCO-003) and MGA (ANR-07-BLAN-0311-02).
It is written in standard C99+POSIX and is open source (BSD Licence) (mirror).
It allows a model file to load and to train again. From the manual (mirror)
:
-m | --model <file>
Specify a model file to load and to train again. This allow you
either to continue an interrupted training or to use an old
model as a starting point for a new training. Beware that no new
labels can be inserted in the model. As the training parameters
are not saved in the model file, you have to specify them again,
or specify new one if, for example, you want to continue train-
ing with another algorithm or a different penalty.
There seems to exist some python wrapper for wapiti such as python-wapiti (mirror). | Python package that allows to train a CRF on two datasets
You can use Wapiti (mirror):
Wapiti is a very fast toolkit for segmenting and labeling sequences with discriminative models. It is based on maxent models, maximum entropy Markov models and linear-ch |
49,857 | Python package that allows to train a CRF on two datasets | You can use NeuroNER:
It implements a bi-directional LSTM + CRF network in TensorFlow
works on Linux/Mac/Windows
written in Python
open source
allows to train a CRF on two datasets with the options use_pretrained_model = True + train_model = True | Python package that allows to train a CRF on two datasets | You can use NeuroNER:
It implements a bi-directional LSTM + CRF network in TensorFlow
works on Linux/Mac/Windows
written in Python
open source
allows to train a CRF on two datasets with the optio | Python package that allows to train a CRF on two datasets
You can use NeuroNER:
It implements a bi-directional LSTM + CRF network in TensorFlow
works on Linux/Mac/Windows
written in Python
open source
allows to train a CRF on two datasets with the options use_pretrained_model = True + train_model = True | Python package that allows to train a CRF on two datasets
You can use NeuroNER:
It implements a bi-directional LSTM + CRF network in TensorFlow
works on Linux/Mac/Windows
written in Python
open source
allows to train a CRF on two datasets with the optio |
49,858 | Proving equation 3.3.12 in Angrist Pischke Mostly Harmless Econometrics (inverse probability weighting formula for ATT effect) | Since $\rho(X_i)=E(D_i|X_i)=P(D_i=1|X_i)$:
$$
E(Y_i D_i|X_i)=\rho(X_i) E(Y_{1i}|X_i) \\
\Rightarrow E(Y_{1i}|X_i)=\frac{1}{\rho(X_i)}E(Y_i D_i|X_i)
$$
similarly,
$$
E(Y_i (1-D_i)|X_i)=(1-\rho(X_i)) E(Y_{0i}|X_i) \\
\Rightarrow E(Y_{0i}|X_i)=\frac{1}{1-\rho(X_i)}E(Y_i (1-D_i)|X_i)
$$
We have
$$
E(Y_{1i} - Y_{0i}\vert X_i)\\
=\frac{1}{\rho(X_i)}E(Y_i D_i|X_i)-\frac{1}{1-\rho(X_i)}E(Y_i (1-D_i)|X_i)\\
=\frac{1}{\rho(X_i)(1-\rho(X_i))}E(Y_iD_i(1-\rho(X_i))-Y_i(1-D_i)\rho(X_i))\\
=\frac{1}{\rho(X_i)}E\bigg[\frac{(D_i - \rho(X_i))Y_i}{(1-\rho(X_i))}\bigg\vert X_i\bigg]
$$
thus,
$$
\rho(X_i)E(Y_{1i} - Y_{0i}\vert X_i)=E\bigg[\frac{(D_i - \rho(X_i))Y_i}{(1-\rho(X_i))}\bigg\vert X_i\bigg]
$$
Then, using the fact that $\rho(X_i)=E(D_i\vert X_i)$, and the conditional independence assumption:
$$
E(D_i(Y_{1i} - Y_{0i})\vert X_i)=E\bigg[\frac{(D_i - \rho(X_i))Y_i}{(1-\rho(X_i))}\bigg\vert X_i\bigg]
$$
Using again, the law of iterated expectation we get:
$$
E(D_i(Y_{1i} - Y_{0i}))=E\bigg[\frac{(D_i - \rho(X_i))Y_i}{(1-\rho(X_i))}\bigg]
$$
and then
$$
E(D_i(Y_{1i} - Y_{0i})\vert D_i=1)prob(D_i=1) + E[0\vert D_i=0]prob(D_i=0)=E\bigg[\frac{(D_i - \rho(X_i))Y_i}{(1-\rho(X_i))}\bigg]
$$
using the fact that $E[E[W\vert Z]]= E[W]$ and
$$
E\bigl[E[W\mid Z]\bigr] = \sum_z E[W\mid Z = z]\cdot p_Z(z)
$$
Lastly, noting that
$prob(D_i=1)$ is just a constant so we can put it inside the expectation, and that $E[0\vert D_i=0]=0$, we can divide both sides by $prob(D_i=1)$ and get the result.
$$
E((Y_{1i} - Y_{0i})\vert D_i=1)
=E\bigg[\frac{(D_i - \rho(X_i))Y_i}{prob(D_i=1)(1-\rho(X_i))}\bigg]
$$
where we don't write $D_i$ on the LHS because it is $1$ (because we condition on $D_i=1$). | Proving equation 3.3.12 in Angrist Pischke Mostly Harmless Econometrics (inverse probability weighti | Since $\rho(X_i)=E(D_i|X_i)=P(D_i=1|X_i)$:
$$
E(Y_i D_i|X_i)=\rho(X_i) E(Y_{1i}|X_i) \\
\Rightarrow E(Y_{1i}|X_i)=\frac{1}{\rho(X_i)}E(Y_i D_i|X_i)
$$
similarly,
$$
E(Y_i (1-D_i)|X_i)=(1-\rho(X_i)) E( | Proving equation 3.3.12 in Angrist Pischke Mostly Harmless Econometrics (inverse probability weighting formula for ATT effect)
Since $\rho(X_i)=E(D_i|X_i)=P(D_i=1|X_i)$:
$$
E(Y_i D_i|X_i)=\rho(X_i) E(Y_{1i}|X_i) \\
\Rightarrow E(Y_{1i}|X_i)=\frac{1}{\rho(X_i)}E(Y_i D_i|X_i)
$$
similarly,
$$
E(Y_i (1-D_i)|X_i)=(1-\rho(X_i)) E(Y_{0i}|X_i) \\
\Rightarrow E(Y_{0i}|X_i)=\frac{1}{1-\rho(X_i)}E(Y_i (1-D_i)|X_i)
$$
We have
$$
E(Y_{1i} - Y_{0i}\vert X_i)\\
=\frac{1}{\rho(X_i)}E(Y_i D_i|X_i)-\frac{1}{1-\rho(X_i)}E(Y_i (1-D_i)|X_i)\\
=\frac{1}{\rho(X_i)(1-\rho(X_i))}E(Y_iD_i(1-\rho(X_i))-Y_i(1-D_i)\rho(X_i))\\
=\frac{1}{\rho(X_i)}E\bigg[\frac{(D_i - \rho(X_i))Y_i}{(1-\rho(X_i))}\bigg\vert X_i\bigg]
$$
thus,
$$
\rho(X_i)E(Y_{1i} - Y_{0i}\vert X_i)=E\bigg[\frac{(D_i - \rho(X_i))Y_i}{(1-\rho(X_i))}\bigg\vert X_i\bigg]
$$
Then, using the fact that $\rho(X_i)=E(D_i\vert X_i)$, and the conditional independence assumption:
$$
E(D_i(Y_{1i} - Y_{0i})\vert X_i)=E\bigg[\frac{(D_i - \rho(X_i))Y_i}{(1-\rho(X_i))}\bigg\vert X_i\bigg]
$$
Using again, the law of iterated expectation we get:
$$
E(D_i(Y_{1i} - Y_{0i}))=E\bigg[\frac{(D_i - \rho(X_i))Y_i}{(1-\rho(X_i))}\bigg]
$$
and then
$$
E(D_i(Y_{1i} - Y_{0i})\vert D_i=1)prob(D_i=1) + E[0\vert D_i=0]prob(D_i=0)=E\bigg[\frac{(D_i - \rho(X_i))Y_i}{(1-\rho(X_i))}\bigg]
$$
using the fact that $E[E[W\vert Z]]= E[W]$ and
$$
E\bigl[E[W\mid Z]\bigr] = \sum_z E[W\mid Z = z]\cdot p_Z(z)
$$
Lastly, noting that
$prob(D_i=1)$ is just a constant so we can put it inside the expectation, and that $E[0\vert D_i=0]=0$, we can divide both sides by $prob(D_i=1)$ and get the result.
$$
E((Y_{1i} - Y_{0i})\vert D_i=1)
=E\bigg[\frac{(D_i - \rho(X_i))Y_i}{prob(D_i=1)(1-\rho(X_i))}\bigg]
$$
where we don't write $D_i$ on the LHS because it is $1$ (because we condition on $D_i=1$). | Proving equation 3.3.12 in Angrist Pischke Mostly Harmless Econometrics (inverse probability weighti
Since $\rho(X_i)=E(D_i|X_i)=P(D_i=1|X_i)$:
$$
E(Y_i D_i|X_i)=\rho(X_i) E(Y_{1i}|X_i) \\
\Rightarrow E(Y_{1i}|X_i)=\frac{1}{\rho(X_i)}E(Y_i D_i|X_i)
$$
similarly,
$$
E(Y_i (1-D_i)|X_i)=(1-\rho(X_i)) E( |
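A Monte Carlo check can make the algebra above more tangible. The sketch below is my own (any simulation design with a known propensity score and conditional independence by construction would do); it compares the two sides of eq. 3.3.12 numerically:

    import numpy as np

    rng = np.random.default_rng(1)
    n = 1_000_000
    X = rng.uniform(size=n)
    rho = 0.2 + 0.6 * X                        # true propensity score P(D=1|X)
    D = rng.binomial(1, rho)
    tau = 1.0 + X                              # heterogeneous effect, so the ATT differs from the ATE
    Y0 = X + rng.normal(size=n)
    Y1 = Y0 + tau
    Y = np.where(D == 1, Y1, Y0)

    att_direct = tau[D == 1].mean()                               # E(Y1 - Y0 | D = 1)
    att_ipw = ((D - rho) * Y / ((1 - rho) * D.mean())).mean()     # RHS of eq. 3.3.12

    print(round(att_direct, 2), round(att_ipw, 2))                # the two agree up to Monte Carlo error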
49,859 | Proving equation 3.3.12 in Angrist Pischke Mostly Harmless Econometrics (inverse probability weighting formula for ATT effect) | I also got confused when facing eq. 3.3.12 yesterday; it seemed not "so" obvious as mentioned in the text. When I searched for answers, the only method was the one above by @Steve and @user106860, thanks a lot. But soon I realized that using $D_i(Y_{1i}-Y_{0i}) = Y_{1i}-Y_{0i}$ under the condition $D_i=1$ seems to be a little tricky and hard to come up with. Moreover, we cannot use this idea to calculate the dual form: $\mathrm{E}(Y_{1i}-Y_{0i}|D_i=0)$. Herein, I list a direct method below, which may be easier to follow:
First of all, we change $p(X_i)$ into the probability form $P(D_i=1|X_i)$ in eq. 3.3.12. Then, our target is to verify:
$$
\mathrm{E}\left(\frac{(D_i-P(D_i=1|X_i))Y_i}{P(D_i=0|X_i)P(D_i=1)}\right) = \mathrm{E}(Y_{1i}-Y_{0i}|D_i=1)
$$
Similar to the idea in verifying the formula of $ATE$ in the footnote in the text, we need to use the Law of iterated expectation on the $LHS$:
$$
\begin{aligned}
LHS = \frac{1}{P(D_i=1)}\mathrm{E}\left(\frac{1}{P(D_i=0|X_i)}\mathrm{E}\left((D_i-P(D_i=1|X_i))Y_i | X_i\right)\right)
\end{aligned}
$$
let $(D_i-P(D_i=1|X_i))Y_i | X_i \equiv \Delta$, then
$$
\begin{aligned}
\mathrm{E}\left(\Delta\right) &= \mathrm{E}\left(\Delta|X_i, D_i=1\right)P(D_i=1|X_i) + \mathrm{E}\left(\Delta|X_i, D_i=0\right)P(D_i=0|X_i) \\
&= (1-P(D_i=1|X_i))\mathrm{E}\left(Y_i |X_i, D_i=1\right)P(D_i=1|X_i) \\ &+ (0-P(D_i=1|X_i))\mathrm{E}\left(Y_i |X_i, D_i=0\right)P(D_i=0|X_i) \\
&= P(D_i=0|X_i)\mathrm{E}\left(Y_i |X_i, D_i=1\right)P(D_i=1|X_i) \\ &- P(D_i=1|X_i)\mathrm{E}\left(Y_i |X_i, D_i=0\right)P(D_i=0|X_i) \\
&= (\mathrm{E}\left(Y_i |X_i, D_i=1\right) - \mathrm{E}\left(Y_i |X_i, D_i=0\right))P(D_i=1|X_i)P(D_i=0|X_i) \\
&\mathop{=========} \limits_{Y_{1i},Y_{0i} \; \perp\!\!\!\!\perp \;D_i|X_i}^{Y_i=Y_{0i}+D_i(Y_{1i}-Y_{0i})} \mathrm{E}\left(Y_{1i}-Y_{0i} |X_i, D_i=1\right)P(D_i=1|X_i)P(D_i=0|X_i)
\end{aligned}
$$
Substitute in the above equation of $LHS$:
$$
\begin{aligned}
LHS &= \frac{1}{P(D_i=1)}\mathrm{E}\left(\frac{1}{P(D_i=0|X_i)}\mathrm{E}\left((D_i-P(D_i=1|X_i))Y_i | X_i\right)\right) \\
&= \frac{1}{P(D_i=1)}\mathrm{E}\left(\mathrm{E}\left(Y_{1i}-Y_{0i} |X_i, D_i=1\right)P(D_i=1|X_i)\right) \\
&= \frac{1}{P(D_i=1)} \sum\limits_{x}\left(\sum\limits_{y}(Y_{1i}-Y_{0i})P(Y_{1i}-Y_{0i}|X_i, D_i=1)P(D_i=1|X_i)P(X_i)\right) \\
&= \frac{1}{P(D_i=1)} \sum\limits_{x}\left(\sum\limits_{y}(Y_{1i}-Y_{0i})P((Y_{1i}-Y_{0i}), X_i, D_i=1)\right) \\
&= \sum\limits_{x}\sum\limits_{y}(Y_{1i}-Y_{0i})P((Y_{1i}-Y_{0i}), X_i| D_i=1) \\
&= \sum\limits_{y}(Y_{1i}-Y_{0i})\left(\sum\limits_{x}P((Y_{1i}-Y_{0i}), X_i| D_i=1)\right) \\
&= \sum\limits_{y}(Y_{1i}-Y_{0i})P(Y_{1i}-Y_{0i}| D_i=1) \\
&= \mathrm{E}(Y_{1i}-Y_{0i}|D_i=1) \\
&= RHS
\end{aligned}
$$
Similarly, we can also verify that
$$
\mathrm{E}\left(\frac{(D_i-P(D_i=1|X_i))Y_i}{P(D_i=1|X_i)P(D_i=0)}\right) = \mathrm{E}(Y_{1i}-Y_{0i}|D_i=0)
$$ | Proving equation 3.3.12 in Angrist Pischke Mostly Harmless Econometrics (inverse probability weighti | I also get confused when facing eq. 3.3.12 yesterday, it seemed not "so" obvious as mentioned in the text. When I searched for answers, the only method was the above by @Steve and @user106860, thanks | Proving equation 3.3.12 in Angrist Pischke Mostly Harmless Econometrics (inverse probability weighting formula for ATT effect)
I also got confused when facing eq. 3.3.12 yesterday; it seemed not "so" obvious as mentioned in the text. When I searched for answers, the only method was the one above by @Steve and @user106860, thanks a lot. But soon I realized that using $D_i(Y_{1i}-Y_{0i}) = Y_{1i}-Y_{0i}$ under the condition $D_i=1$ seems to be a little tricky and hard to come up with. Moreover, we cannot use this idea to calculate the dual form: $\mathrm{E}(Y_{1i}-Y_{0i}|D_i=0)$. Herein, I list a direct method below, which may be easier to follow:
First of all, we change $p(X_i)$ into the probability form $P(D_i=1|X_i)$ in eq. 3.3.12. Then, our target is to verify:
$$
\mathrm{E}\left(\frac{(D_i-P(D_i=1|X_i))Y_i}{P(D_i=0|X_i)P(D_i=1)}\right) = \mathrm{E}(Y_{1i}-Y_{0i}|D_i=1)
$$
Similar to the idea in verifying the formula of $ATE$ in the footnote in the text, we need to use the Law of iterated expectation on the $LHS$:
$$
\begin{aligned}
LHS = \frac{1}{P(D_i=1)}\mathrm{E}\left(\frac{1}{P(D_i=0|X_i)}\mathrm{E}\left((D_i-P(D_i=1|X_i))Y_i | X_i\right)\right)
\end{aligned}
$$
let $(D_i-P(D_i=1|X_i))Y_i | X_i \equiv \Delta$, then
$$
\begin{aligned}
\mathrm{E}\left(\Delta\right) &= \mathrm{E}\left(\Delta|X_i, D_i=1\right)P(D_i=1|X_i) + \mathrm{E}\left(\Delta|X_i, D_i=0\right)P(D_i=0|X_i) \\
&= (1-P(D_i=1|X_i))\mathrm{E}\left(Y_i |X_i, D_i=1\right)P(D_i=1|X_i) \\ &+ (0-P(D_i=1|X_i))\mathrm{E}\left(Y_i |X_i, D_i=0\right)P(D_i=0|X_i) \\
&= P(D_i=0|X_i)\mathrm{E}\left(Y_i |X_i, D_i=1\right)P(D_i=1|X_i) \\ &- P(D_i=1|X_i)\mathrm{E}\left(Y_i |X_i, D_i=0\right)P(D_i=0|X_i) \\
&= (\mathrm{E}\left(Y_i |X_i, D_i=1\right) - \mathrm{E}\left(Y_i |X_i, D_i=0\right))P(D_i=1|X_i)P(D_i=0|X_i) \\
&\mathop{=========} \limits_{Y_{1i},Y_{0i} \; \perp\!\!\!\!\perp \;D_i|X_i}^{Y_i=Y_{0i}+D_i(Y_{1i}-Y_{0i})} \mathrm{E}\left(Y_{1i}-Y_{0i} |X_i, D_i=1\right)P(D_i=1|X_i)P(D_i=0|X_i)
\end{aligned}
$$
Substitute in the above equation of $LHS$:
$$
\begin{aligned}
LHS &= \frac{1}{P(D_i=1)}\mathrm{E}\left(\frac{1}{P(D_i=0|X_i)}\mathrm{E}\left((D_i-P(D_i=1|X_i))Y_i | X_i\right)\right) \\
&= \frac{1}{P(D_i=1)}\mathrm{E}\left(\mathrm{E}\left(Y_{1i}-Y_{0i} |X_i, D_i=1\right)P(D_i=1|X_i)\right) \\
&= \frac{1}{P(D_i=1)} \sum\limits_{x}\left(\sum\limits_{y}(Y_{1i}-Y_{0i})P(Y_{1i}-Y_{0i}|X_i, D_i=1)P(D_i=1|X_i)P(X_i)\right) \\
&= \frac{1}{P(D_i=1)} \sum\limits_{x}\left(\sum\limits_{y}(Y_{1i}-Y_{0i})P((Y_{1i}-Y_{0i}), X_i, D_i=1)\right) \\
&= \sum\limits_{x}\sum\limits_{y}(Y_{1i}-Y_{0i})P((Y_{1i}-Y_{0i}), X_i| D_i=1) \\
&= \sum\limits_{y}(Y_{1i}-Y_{0i})\left(\sum\limits_{x}P((Y_{1i}-Y_{0i}), X_i| D_i=1)\right) \\
&= \sum\limits_{y}(Y_{1i}-Y_{0i})P(Y_{1i}-Y_{0i}| D_i=1) \\
&= \mathrm{E}(Y_{1i}-Y_{0i}|D_i=1) \\
&= RHS
\end{aligned}
$$
Similarly, we can also verify that
$$
\mathrm{E}\left(\frac{(D_i-P(D_i=1|X_i))Y_i}{P(D_i=1|X_i)P(D_i=0)}\right) = \mathrm{E}(Y_{1i}-Y_{0i}|D_i=0)
$$ | Proving equation 3.3.12 in Angrist Pischke Mostly Harmless Econometrics (inverse probability weighti
I also get confused when facing eq. 3.3.12 yesterday, it seemed not "so" obvious as mentioned in the text. When I searched for answers, the only method was the above by @Steve and @user106860, thanks |
49,860 | How to derive the time computational complexity of k-medoids (PAM) clustering algorithm? | I hope this question is still relevant. Big O notation denotes the upper bound of an algorithm. Let's assume that the first sets of medoids are the worst medoids. The cost function calculated through these medoids is the maximum of all the possible sets of medoids. Every time we choose a random medoid for comparing the cost function, we will always find one that decreases the cost function. Let's assume that unfortunately, we always choose the next worst one in all the possibilities, therefore we will exhaust all the remaining medoids (n-k) to find the set of medoids that has the minimum cost function (Adversary argument). So the outmost loop would be k, for looping through all the medoids. Then it will be n-k, to loop through all the non-medoid data points. Then n-k again for choosing the random medoid. Examples may help understanding the process.
https://www.geeksforgeeks.org/ml-k-medoids-clustering-with-example/ | How to derive the time computational complexity of k-medoids (PAM) clustering algorithm? | I hope this question is still relevant. Big O notation denotes the upper bound of an algorithm. Let's assume that the first sets of medoids are the worst medoids. The cost function calculated through | How to derive the time computational complexity of k-medoids (PAM) clustering algorithm?
I hope this question is still relevant. Big O notation denotes the upper bound of an algorithm. Let's assume that the first sets of medoids are the worst medoids. The cost function calculated through these medoids is the maximum of all the possible sets of medoids. Every time we choose a random medoid for comparing the cost function, we will always find one that decreases the cost function. Let's assume that unfortunately, we always choose the next worst one in all the possibilities, therefore we will exhaust all the remaining medoids (n-k) to find the set of medoids that has the minimum cost function (Adversary argument). So the outmost loop would be k, for looping through all the medoids. Then it will be n-k, to loop through all the non-medoid data points. Then n-k again for choosing the random medoid. Examples may help understanding the process.
https://www.geeksforgeeks.org/ml-k-medoids-clustering-with-example/ | How to derive the time computational complexity of k-medoids (PAM) clustering algorithm?
I hope this question is still relevant. Big O notation denotes the upper bound of an algorithm. Let's assume that the first sets of medoids are the worst medoids. The cost function calculated through |
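A counting sketch of the loop structure described above (this only counts cost evaluations and does no clustering), which makes the O(k(n-k)^2) per-iteration figure concrete:

    def pam_swap_evaluations(n, k):
        evaluations = 0
        for _ in range(k):                  # each current medoid ...
            for _ in range(n - k):          # ... can be swapped with each non-medoid ...
                for _ in range(n - k):      # ... and every swap is scored over the non-medoid points
                    evaluations += 1
        return evaluations                  # = k * (n - k) ** 2

    print(pam_swap_evaluations(n=100, k=5))   # 45125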
49,861 | How to handle skewed data and response variable when predicting | For 1) if the response is also skewed, you better log-transform the response variable as well.
For 2) Once you log-transform, your multiple linear regression (if more than 1 predictor) coefficients are interpreted differently than non-transformed coefficients.
Ex: $\log Y = a_1 \log(X_1) + a_2 \log(X_2)+\cdots$.
Interpretation would be: "for every one unit increase in $\log (X_1)$, $\log(Y)$ would increase by $a_1$ after adjusting for other predictors." So, you first predict $\log(Y)$ from $\log (X_1)$, since your model has the above form. Then take the exponential of that value to estimate the predicted Y. | How to handle skewed data and response variable when predicting | For 1) if the response is also skewed, you better log-transform the response variable as well.
For 2) Once you log-transform, your multiple linear regression(if more than 1 predictor) coefficients ex | How to handle skewed data and response variable when predicting
For 1) if the response is also skewed, you better log-transform the response variable as well.
For 2) Once you log-transform, your multiple linear regression (if more than 1 predictor) coefficients are interpreted differently than non-transformed coefficients.
Ex: $\log Y = a_1 \log(X_1) + a_2 \log(X_2)+\cdots$.
Interpretation would be: "for every one unit increase in $\log (X_1)$, $\log(Y)$ would increase by $a_1$ after adjusting for other predictors." So, you first predict $\log(Y)$ from $\log (X_1)$, since your model has the above form. Then take the exponential of that value to estimate the predicted Y. | How to handle skewed data and response variable when predicting
For 1) if the response is also skewed, you better log-transform the response variable as well.
For 2) Once you log-transform, your multiple linear regression(if more than 1 predictor) coefficients ex |
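A minimal sketch of the fit-on-logs / back-transform workflow described above, using plain numpy least squares (the simulated data and coefficient values are arbitrary):

    import numpy as np

    rng = np.random.default_rng(2)
    X1, X2 = rng.lognormal(size=(2, 500))
    Y = np.exp(0.5 + 1.2 * np.log(X1) - 0.7 * np.log(X2) + 0.3 * rng.normal(size=500))

    # Fit log(Y) = a0 + a1*log(X1) + a2*log(X2) by ordinary least squares.
    A = np.column_stack([np.ones(500), np.log(X1), np.log(X2)])
    coef, *_ = np.linalg.lstsq(A, np.log(Y), rcond=None)

    # Predict on the log scale, then exponentiate, as suggested above.
    Y_pred = np.exp(A @ coef)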
49,862 | Reference to support using binomial GLM for proportion response | Recommended in comments:
McCullagh, P., & Nelder, J. A. (1989). Generalized linear models (Monographs on Statistics and Applied Probability 37). CRC press.
Hardin, J. W. & Hilbe, J. M. (2007). Generalized linear models and extensions. Stata press.
Zhao, L., Chen, Y., & Schaffner, D. W. (2001). Comparison of logistic regression and linear regression in modeling percentage data. Applied and Environmental Microbiology, 67(5), 2129-2135. | Reference to support using binomial GLM for proportion response | Recommended in comments:
McCullagh, P., & Nelder, J. A. (1989). Generalized linear models (Monographs on Statistics and Applied Probability 37). CRC press.
Hardin, J. W. & Hilbe, J. M. (2007). Genera | Reference to support using binomial GLM for proportion response
Recommended in comments:
McCullagh, P., & Nelder, J. A. (1989). Generalized linear models (Monographs on Statistics and Applied Probability 37). CRC press.
Hardin, J. W. & Hilbe, J. M. (2007). Generalized linear models and extensions. Stata press.
Zhao, L., Chen, Y., & Schaffner, D. W. (2001). Comparison of logistic regression and linear regression in modeling percentage data. Applied and Environmental Microbiology, 67(5), 2129-2135. | Reference to support using binomial GLM for proportion response
Recommended in comments:
McCullagh, P., & Nelder, J. A. (1989). Generalized linear models (Monographs on Statistics and Applied Probability 37). CRC press.
Hardin, J. W. & Hilbe, J. M. (2007). Genera |
49,863 | What does the base distribution of the Dirichlet Process mean? | Let
$$
G \sim \textsf{DP}(\alpha, H)
$$
which says that the random distribution $G$ is itself distributed according to the Dirichlet Process with concentration parameter $\alpha$ and base distribution $H$. There is an explicit representation for $G$ and it's useful for understanding the role of the base distribution and its relation to "clustering" that goes on. In particular, $G$ is a discrete distribution with random support points and random weights:
$$
G = \sum_{c=1}^\infty w_c\, \delta_{\theta_c} ,
$$
where $\delta_x$ is a point-mass located at $x$ and $\sum_{c=1}^\infty w_c = 1$.
The distributions for the component weights $w = (w_1, w_2, \ldots)$ and corresponding component parameters $\theta = (\theta_1, \theta_2, \ldots)$ are given by
\begin{align}
w &\sim \textsf{Stick}(\alpha) \\
\theta_c &\stackrel{\text{iid}}{\sim} H .
\end{align}
The stick-breaking weights are generated according to
$$
w_c = v_c \prod_{\ell = 1}^{c-1} (1 - v_\ell) \qquad\text{where $v_c \stackrel{\text{iid}}{\sim} \textsf{Beta}(1,\alpha)$} .
$$
The base distribution determines the locations of the support points while the stick-breaking weights determine the amount of clustering.
In the limit as $\alpha \to 0$, the first weight approaches unity: $w_1 \to 1$, in which case the random distribution $G$ has a single support point. In this case, any draw $G$ is quite different from the base distribution $H$. Going the other way, as $\alpha \to \infty$, no finite collection of weights dominate and each random draw of $G$ becomes arbitrarily close to $H$ (i.e., becomes concentrated on $H$).
It is possible to introduce classifications that indicate cluster assignments and integrate out the weights, leaving one with the Chinese Restaurant Process to make the (table) assignments. The base distribution then is used to determine the entrees for the tables. | What does the base distribution of the Dirichlet Process mean? | Let
$$
G \sim \textsf{DP}(\alpha, H)
$$
which says that the random distribution $G$ is itself distributed according to the Dirichlet Process with concentration parameter $\alpha$ and base distributi | What does the base distribution of the Dirichlet Process mean?
Let
$$
G \sim \textsf{DP}(\alpha, H)
$$
which says that the random distribution $G$ is itself distributed according to the Dirichlet Process with concentration parameter $\alpha$ and base distribution $H$. There is an explicit representation for $G$ and it's useful for understanding the role of the base distribution and its relation to "clustering" that goes on. In particular, $G$ is a discrete distribution with random support points and random weights:
$$
G = \sum_{c=1}^\infty w_c\, \delta_{\theta_c} ,
$$
where $\delta_x$ is a point-mass located at $x$ and $\sum_{c=1}^\infty w_c = 1$.
The distributions for the component weights $w = (w_1, w_2, \ldots)$ and corresponding component parameters $\theta = (\theta_1, \theta_2, \ldots)$ are given by
\begin{align}
w &\sim \textsf{Stick}(\alpha) \\
\theta_c &\stackrel{\text{iid}}{\sim} H .
\end{align}
The stick-breaking weights are generated according to
$$
w_c = v_c \prod_{\ell = 1}^{c-1} (1 - v_\ell) \qquad\text{where $v_c \stackrel{\text{iid}}{\sim} \textsf{Beta}(1,\alpha)$} .
$$
The base distribution determines the locations of the support points while the stick-breaking weights determine the amount of clustering.
In the limit as $\alpha \to 0$, the first weight approaches unity: $w_1 \to 1$, in which case the random distribution $G$ has a single support point. In this case, any draw $G$ is quite different from the base distribution $H$. Going the other way, as $\alpha \to \infty$, no finite collection of weights dominate and each random draw of $G$ becomes arbitrarily close to $H$ (i.e., becomes concentrated on $H$).
It is possible to introduce classifications that indicate cluster assignments and integrate out the weights, leaving one with the Chinese Restaurant Process to make the (table) assignments. The base distribution then is used to determine the entrees for the tables. | What does the base distribution of the Dirichlet Process mean?
Let
$$
G \sim \textsf{DP}(\alpha, H)
$$
which says that the random distribution $G$ is itself distributed according to the Dirichlet Process with concentration parameter $\alpha$ and base distributi |
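A truncated stick-breaking sampler makes the roles of $\alpha$ and the base distribution easy to see numerically (a sketch assuming Python/numpy; the truncation level and base measure below are arbitrary choices of mine):

    import numpy as np

    def sample_dp(alpha, base_sampler, n_atoms=1000, rng=None):
        # Truncated draw of G = sum_c w_c * delta_{theta_c} via stick breaking.
        rng = rng or np.random.default_rng()
        v = rng.beta(1.0, alpha, size=n_atoms)                      # v_c ~ Beta(1, alpha)
        w = v * np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))   # w_c = v_c * prod_{l<c} (1 - v_l)
        theta = base_sampler(n_atoms, rng)                          # theta_c drawn iid from H
        return w, theta

    # Base distribution H = N(0, 1); a small alpha concentrates most weight on a few atoms.
    w, theta = sample_dp(alpha=2.0, base_sampler=lambda n, rng: rng.normal(size=n))
    print(w[:5].round(3), round(w.sum(), 3))   # weights sum to ~1, up to truncation error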
49,864 | ABRACADABRA Problem | The solution is reached thinking about the process as a martingale betting game with some conditions. At every keystroke, a different gambler jumps in the game with a $\small \$1$ bet. If any given bettor loses at the following round (next keystroke of the monkey), he leaves the game with a $-\small \$1$ balance; if he wins he gets $\small \$26$, corresponding to the upper-case alphabetic characters on an English keyboard, and bets the whole amount gained on the next keyboard stroke corresponding to the next letter in $\small \text{ABRACADABRA}$, i.e, $\small\text{B}.$
This "trick" serves the useful purpose of keeping track of the number of steps ("time") until the word is completed, because there will be as many gamblers as steps to completion each having bet $1$ dollar.
The key then is to see that by the time the winning bettor finishes off the desired sequence of characters, there are only going to be so many potential winners at play: those who could possibly have started on a winning streak after the actual winner.
This is determined by the structure of the string with a repeating fragment: $\small\color{blue}{\text{ABRA}}\text{CAD}\color{blue}{\text{ABRA}}$, and a potential lucky beginning in the last $\small \text{A}$: $\small \text{ABRACADABR}\color{blue}{\text{A}}$, which determines that the only entry points for other would-have-been winners once the actual winner has started off, are the fourth $\text{A}$ (eighth character), and the last $\text{A}$ (eleventh character).
Therefore the maximum amount laid on bets at the time of completion will equal to the number of steps ($t$) in dollars, and if the betting system is fair this has to equal the maximum potential payoffs:
For the winning gambler, this will be $26^{\text{length of the string}}=26^{11}.$
For the gambler who could have won, and starting winning on the fourth $\text{A}$, i.e. $\small \text{ABRACAD}\underset{\cdot}{\color{red}{\text{A}}}\underset{\cdot}{B}\underset{\cdot}{R}\underset{\cdot}{A}$, which is to say $26^4.$
For the gambler entering also too late, but betting on an $\text{A}$ at the very last step, $\small \text{ABRACADABR}\underset{\cdot}{\color{red}{\text{A}}}$, just $26$.
Therefore,
$$0= 26^{11}+26^4+26-t$$
And the expected $t= 26^{11}+26^4+26.$ | ABRACADABRA Problem | The solution is reached thinking about the process as a martingale betting game with some conditions. At every keystroke, a different gambler jumps in the game with a $\small \$1$ bet. If any given be | ABRACADABRA Problem
The solution is reached thinking about the process as a martingale betting game with some conditions. At every keystroke, a different gambler jumps in the game with a $\small \$1$ bet. If any given bettor loses at the following round (next keystroke of the monkey), he leaves the game with a $-\small \$1$ balance; if he wins he gets $\small \$26$, corresponding to the upper-case alphabetic characters on an English keyboard, and bets the whole amount gained on the next keyboard stroke corresponding to the next letter in $\small \text{ABRACADABRA}$, i.e, $\small\text{B}.$
This "trick" serves the useful purpose of keeping track of the number of steps ("time") until the word is completed, because there will be as many gamblers as steps to completion each having bet $1$ dollar.
The key then is to see that by the time the winning bettor finishes off the desired sequence of characters, there are only going to be so many potential winners at play: those who could possibly have started on a winning streak after the actual winner.
This is determined by the structure of the string with a repeating fragment: $\small\color{blue}{\text{ABRA}}\text{CAD}\color{blue}{\text{ABRA}}$, and a potential lucky beginning in the last $\small \text{A}$: $\small \text{ABRACADABR}\color{blue}{\text{A}}$, which determines that the only entry points for other would-have-been winners once the actual winner has started off, are the fourth $\text{A}$ (eighth character), and the last $\text{A}$ (eleventh character).
Therefore the maximum amount laid on bets at the time of completion will equal to the number of steps ($t$) in dollars, and if the betting system is fair this has to equal the maximum potential payoffs:
For the winning gambler, this will be $26^{\text{length of the string}}=26^{11}.$
For the gambler who could have won, and starting winning on the fourth $\text{A}$, i.e. $\small \text{ABRACAD}\underset{\cdot}{\color{red}{\text{A}}}\underset{\cdot}{B}\underset{\cdot}{R}\underset{\cdot}{A}$, which is to say $26^4.$
For the gambler entering also too late, but betting on an $\text{A}$ at the very last step, $\small \text{ABRACADABR}\underset{\cdot}{\color{red}{\text{A}}}$, just $26$.
Therefore,
$$0= 26^{11}+26^4+26-t$$
And the expected $t= 26^{11}+26^4+26.$ | ABRACADABRA Problem
The solution is reached thinking about the process as a martingale betting game with some conditions. At every keystroke, a different gambler jumps in the game with a $\small \$1$ bet. If any given be |
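Evaluating the final expression gives the expected number of keystrokes explicitly:

    print(26 ** 11 + 26 ** 4 + 26)    # 3670344487444778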
49,865 | McNemar test with multiple scores for the same subject | You want Cochran's Q test. Just as the one way repeated measures ANOVA generalizes the paired t test to two or more measurements in the same individual (or more than two individuals matched per block), Cochran's Q test generalizes McNemars' test to two or more measurements in the same individual (or more than two individuals matched per block). Applying Cochran's Q test to a 2x2 design gives identical results to McNemar's test, and one may use either McNemar's test, or all the 2x2 Cochran's Q tests for post hoc pairwise comparisons following a Cochran's Q omnibus test for more than 2 groups.
Cochran's Q is implemented in software in:
within R in the RVAideMemoire package as the cochran.qtest function,
within SAS in proc FREQ,
within SPSS in tests of k related samples, and
within Stata as the cochranq package (within Stata type net describe cochranq, from(http://alexisdinno.com/stata)). This package implements effect size measures by Berry, et al, and by Serlin, et al.
References
Berry, K. J., Johnston, J. E., and Paul W. Mielke, J. (2007). An alternative measure of effect size for Cochran’s $Q$ test for related proportions. Perceptual and Motor Skills, 104:1236–1242.
Cochran, W. G. (1950). The comparison of percentages. Biometrika, 37(3/4):256–266.
Serlin, R. C., Carr, J., and Marascuillo, L. A. (2007). A measure of association for selected nonparametric procedures. Psychological Bulletin, 92:786–790. | McNemar test with multiple scores for the same subject | You want Cochran's Q test. Just as the one way repeated measures ANOVA generalizes the paired t test to two or more measurements in the same individual (or more than two individuals matched per block) | McNemar test with multiple scores for the same subject
You want Cochran's Q test. Just as the one way repeated measures ANOVA generalizes the paired t test to two or more measurements in the same individual (or more than two individuals matched per block), Cochran's Q test generalizes McNemars' test to two or more measurements in the same individual (or more than two individuals matched per block). Applying Cochran's Q test to a 2x2 design gives identical results to McNemar's test, and one may use either McNemar's test, or all the 2x2 Cochran's Q tests for post hoc pairwise comparisons following a Cochran's Q omnibus test for more than 2 groups.
Cochran's Q is implemented in software in:
within R in the RVAideMemoire package as the cochran.qtest function,
within SAS in proc FREQ,
within SPSS in tests of k related samples, and
within Stata as the cochranq package (within Stata type net describe cochranq, from(http://alexisdinno.com/stata)). This package implements effect size measures by Berry, et al, and by Serlin, et al.
References
Berry, K. J., Johnston, J. E., and Paul W. Mielke, J. (2007). An alternative measure of effect size for Cochran’s $Q$ test for related proportions. Perceptual and Motor Skills, 104:1236–1242.
Cochran, W. G. (1950). The comparison of percentages. Biometrika, 37(3/4):256–266.
Serlin, R. C., Carr, J., and Marascuillo, L. A. (2007). A measure of association for selected nonparametric procedures. Psychological Bulletin, 92:786–790. | McNemar test with multiple scores for the same subject
You want Cochran's Q test. Just as the one way repeated measures ANOVA generalizes the paired t test to two or more measurements in the same individual (or more than two individuals matched per block) |
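For completeness, a hedged sketch of the statistic itself computed from its textbook formula (this is my own minimal implementation, not the RVAideMemoire / SAS / SPSS / Stata routines listed above):

    import numpy as np
    from scipy.stats import chi2

    def cochran_q(x):
        # x is an (n subjects) x (k conditions) array of 0/1 outcomes.
        x = np.asarray(x)
        k = x.shape[1]
        col = x.sum(axis=0)                  # successes per condition
        row = x.sum(axis=1)                  # successes per subject
        N = x.sum()
        q = (k - 1) * (k * np.sum(col ** 2) - N ** 2) / (k * N - np.sum(row ** 2))
        return q, chi2.sf(q, df=k - 1)       # compare against a chi-square with k-1 df

With k = 2 this reduces to McNemar's statistic (b - c)^2 / (b + c), without continuity correction.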
49,866 | Why does the jackknife-after-bootstrap estimation of variance give an overestimate? | The over-estimation is due to the error in the ensemble mean due to the finite size of the ensemble. This is discussed, along with a correction term, in this paper by Wager et al. There are implementations in C++/R and scala (disclosure: I am the primary author of the scala package). | Why does the jackknife-after-bootstrap estimation of variance give an overestimate? | The over-estimation is due to the error in the ensemble mean due to the finite size of the ensemble. This is discussed, along with a correction term, in this paper by Wager et al. There are implemen | Why does the jackknife-after-bootstrap estimation of variance give an overestimate?
The over-estimation is due to the error in the ensemble mean due to the finite size of the ensemble. This is discussed, along with a correction term, in this paper by Wager et al. There are implementations in C++/R and scala (disclosure: I am the primary author of the scala package). | Why does the jackknife-after-bootstrap estimation of variance give an overestimate?
The over-estimation is due to the error in the ensemble mean due to the finite size of the ensemble. This is discussed, along with a correction term, in this paper by Wager et al. There are implemen |
49,867 | Find correlation between two time series. Theory and practice (R) | Your very straightforward simple question unfortunately has both a simple and a complex answer. I will avoid the simple one. In summary, the whole idea is that one needs to account for / condition on intra-correlation while identifying the inter-correlation. Following are some references that you might consider before attempting to proceed. The first is an easy overview
ARIMAX model's exogenous components?
this reference provides info as to why you should be aware of simple solutions that may be routinely available
http://empslocal.ex.ac.uk/people/staff/dbs202/cat/stats/corr.html
This outlines a general procedure which is far from general as it doesn't deal with Gaussian Violations.
https://web.archive.org/web/20160216193539/https://onlinecourses.science.psu.edu/stat510/node/75/
This provides a gentle overview of regression vs simple ARIMA time series methods.
http://www.autobox.com/cms/index.php/afs-university/intro-to-forecasting?start=5
Lastly, arm yourself with data and try different approaches .
Hope this helps .. | Find correlation between two time series. Theory and practice (R) | Your very straightforward simple question has unfortunately both a simple and a complex answer. I will avoid the simple . In summary the whole idea is that one needs to account for / condition for int | Find correlation between two time series. Theory and practice (R)
Your very straightforward simple question unfortunately has both a simple and a complex answer. I will avoid the simple one. In summary, the whole idea is that one needs to account for / condition on intra-correlation while identifying the inter-correlation. Following are some references that you might consider before attempting to proceed. The first is an easy overview
ARIMAX model's exogenous components?
this reference provides info as to why you should be aware of simple solutions that may be routinely available
http://empslocal.ex.ac.uk/people/staff/dbs202/cat/stats/corr.html
This outlines a general procedure which is far from general as it doesn't deal with Gaussian Violations.
https://web.archive.org/web/20160216193539/https://onlinecourses.science.psu.edu/stat510/node/75/
This provides a gentle overview of regression vs simple ARIMA time series methods.
http://www.autobox.com/cms/index.php/afs-university/intro-to-forecasting?start=5
Lastly, arm yourself with data and try different approaches .
Hope this helps .. | Find correlation between two time series. Theory and practice (R)
Your very straightforward simple question has unfortunately both a simple and a complex answer. I will avoid the simple . In summary the whole idea is that one needs to account for / condition for int |
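As a very reduced illustration of the "condition on the intra-correlation first" idea referenced above (not the full ARIMA-based procedure from the links), one can fit an AR filter to one series, apply the same filter to the other, and correlate the residuals; the sketch below assumes Python/numpy and is my own simplification:

    import numpy as np

    def prewhitened_corr(x, y, p=1):
        x, y = np.asarray(x, float), np.asarray(y, float)
        n = len(x)
        # Fit x_t ~ const + phi_1 x_{t-1} + ... + phi_p x_{t-p} by least squares.
        lags = np.column_stack([x[p - i:n - i] for i in range(1, p + 1)])
        A = np.column_stack([np.ones(n - p), lags])
        coef, *_ = np.linalg.lstsq(A, x[p:], rcond=None)

        def filtered(z):
            Z = np.column_stack([np.ones(n - p)] + [z[p - i:n - i] for i in range(1, p + 1)])
            return z[p:] - Z @ coef          # apply the AR filter estimated from x

        return np.corrcoef(filtered(x), filtered(y))[0, 1]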
49,868 | Why use perplexity rather than nearest neighbor match in t-SNE? | "The perplexity can be interpreted as a smooth measure of the effective number of neighbors" could be interpreted as $\frac{\delta \sigma_i}{\delta P}$ being smooth. That is, varying Perplexity has an effect on $\sigma_i$ for a fixed i that is continuous in all derivatives.
This is not true of the k-NN approach. One can imagine fixing an i that lies within a cluster containing G points. varying k from 2 ... G-1 should result in similar but monotonically increasing values of $\sigma_i$. There is a jump at k=G as the value of $\sigma$ must be large enough to reach outside the cluster. The distance between the cluster and the nearest point determines the size of this jump.
Below is a (not-too-extreme) example of this. I simulated 100 points from a Gaussian Mixture with two Gaussians of equal probability. The data was not quite linearly separable. I chose a point, i, at random and varied Perplexity and k. Note the sudden change in first derivative of the k-nn approach as we near 50 (the expected number of points that fall within a given cluster). | Why use perplexity rather than nearest neighbor match in t-SNE? | "The perplexity can be interpreted as a smooth measure of the effective number of neighbors" could be interpreted as $\frac{\delta \sigma_i}{\delta P}$ being smooth. That is, varying Perplexity has an | Why use perplexity rather than nearest neighbor match in t-SNE?
"The perplexity can be interpreted as a smooth measure of the effective number of neighbors" could be interpreted as $\frac{\delta \sigma_i}{\delta P}$ being smooth. That is, varying Perplexity has an effect on $\sigma_i$ for a fixed i that is continuous in all derivatives.
This is not true of the k-NN approach. One can imagine fixing an i that lies within a cluster containing G points. varying k from 2 ... G-1 should result in similar but monotonically increasing values of $\sigma_i$. There is a jump at k=G as the value of $\sigma$ must be large enough to reach outside the cluster. The distance between the cluster and the nearest point determines the size of this jump.
Below is a (not-too-extreme) example of this. I simulated 100 points from a Gaussian Mixture with two Gaussians of equal probability. The data was not quite linearly separable. I chose a point, i, at random and varied Perplexity and k. Note the sudden change in first derivative of the k-nn approach as we near 50 (the expected number of points that fall within a given cluster). | Why use perplexity rather than nearest neighbor match in t-SNE?
"The perplexity can be interpreted as a smooth measure of the effective number of neighbors" could be interpreted as $\frac{\delta \sigma_i}{\delta P}$ being smooth. That is, varying Perplexity has an |
49,869 | "Uncertainty range" versus "confidence interval" - what is the difference, and which is preferred? | If you look at some other papers from the project "Global burden of diseases", you will see that data imputation for missing cases have been carried out using proxy. As in case of modelling the global burden of diarrhoea, due to unavailability of data from systematic surveillance in countries like Liberia, Libya, Afghanistan, Syria and Haiti; estimates have been carried out by extrapolating spatiotemporal trends from countries or region with better data matching for the geographical and socio-economic condition. Hence, it is likely that due to the uncertainty associated with the estimates, instead of using confidence interval, the term uncertainty interval might have been used.
https://www.sciencedirect.com/science/article/pii/S1473309917302761
https://www.sciencedirect.com/science/article/pii/S1473309917303365 | "Uncertainty range" versus "confidence interval" - what is the difference, and which is preferred? | If you look at some other papers from the project "Global burden of diseases", you will see that data imputation for missing cases have been carried out using proxy. As in case of modelling the global | "Uncertainty range" versus "confidence interval" - what is the difference, and which is preferred?
If you look at some other papers from the project "Global burden of diseases", you will see that data imputation for missing cases have been carried out using proxy. As in case of modelling the global burden of diarrhoea, due to unavailability of data from systematic surveillance in countries like Liberia, Libya, Afghanistan, Syria and Haiti; estimates have been carried out by extrapolating spatiotemporal trends from countries or region with better data matching for the geographical and socio-economic condition. Hence, it is likely that due to the uncertainty associated with the estimates, instead of using confidence interval, the term uncertainty interval might have been used.
https://www.sciencedirect.com/science/article/pii/S1473309917302761
https://www.sciencedirect.com/science/article/pii/S1473309917303365 | "Uncertainty range" versus "confidence interval" - what is the difference, and which is preferred?
If you look at some other papers from the project "Global burden of diseases", you will see that data imputation for missing cases have been carried out using proxy. As in case of modelling the global |
49,870 | correlation of decision trees | This is an important measure for decision trees in a random forest and is a component of the generalization error of the random forest.
Please read Breiman's original paper (page 6), where he defines the correlation as $\bar{\rho}=\mathbf{E}_{\Theta, \Theta^\prime}[\rho(h(\cdot,\Theta), h(\cdot,\Theta^\prime))]$. So $\bar{\rho}$ is the correlation between two different members of the forest averaged over the $\Theta, \Theta^\prime$ distribution.
A second paper, a bit more readable, describes the correlation between trees as the correlation between their raw margin functions. The margin function is the extent to which the average number of votes for the correct class exceeds the average number of votes for the next-best class. See Slides.
Finally, another way of evaluating the correlation would be to consider the correlation between the prediction errors of pairs of decision trees, though this would not be the same term as in the generalization error PE$^\ast$ in Breiman's paper. | correlation of decision trees | This is an important measure for decision trees in a random forest and is a component of the generalization error of the random forest.
Please read Breiman's original paper(page 6) where he defines c | correlation of decision trees
This is an important measure for decision trees in a random forest and is a component of the generalization error of the random forest.
Please read Breiman's original paper(page 6) where he defines correlation as $\bar{\rho}=\mathbf{E}_{\Theta, \Theta^\prime}[\rho(h(\cdot,\Theta), h(\cdot,\Theta^\prime)]$. So that $\bar{\rho}$ is the correlation between two different members of the forest averaged over the $\Theta, \Theta^\prime$ distribution.
A second paper a bit more readable, describes correlation between trees to be the correlation between their raw margin function. The margin function is the extent to which average number of votes for class exceeds the average number of votes for the next-best class. See Slides.
Finally, another way of evaluating the correlation would be consider correlation between the prediction errors between decision tree pairs, though this would not be the same term in the generalization error PE$^\ast$ in Breiman's paper. | correlation of decision trees
This is an important measure for decision trees in a random forest and is a component of the generalization error of the random forest.
Please read Breiman's original paper(page 6) where he defines c |
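The last suggestion above (correlating prediction errors between pairs of trees) can be checked numerically. Here is a small R sketch with an invented noisy two-class data set and my own object names; note that it correlates error indicators, which, as the answer points out, is not the same quantity as Breiman's $\bar{\rho}$:
library(randomForest)
set.seed(1)
n <- 400
x <- matrix(rnorm(n * 5), ncol = 5)
y <- factor(ifelse(x[, 1] + x[, 2] + rnorm(n) > 0, "A", "B"))  # noisy two-class problem
train <- 1:300; test <- 301:400
rf <- randomForest(x[train, ], y[train], ntree = 100)
ind <- predict(rf, x[test, ], predict.all = TRUE)$individual   # per-tree class predictions
err <- (ind != as.character(y[test])) * 1                      # 1 where a given tree errs on a test case
rho <- cor(err)                                                # ntree x ntree matrix of error correlations
mean(rho[upper.tri(rho)], na.rm = TRUE)                        # average between-tree correlation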
49,871 | CDF of a transformation of variables with a flat region | You can visualize the CDF of $Y$, $F_Y$, in terms of (a) a graph of the transformation $\phi:X\to Y$ and (b) a graph of the CDF of $X$, $F_X$.
In this figure the top plot shows the graph of $\phi$ while the second plot shows the CDF.
To see the value of $F_Y$ at some trial value $y$, draw a horizontal line at height $y$ in the upper plot (shown at $y=3/8$). Identify all the $x$ for which the graph of $\phi$ lies on or below that line: these are the values of $X$ corresponding to the event $Y\le y$, written $\phi^{-1}(Y \le y)$. They are shown beneath the red parts of the graph. In the lower plot, the values of $F_X$ at this event cover a portion of the vertical axis between $0$ and $1$ (as marked in black). The total amount of coverage is the probability of $\phi^{-1}(Y \le y)$. It is equal to $F_Y(y)$.
Consider what happens when, as in this example, $y$ is increased slightly so that it suddenly includes a flat portion of the graph of $\phi$.
Instantaneously, all the probability of $X$ corresponding to that flat portion of the graph of $\phi$ is included within $\phi^{-1}(Y \le y)$.
This visual understanding should give you confidence in your calculations, as well as provide intuition for how distributions behave generally when random variables are transformed--even when the transformations are discontinuous (their graphs have jumps), not one-to-one (they don't always increase or always decrease), or have flat spots. | CDF of a transformation of variables with a flat region | You can visualize the CDF of $Y$, $F_Y$, in terms of (a) a graph of the transformation $\phi:X\to Y$ and (b) a graph of the CDF of $X$, $F_X$.
In this figure the top plot shows the graph of $\phi$ whi | CDF of a transformation of variables with a flat region
You can visualize the CDF of $Y$, $F_Y$, in terms of (a) a graph of the transformation $\phi:X\to Y$ and (b) a graph of the CDF of $X$, $F_X$.
In this figure the top plot shows the graph of $\phi$ while the second plot shows the CDF.
To see the value of $F_Y$ at some trial value $y$, draw a horizontal line at height $y$ in the upper plot (shown at $y=3/8$). Identify all the $x$ for which the graph of $\phi$ lies on or below that line: these are the values of $X$ corresponding to the event $Y\le y$, written $\phi^{-1}(Y \le y)$. They are shown beneath the red parts of the graph. In the lower plot, the values of $F_X$ at this event cover a portion of the vertical axis between $0$ and $1$ (as marked in black). The total amount of coverage is the probability of $\phi^{-1}(Y \le y)$. It is equal to $F_Y(y)$.
Consider what happens when, as in this example, $y$ is increased slightly so that it suddenly includes a flat portion of the graph of $\phi$.
Instantaneously, all the probability of $X$ corresponding to that flat portion of the graph of $\phi$ is included within $\phi^{-1}(Y \le y)$.
This visual understanding should give you confidence in your calculations, as well as provide intuition for how distributions behave generally when random variables are transformed--even when the transformations are discontinuous (their graphs have jumps), not one-to-one (they don't always increase or always decrease), or have flat spots. | CDF of a transformation of variables with a flat region
You can visualize the CDF of $Y$, $F_Y$, in terms of (a) a graph of the transformation $\phi:X\to Y$ and (b) a graph of the CDF of $X$, $F_X$.
In this figure the top plot shows the graph of $\phi$ whi |
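A tiny R illustration of the jump described in the answer above (the particular $\phi$ and the uniform distribution for $X$ are my own choices, not the ones in the original figure): $F_Y$ jumps at the height of the flat portion by exactly the probability that $X$ falls in the flat region.
phi <- function(x) ifelse(x < 1/3, x, ifelse(x < 2/3, 1/3, x - 1/3))  # nondecreasing, flat at height 1/3 over [1/3, 2/3)
set.seed(1)
x <- runif(1e5)                                 # X ~ Uniform(0, 1)
y <- phi(x)
plot(ecdf(y), main = "F_Y for a transformation with a flat region")
c(below = mean(y < 1/3), at = mean(y <= 1/3))   # about 1/3 and 2/3: a jump of size P(X in the flat region)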
49,872 | Cox Snell residuals in R | Based on the book's formulas, I wrote this code :
#rC : Cox-Snell residuals
#rM : Martingale residuals
#rD : Deviance residuals
rC<-exp(((fit$y[,1])-log(predict(fit,lung,na.action = "na.omit")))/fit$scale)
rM<-fit$y[,2]-rC
rD<-sign(rM)*sqrt(-2*(rM+fit$y[,2]*log(rC))) # -residuals(fit,type='deviance')
mean(rC)
var(rC)
qqplot((qexp(ppoints(length(rC)))),(rC));qqline(rC, distribution=qexp,col="red", lty=2)
(The exponential qqplot reference is here: https://stackoverflow.com/a/37031433/10042541)
And this is the result :
> mean(rC)
[1] 0.722467
> var(rC)
[1] 0.3411307
This graph seems not bad, but the mean and the variance are not close to 1, and deleting the 6th row, which looks like an outlier, lowers the variance.
I guess the residuals are more spread out than an exponential distribution with parameter 1, and some of them are smaller than they are supposed to be.
According to the reference book, we need to modify rC for censored observations because they will be too small.
rC<-rC+(1-fit$y[,2])*1 # or *log (2) instead of *1
This makes the mean unity, though it makes the qqplot look worse.
I haven't studied survival analysis thoroughly, so I am not sure how to improve this model. Any improvement on this answer is welcome.
Thank you very much. | Cox Snell residuals in R | Based on the book's formulas, I wrote this code :
#rC : Cox-Snell residuals
#rM : Martingale residuals
#rD : Deviance residuals
rC<-exp(((fit$y[,1])-log(predict(fit,lung,na.action = "na.omit")))/fit | Cox Snell residuals in R
Based on the book's formulas, I wrote this code :
#rC : Cox-Snell residuals
#rM : Martingale residuals
#rD : Deviance residuals
rC<-exp(((fit$y[,1])-log(predict(fit,lung,na.action = "na.omit")))/fit$scale)
rM<-fit$y[,2]-rC
rD<-sign(rM)*sqrt(-2*(rM+fit$y[,2]*log(rC))) # -residuals(fit,type='deviance')
mean(rC)
var(rC)
qqplot((qexp(ppoints(length(rC)))),(rC));qqline(rC, distribution=qexp,col="red", lty=2)
(The exponential qqplot reference is here: https://stackoverflow.com/a/37031433/10042541)
And this is the result :
> mean(rC)
[1] 0.722467
> var(rC)
[1] 0.3411307
This graph seems not bad but the mean and the variance isn't close to 1 and deletion of the 6th row which looks like outlier lower the variance.
I guess the residuals are more spread out than the exponential distribution with parameter 1 and some of them are smaller than they supposed to be.
According to the reference book, we need to modify rC for censored observations because they will be too small.
rC<-rC+(1-fit$y[,2])*1 # or *log (2) instead of *1
This makes the mean to be unity though it makes the qqplot seem worse.
I haven't studied the survival analysis thoroughly so I am not sure how to improve this model. Any improvement on this answer should be welcomed.
Thank you very much. | Cox Snell residuals in R
Based on the book's formulas, I wrote this code :
#rC : Cox-Snell residuals
#rM : Martingale residuals
#rD : Deviance residuals
rC<-exp(((fit$y[,1])-log(predict(fit,lung,na.action = "na.omit")))/fit |
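Note that the fit object used throughout this answer is never shown. One plausible definition (purely my guess, borrowing the covariates from the follow-up answer below) is a Weibull AFT model fitted with survreg:
library(survival)
# Keep y = TRUE so that fit$y is available; for a Weibull fit survreg stores the response
# on the log scale, which is what the residual formula above appears to assume.
fit <- survreg(Surv(time, status) ~ age + sex + ph.karno,
               data = lung, dist = "weibull", y = TRUE)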
49,873 | Cox Snell residuals in R | The Cox-Snell residual for case $j$ in a survival model is $r_j=\hat H(T_j|X_j)$, where $\hat H(T|X)$ is the estimated cumulative hazard function, $T_j$ is the event/censoring time for the case and $X_j$ its vector of covariate values. That's a convenient form for proportional hazard models, for which $\hat H(T_j|X_j)= \hat H_0(T) \exp(X_j' \hat\beta)$, the product of an estimated baseline hazard $H_0(T)$ and the hazard ratio for the case calculated from the vector $\hat \beta$ of regression coefficients. That simple multiplicative form for $\hat H$, however, doesn't hold without proportional hazards.
As $S(T|X)=\exp(-H(T|X))$ for continuous-time survival models, once you have a parametric equation for $\hat S(T|X)$ you can calculate a Cox-Snell residual directly from that equation: $r_j =\hat H(T_j|X_j) = -\ln(\hat S(T_j|X_j))$. The Survival() function in the R rms package, for example, provides an easy way to get survival probability estimates from a psm parametric model fit for specified times and linear-predictor values, which then can be transformed into Cox-Snell residuals.
There is a simple relationship between Cox-Snell residuals and standardized residuals for models that can be expressed in a location-scale form:
$$ f(T) \sim X' \beta + \sigma W,$$
where $\sigma$ is a scale factor, $W$ is the standard probability distribution corresponding to the parametric survival model, and $f(T)$ is a link function, typically $\ln(T)$. With a log link, $W$ is standard minimum extreme value for Weibull and exponential models ($\sigma = 1$ for exponential), standard logistic for log-logistic, and standard normal for lognormal. These notes provide a concise introduction to such parametric modeling.
The survival function $S(T)$ is the complement of the cumulative distribution function (CDF) of the survival times, so the Cox-Snell residual can be written $r_j = -\ln(1-\widehat {\text{CDF}}(T_j|X_j))$. For a location-scale model with distribution $W$, $\widehat {\text{CDF}}(T_j|X_j)$ can be calculated from the standardized residuals
$$s_j=\frac {f(T_j)-X_j' \hat \beta}{\hat \sigma}, $$
as those $s_j$ should be distributed according to $W$. For example, if you want a Cox-Snell residual for a lognormal model, $r_j = -\ln(1-\Phi(s_j))$ where $\Phi$ is the standard normal CDF. This 1:1 relationship between the residual types means that evaluation of a model via Cox-Snell residuals $r_j$ is equivalent to evaluation via standardized residuals $s_j$, as Klein and Moeschberger note on page 415.
The answer from @KDG shows problems with using Cox-Snell residuals to evaluate a model. First is how to deal with censoring. Second is the nature of the plots typically used to evaluate the agreement with the theoretical exponential form, which tend to overemphasize the larger values at the upper right while squishing together most of the cases.
Both of these problems are solved by working directly with the (potentially censored) standardized residuals $s_j$ instead. You examine the survival function of the $s_j$ (the complement of the CDF) with a Kaplan-Meier plot that incorporates the censoring and compare that against the survival function of the standard distribution $W$. That both takes care of the censoring problem and spaces cases more evenly.
Harrell's rms package provides a simple implementation of standardized residual analysis for standard parametric families. Starting with a psm object from an rms parametric survival model fit, a residuals() function provides standardized residuals along with censoring indicators in a Surv object. The survplot() function applied to that Surv object of (censored) standardized residuals displays the Kaplan-Meier (KM) curve of the $s_j$ along with the theoretical CDF based on the assumed distribution $W$. For the example in the OP:
where the thin lines are the KM plots and the thick lines are the theoretical forms for the indicated parametric families. The inadequacy of the exponential model and the quality of the Weibull model for these data are apparent. I don't see any reason to use Cox-Snell residuals for such parametric models.
Code for plots:
library(survival)
library(rms)
lungCC <- lung[complete.cases(lung[,c("age","sex","ph.karno")]),] ## instead of using na.action
psmE <- psm(Surv(time,status)~age+sex+ph.karno,dist="exponential",data=lungCC)
residE <- residuals(psmE)
psmW <- psm(Surv(time,status)~age+sex+ph.karno,dist="weibull",data=lungCC)
residW <- residuals(psmW)
psmLN <- psm(Surv(time,status)~age+sex+ph.karno,dist="lognormal",data=lungCC)
residLN <- residuals(psmLN)
psmLL <- psm(Surv(time,status)~age+sex+ph.karno,dist="loglogistic",data=lungCC)
residLL <- residuals(psmLL)
par(mfrow=c(2,2))
survplot(residE,main="Exponential",ylab="Complement of residual CDF")
survplot(residW,main="Weibull",ylab="Complement of residual CDF")
survplot(residLN,main="Lognormal",ylab="Complement of residual CDF")
survplot(residLL,main="Log Logistic",ylab="Complement of residual CDF") | Cox Snell residuals in R | The Cox-Snell residual for case $j$ in a survival model is $r_j=\hat H(T_j|X_j)$, where $\hat H(T|X)$ is the estimated cumulative hazard function, $T_j$ is the event/censoring time for the case and $X | Cox Snell residuals in R
The Cox-Snell residual for case $j$ in a survival model is $r_j=\hat H(T_j|X_j)$, where $\hat H(T|X)$ is the estimated cumulative hazard function, $T_j$ is the event/censoring time for the case and $X_j$ its vector of covariate values. That's a convenient form for proportional hazard models, for which $\hat H(T_j|X_j)= \hat H_0(T) \exp(X_j' \hat\beta)$, the product of an estimated baseline hazard $H_0(T)$ and the hazard ratio for the case calculated from the vector $\hat \beta$ of regression coefficients. That simple multiplicative form for $\hat H$, however, doesn't hold without proportional hazards.
As $S(T|X)=\exp(-H(T|X))$ for continuous-time survival models, once you have a parametric equation for $\hat S(T|X)$ you can calculate a Cox-Snell residual directly from that equation: $r_j =\hat H(T_j|X_j) = -\ln(\hat S(T_j|X_j))$. The Survival() function in the R rms package, for example, provides an easy way to get survival probability estimates from a psm parametric model fit for specified times and linear-predictor values, which then can be transformed into Cox-Snell residuals.
There is a simple relationship between Cox-Snell residuals and standardized residuals for models that can be expressed in a location-scale form:
$$ f(T) \sim X' \beta + \sigma W,$$
where $\sigma$ is a scale factor, $W$ is the standard probability distribution corresponding to the parametric survival model, and $f(T)$ is a link function, typically $\ln(T)$. With a log link, $W$ is standard minimum extreme value for Weibull and exponential models ($\sigma = 1$ for exponential), standard logistic for log-logistic, and standard normal for lognormal. These notes provide a concise introduction to such parametric modeling.
The survival function $S(T)$ is the complement of the cumulative distribution function (CDF) of the survival times, so the Cox-Snell residual can be written $r_j = -\ln(1-\widehat {\text{CDF}}(T_j|X_j))$. For a location-scale model with distribution $W$, $\widehat {\text{CDF}}(T_j|X_j)$ can be calculated from the standardized residuals
$$s_j=\frac {f(T_j)-X_j' \hat \beta}{\hat \sigma}, $$
as those $s_j$ should be distributed according to $W$. For example, if you want a Cox-Snell residual for a lognormal model, $r_j = -\ln(1-\Phi(s_j))$ where $\Phi$ is the standard normal CDF. This 1:1 relationship between the residual types means that evaluation of a model via Cox-Snell residuals $r_j$ is equivalent to evaluation via standardized residuals $s_j$, as Klein and Moeschberger note on page 415.
The answer from @KDG shows problems with using Cox-Snell residuals to evaluate a model. First is how to deal with censoring. Second is the nature of the plots typically used to evaluate the agreement with the theoretical exponential form, which tend to overemphasize the larger values at the upper right while squishing together most of the cases.
Both of these problems are solved by working directly with the (potentially censored) standardized residuals $s_j$ instead. You examine the survival function of the $s_j$ (the complement of the CDF) with a Kaplan-Meier plot that incorporates the censoring and compare that against the survival function of the standard distribution $W$. That both takes care of the censoring problem and spaces cases more evenly.
Harrell's rms package provides a simple implementation of standardized residual analysis for standard parametric families. Starting with a psm object from an rms parametric survival model fit, a residuals() function provides standardized residuals along with censoring indicators in a Surv object. The survplot() function applied to that Surv object of (censored) standardized residuals displays the Kaplan-Meier (KM) curve of the $s_j$ along with the theoretical CDF based on the assumed distribution $W$. For the example in the OP:
where the thin lines are the KM plots and the thick lines are the theoretical forms for the indicated parametric families. The inadequacy of the exponential model and the quality of the Weibull model for these data are apparent. I don't see any reason to use Cox-Snell residuals for such parametric models.
Code for plots:
library(survival)
library(rms)
lungCC <- lung[complete.cases(lung[,c("age","sex","ph.karno")]),] ## instead of using na.action
psmE <- psm(Surv(time,status)~age+sex+ph.karno,dist="exponential",data=lungCC)
residE <- residuals(psmE)
psmW <- psm(Surv(time,status)~age+sex+ph.karno,dist="weibull",data=lungCC)
residW <- residuals(psmW)
psmLN <- psm(Surv(time,status)~age+sex+ph.karno,dist="lognormal",data=lungCC)
residLN <- residuals(psmLN)
psmLL <- psm(Surv(time,status)~age+sex+ph.karno,dist="loglogistic",data=lungCC)
residLL <- residuals(psmLL)
par(mfrow=c(2,2))
survplot(residE,main="Exponential",ylab="Complement of residual CDF")
survplot(residW,main="Weibull",ylab="Complement of residual CDF")
survplot(residLN,main="Lognormal",ylab="Complement of residual CDF")
survplot(residLL,main="Log Logistic",ylab="Complement of residual CDF") | Cox Snell residuals in R
The Cox-Snell residual for case $j$ in a survival model is $r_j=\hat H(T_j|X_j)$, where $\hat H(T|X)$ is the estimated cumulative hazard function, $T_j$ is the event/censoring time for the case and $X |
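Continuing the code in the answer above, the 1:1 relationship it mentions can be used to turn the standardized residuals into Cox-Snell residuals explicitly (my own two extra lines, shown for the lognormal fit):
sLN <- residuals(psmLN)               # censored standardized residuals, returned as a Surv object
csLN <- -log(1 - pnorm(sLN[, 1]))     # r_j = -ln(1 - Phi(s_j)) for the lognormal model
head(csLN)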
49,874 | Linear model of 2 samples t-test | This is just the standard idea of the data generating process that lies behind a run of the mill t-test.
$\alpha$ is just the difference between the two groups' means in raw units (i.e., not standardized). That is, $\alpha$ isn't "obtained" from somewhere else, it just is. It is a primitive with respect to this setup.
$\alpha$ is divided by two because the groups are balanced around their combined mean. The variances are intended to be equal. I think the sample sizes seem to be intended to be equal as well (although the notation is ambiguous on that), but it could be that they mean that the mean of the samples could diverge from the grand mean by having a greater sample size in one group vs. the other.
The grand mean is halfway between the two means, so one group is above the grand mean and the other is below it. | Linear model of 2 samples t-test | This is just the standard idea of the data generating process that lies behind a run of the mill t-test.
$\alpha$ is just the difference between the two groups' means in raw units (i.e., not standa | Linear model of 2 samples t-test
This is just the standard idea of the data generating process that lies behind a run of the mill t-test.
$\alpha$ is just the difference between the two groups' means in raw units (i.e., not standardized). That is, $\alpha$ isn't "obtained" from somewhere else, it just is. It is a primitive with respect to this setup.
$\alpha$ is divided by two because the groups are balanced around their combined mean. The variances are intended to be equal. I think the sample sizes seem to be intended to be equal as well (although the notation is ambiguous on that), but it could be that they mean that the mean of the samples could diverge from the grand mean by having a greater sample size in one group vs. the other.
The grand mean is halfway between the two means, so one group is above the grand mean and the other is below it. | Linear model of 2 samples t-test
This is just the standard idea of the data generating process that lies behind a run of the mill t-test.
$\alpha$ is just the difference between the two groups' means in raw units (i.e., not standa |
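A quick R sketch of this data generating process (the numbers and object names are my own): with a $\pm 1$ group code, the fitted slope is $\alpha/2$, half the raw difference between the group means, and the intercept is the grand mean.
set.seed(1)
n <- 50
g <- rep(c(-1, 1), each = n)             # effect-coded group membership
mu <- 10; alpha <- 3                     # grand mean and raw difference between the groups
y <- mu + (alpha / 2) * g + rnorm(2 * n)
coef(lm(y ~ g))                          # intercept ~ mu, slope ~ alpha / 2
tapply(y, g, mean)                       # group means sit near mu - alpha/2 and mu + alpha/2
t.test(y ~ factor(g), var.equal = TRUE)  # the equivalent classical two-sample t-test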
49,875 | Comparing two classifiers from a binary response | Why use accuracy?
The accuracy metric (what you call misclassification) is horrible; don't use it. However, there are reasons why it is used.
Sometimes, that's what you care about. It doesn't matter whether the spam filter misclassifies 5 spam emails by one percent or by 40 percent; you will still get 5 spam emails. In this case it makes sense to optimize your model with respect to some weighted accuracy measure.
It's easy to interpret, especially for a lay person. "I have a model that predicts whether a person has cancer correctly 90 percent of the time" is easier to understand than "the model has an AUROC of 93%". Sadly, this is exactly the case where an error based on the probabilities would be more appropriate.
Why rank metrics (ROC) and not continuous metrics?
Good ranks => good probabilities.
A reasonable assumption is that if your model predicts the ranks well, then you can also make it predict the probabilities well. If the ROC is good but the Brier score is horrible, it is possible to refit the model's output with some monotone function and get the correct probabilities without much trouble. So basically, by not using those probability-based measures, you just don't have to bother with that step.
Outliers. Brier squares the errors, and entropy might go to infinity. A few extremely good hits or misclassifications can move your whole model's performance.
All metrics are arbitrary. There is nothing in the universe telling you that misclassification by 15 percent is 2.25 times worse than misclassification by 10 percent (Brier), or even that misclassification of a positive example has exactly the same weight as misclassification of a negative example. You should know what error/cost function you care about and optimize for it. E.g. if I want to earn the most money on the stock market, I will use an error where a loss of 1 eur gives me error 1 and a loss of 10 eur gives error 10, not 10^2.
Conclusion
Use the error function that you want to minimize; if you don't have one, use ROC. | Comparing two classifiers from a binary response | Why use accuracy?
Accuracy metric (what you call misclassification) is horrible, don't use it. However, there are reason why it is used.
Sometimes, that's what you care about. It doesn't matter if t | Comparing two classifiers from a binary response
Why use accuracy?
Accuracy metric (what you call misclassification) is horrible, don't use it. However, there are reason why it is used.
Sometimes, that's what you care about. It doesn't matter if the spam filter, missclassify 5 spam emails by one percent or by 40 percent, you will still get 5 spam emails. In this case it make sense to optimize your model with respect to some weighted accuracy measure.
It's easy to interpret, especially for a lay person. I have model that predict if a percent has a cancer 90 percent times correctly, is easier to understand than the model has 93 AUROC. Sadly, this is exactly the case, where the error considering the probabilities would be more appropriate.
Why rank metrics (ROC) and not continuous metrics?
Good ranks => good probabilities.
Reasonable assumption is that if your model predicts the ranks good, then you can also make it to predict the probabilities good. If the ROC is good, but brier score is horrible. it is possible just to refit the models output with some monotone function and get the correct probabilities without much troubles. So basically, by not using those probability based measures, you just don't have bother with it.
Outliers. Brier will square errors, entropy might go to infinity. Few extremely good hits or missclassifications, can move your whole model performance.
All metrics are arbitrary. There is nothing in universe telling you that missclassification by 15 percent is 2.25 times worse than missclassification by 10 percent (brier), or even that missclassification of a positive example has exactly the same weight as missclassification of a negative example. You should know what error/cost function you care about and optimize for it. E.g. I want to ear most money on a stock market, so I will use error where loss of 1 eur will give me error 1 and loss of 10 eur will give error 10 and not 10^2
Conclusion
Use error function that you wan't to minimize, if you don't have one, use ROC | Comparing two classifiers from a binary response
Why use accuracy?
Accuracy metric (what you call misclassification) is horrible, don't use it. However, there are reason why it is used.
Sometimes, that's what you care about. It doesn't matter if t |
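For concreteness, here is a small hand-rolled R sketch of the metrics discussed above, applied to two simulated classifiers (everything in it, including the made-up classifiers, is my own illustration rather than part of the answer):
brier <- function(p, y) mean((p - y)^2)                      # quadratic (Brier) score
logloss <- function(p, y, eps = 1e-15) {                     # cross-entropy, clipped so it stays finite
  p <- pmin(pmax(p, eps), 1 - eps)
  -mean(y * log(p) + (1 - y) * log(1 - p))
}
auc <- function(p, y) {                                      # rank-based AUC (Mann-Whitney form)
  r <- rank(p); n1 <- sum(y == 1); n0 <- sum(y == 0)
  (sum(r[y == 1]) - n1 * (n1 + 1) / 2) / (n1 * n0)
}
acc <- function(p, y, cut = 0.5) mean((p > cut) == y)        # accuracy at an arbitrary cutoff
set.seed(1)
y <- rbinom(200, 1, 0.3)
p1 <- plogis(qlogis(0.3) + 2.0 * (y - 0.3) + rnorm(200, sd = 0.8))  # two made-up classifiers
p2 <- plogis(qlogis(0.3) + 1.0 * (y - 0.3) + rnorm(200, sd = 0.2))
sapply(list(clf1 = p1, clf2 = p2),
       function(p) c(accuracy = acc(p, y), brier = brier(p, y),
                     logloss = logloss(p, y), auc = auc(p, y)))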
49,876 | Comparing two classifiers from a binary response | I'd encourage you to think about loss. Each algorithm is outputting a distribution which represents its "bet" for a value of 0 or 1. What should its loss be for such a bet, given the actual value? What would your ideal algorithm minimize? Generally it would be expected loss, but it could be something else (e.g., loss at 95% confidence).
Entropy would treat this as an information model; i.e., how many bits would it cost the algorithm to output the actual value. Indeed, if it is completely wrong, it would cost an infinite number of bits. To avoid that you could use Jensen-Shannon Divergence. Note that in its generalized form, you could also include priors as part of it.
Interestingly, you ignored the option of a simple linear loss, $|\hat{p}_i - y_i|$, which would generally be the first thing to consider after misclassification which is 0/1 loss. | Comparing two classifiers from a binary response | I'd encourage you to think about loss. Each algorithm is outputting a distribution which represents its "bet" for a value of 0 or 1. What should its loss be for such a bet, given the actual value? Wha | Comparing two classifiers from a binary response
I'd encourage you to think about loss. Each algorithm is outputting a distribution which represents its "bet" for a value of 0 or 1. What should its loss be for such a bet, given the actual value? What would your ideal algorithm minimize? Generally it would be expected loss, but it could be something else (e.g., loss at 95% confidence).
Entropy would treat this as an information model; i.e., how many bits would it cost the algorithm to output the actual value. Indeed, if it is completely wrong, it would cost an infinite number of bits. To avoid that you could use Jensen-Shannon Divergence. Note that in its generalized form, you could also include priors as part of it.
Interestingly, you ignored the option of a simple linear loss, $|\hat{p}_i - y_i|$, which would generally be the first thing to consider after misclassification which is 0/1 loss. | Comparing two classifiers from a binary response
I'd encourage you to think about loss. Each algorithm is outputting a distribution which represents its "bet" for a value of 0 or 1. What should its loss be for such a bet, given the actual value? Wha |
49,877 | Combining two linear regression model into a single linear model using covariates | It sounds like you want a fully interacted model.
In which case, you interact all the terms with the dummy variable:
$Y_i = \beta_0 + \beta_1 X_{1,i} + \beta_2 X_{2,i} + \beta_3 X_{3,i} + \beta_4 (X_{1,i}*X_{3,i}) + \beta_5 (X_{2,i}*X_{3,i}) + \epsilon_i$
$\beta_1$ and $\beta_2$ are the effects of $X_1$ and $X_2$ when $X_{3,i} = 0$
$\beta_1 + \beta_4$ is the effect of $X_1$ when $X_{3,i} = 1$
$\beta_2 + \beta_5$ is the effect of $X_2$ when $X_{3,i} = 1$ | Combining two linear regression model into a single linear model using covariates | It sounds like you want a fully interacted model.
In which case, you interact all the terms with the dummy variable:
$Y_i = \beta_0 + \beta_1 X_{1,i} + \beta_2 X_{2,i} + \beta_3 X_{3,i} + \beta_4 (X_{ | Combining two linear regression model into a single linear model using covariates
It sounds like you want a fully interacted model.
In which case, you interact all the terms with the dummy variable:
$Y_i = \beta_0 + \beta_1 X_{1,i} + \beta_2 X_{2,i} + \beta_3 X_{3,i} + \beta_4 (X_{1,i}*X_{3,i}) + \beta_5 (X_{2,i}*X_{3,i}) + \epsilon_i$
$\beta_1$ and $\beta_2$ are the effects of $X_1$ and $X_2$ when $X_{3,i} = 0$
$\beta_1 + \beta_4$ is the effect of $X_1$ when $X_{3,i} = 1$
$\beta_2 + \beta_5$ is the effect of $X_2$ when $X_{3,i} = 1$ | Combining two linear regression model into a single linear model using covariates
It sounds like you want a fully interacted model.
In which case, you interact all the terms with the dummy variable:
$Y_i = \beta_0 + \beta_1 X_{1,i} + \beta_2 X_{2,i} + \beta_3 X_{3,i} + \beta_4 (X_{ |
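A short R sketch of the fully interacted model (the simulated data and coefficient values are my own, chosen only to show how the group-specific slopes are recovered):
set.seed(1)
n <- 200
x1 <- rnorm(n); x2 <- rnorm(n); x3 <- rbinom(n, 1, 0.5)   # x3 is the 0/1 dummy
y <- 1 + 2 * x1 - 1 * x2 + 0.5 * x3 + 1.5 * x1 * x3 - 2 * x2 * x3 + rnorm(n)
fit <- lm(y ~ (x1 + x2) * x3)         # expands to x1 + x2 + x3 + x1:x3 + x2:x3
coef(fit)
coef(fit)["x1"] + coef(fit)["x1:x3"]  # effect of x1 when x3 = 1
coef(fit)["x2"] + coef(fit)["x2:x3"]  # effect of x2 when x3 = 1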
49,878 | Robustness of the Student t-test to non-Gaussian data | Here is an attempt at answering the question using numerical experiments: using Monte Carlo estimation it is easy to determine the rate of type I errors for the test with a given distribution of input data. Here I try data from the following distributions:
Normally distributed data: here the t-test is guaranteed to work.
Samples from the uniform distribution on $[-1,1]$: this is a prototype for a distribution with light tails (or rather, the extreme case of no tails).
The double-exponential distribution: this is a distribution with heavier tails than the normal distribution has.
A shifted exponential distribution, $\mathrm{Exp}(1) - 1$: this is a very asymmetric distribution, with a tail only on one side.
The discrete uniform distribution on the set $\{-1,+1\}$: this could be seen as an extreme case of a bi-modal distribution.
The discrete distribution with $P(X=-1) = 0.9$ and $P(X=9)=0.1$: this is very far from a normal distribution because it is both discrete and very asymmetric.
Since we expect the test to get more accurate as $n$ increases, I try only small and moderate values of $n$, namely $n \in \{10, 30, 100\}$. For the significance level I choose the commonly used value $\alpha = 5\%$.
My experiment is performed using the following R script: the script simulates $N=1,000,000$ datasets of size $n$, applies the t-test and counts how often $H_0\colon \mu=0$ is (wrongly) rejected. If the t-test still works, this should be the case in $5\%$ of the cases; any deviation from $5\%$ indicates that for the given distribution and $n$ the t-test did not perform optimally.
Edit: As requested by the OP, I have changed the code to also perform the same experiments for the z-test, so that the performance of both tests can be compared.
set.seed(1)
try.one <- function(gen, n, N=1000000, alpha=0.05) {
crit.t <- qt(1 - alpha/2, n-1)
reject.t <- 0
crit.z <- qnorm(1 - alpha/2)
reject.z <- 0
for (j in 1:N) {
X <- gen(n)
Z <- sqrt(n) * mean(X) / sd(X)
if (abs(Z) > crit.t) {
reject.t <- reject.t + 1
}
if (abs(Z) > crit.z) {
reject.z <- reject.z + 1
}
}
p.t <- reject.t/N
p.z <- reject.z/N
list(prob.t=p.t, sd.t=sqrt(p.t*(1-p.t)/N), prob.z=p.z, sd.z=sqrt(p.z*(1-p.z)/N))
}
distributions <- c("normal", "uniform", "double exponential", "exponential",
"discrete", "asym. discrete")
try.all <- function() {
dist.name <- character(0)
nn <- numeric(0)
fp.rate.t <- numeric(0)
std.err.t <- numeric(0)
fp.rate.z <- numeric(0)
std.err.z <- numeric(0)
for (dist in distributions) {
if (dist == "normal") {
gen <- rnorm
} else if (dist == "uniform") {
gen <- function(n) runif(n, -1, 1)
} else if (dist == "double exponential") {
gen <- function(n) rexp(n) * sample(c(-1,1), n, replace=TRUE)
} else if (dist == "exponential") {
gen <- function(n) rexp(n) - 1
} else if (dist == "discrete") {
gen <- function(n) sample(c(-1,1), n, replace=TRUE)
} else if (dist == "asym. discrete") {
gen <- function(n) sample(c(-1, 9), n, replace=TRUE, prob=c(0.9,0.1))
}
for (n in c(10, 30, 100)) {
row <- try.one(gen, n)
dist.name <- c(dist.name, dist)
nn <- c(nn, n)
fp.rate.t <- c(fp.rate.t, row$prob.t)
std.err.t <- c(std.err.t, row$sd.t)
fp.rate.z <- c(fp.rate.z, row$prob.z)
std.err.z <- c(std.err.z, row$sd.z)
}
}
data.frame(dist.name, n=nn, fp.rate.t, std.err.t, fp.rate.z, std.err.z)
}
print(try.all(), row.names=FALSE)
The output, after some minutes, is
dist.name n fp.rate.t std.err.t fp.rate.z std.err.z
normal 10 0.050029 0.0002180048 0.081694 0.0002738980
normal 30 0.050059 0.0002180667 0.059824 0.0002371605
normal 100 0.049930 0.0002178004 0.052726 0.0002234859
uniform 10 0.054490 0.0002269820 0.084445 0.0002780540
uniform 30 0.050906 0.0002198058 0.060263 0.0002379735
uniform 100 0.050116 0.0002181843 0.053001 0.0002240355
double exponential 10 0.042272 0.0002012090 0.074645 0.0002628177
double exponential 30 0.047506 0.0002127185 0.057410 0.0002326244
double exponential 100 0.049646 0.0002172125 0.052526 0.0002230852
exponential 10 0.099738 0.0002996503 0.130045 0.0003363529
exponential 30 0.072758 0.0002597389 0.082090 0.0002745018
exponential 100 0.058040 0.0002338191 0.060755 0.0002388804
discrete 10 0.021386 0.0001446673 0.109666 0.0003124730
discrete 30 0.042853 0.0002025256 0.042853 0.0002025256
discrete 100 0.056972 0.0002317891 0.056972 0.0002317891
asym. discrete 10 0.350463 0.0004771150 0.350463 0.0004771150
asym. discrete 30 0.044408 0.0002059998 0.191153 0.0003932093
asym. discrete 100 0.067916 0.0002516017 0.067916 0.0002516017
Some observations about these results:
The rate of type I errors for the t-test is listed in the column fp.rate.t (and for the z-test in fp.rate.z). As expected, for the normal distribution the t-test's rate is very close to $5\%$.
In nearly all cases, the rate of type I errors gets closer to $5\%$ as $n$ increases, sometimes from below and sometimes from above. The only exception is the asymmetric discrete distribution.
The weight of the tails seems not to have too much effect: the test performs reasonably well for both uniform and double exponential distributions.
For small sample size ($n=10$) there are notable deviations of the type I error rate from $5\%$, both for the discrete distributions and for the asymmetric distributions.
The worst case is the discrete, asymmetric distribution where the t-test at $5\%$-level shows type I errors in $35\%$ of the cases. Given this huge discrepancy, I would argue that care is required when attempting to use the $t$-test for distributions which are far from normal.
Edit: Using the updated code, we can also compare the performance of the t-test to the performance of a z-test (still using the sample variance):
As expected, for normally distributed data the z-test performs worse than the t-test (because we didn't use the exact variance). The effect is quite noticeable for $n=10$ and nearly disappears for $n=100$. For $n=10$, the t-test seems superior to the $z$-test (using estimated variances) for all examples tested.
The worst case (asymmetric+discrete, $n=10$) is equally bad for both tests.
For $n=100$ the results of both tests are very similar, but in some cases the t-test seems to perform slightly better.
This experiment only considers the type I error, but experiments along similar lines could be used to compare type II errors between distributions. | Robustness of the Student t-test to non-Gaussian data | Here is an attempt at answering the question using numerical experiments: using Monte Carlo estimation it is easy to determine the rate of type I errors for the test with a given distribution of input | Robustness of the Student t-test to non-Gaussian data
Here is an attempt at answering the question using numerical experiments: using Monte Carlo estimation it is easy to determine the rate of type I errors for the test with a given distribution of input data. Here I try data from the following distributions:
Normally distributed data: here the t-test is guaranteed to work.
Samples from the uniform distribution on $[-1,1]$: this is a prototype for a distribution with light tail (or rather, the extreme case of no tails).
The double-exponential distribution: this is a distribution with heavier tails than the normal distribution has.
A shifted exponential distribution, $\mathrm{Exp}(1) - 1$: this is a very asymmetric distribution, with a tail only on one side.
The discrete uniform distribution on the set $\{-1,+1\}$: this could be seen as an extreme case of a bi-modal distribution.
The discrete distribution with $P(X=-1) = 0.9$ and $P(X=9)=0.1$: this is very far from a normal distribution because it is both discrete and very asymmetric.
Since we expect the test to get more accurate as $n$ increases, I try only small and moderate values of $n$, namely $n \in \{10, 30, 100\}$. For the significance level I choose the commonly used value $\alpha = 5\%$.
My experiment is performed using the following R script: the script simulates $N=1,000,000$ dataset of size $n$, applies the t-test and counts how often $H_0\colon \mu=0$ is (wrongly) rejected. If the t-test still works, this should be the case in $5\%$ of the cases, any deviation from $5\%$ indicates that for the given distribution and $n$ the t-test did not perform optimally.
Edit: As requested by the OP, I have changed the code to also perform the same experiments for the z-test, so that the performance of both tests can be compared.
set.seed(1)
try.one <- function(gen, n, N=1000000, alpha=0.05) {
crit.t <- qt(1 - alpha/2, n-1)
reject.t <- 0
crit.z <- qnorm(1 - alpha/2)
reject.z <- 0
for (j in 1:N) {
X <- gen(n)
Z <- sqrt(n) * mean(X) / sd(X)
if (abs(Z) > crit.t) {
reject.t <- reject.t + 1
}
if (abs(Z) > crit.z) {
reject.z <- reject.z + 1
}
}
p.t <- reject.t/N
p.z <- reject.z/N
list(prob.t=p.t, sd.t=sqrt(p.t*(1-p.t)/N), prob.z=p.z, sd.z=sqrt(p.z*(1-p.z)/N))
}
distributions <- c("normal", "uniform", "double exponential", "exponential",
"discrete", "asym. discrete")
try.all <- function() {
dist.name <- character(0)
nn <- numeric(0)
fp.rate.t <- numeric(0)
std.err.t <- numeric(0)
fp.rate.z <- numeric(0)
std.err.z <- numeric(0)
for (dist in distributions) {
if (dist == "normal") {
gen <- rnorm
} else if (dist == "uniform") {
gen <- function(n) runif(n, -1, 1)
} else if (dist == "double exponential") {
gen <- function(n) rexp(n) * sample(c(-1,1), n, replace=TRUE)
} else if (dist == "exponential") {
gen <- function(n) rexp(n) - 1
} else if (dist == "discrete") {
gen <- function(n) sample(c(-1,1), n, replace=TRUE)
} else if (dist == "asym. discrete") {
gen <- function(n) sample(c(-1, 9), n, replace=TRUE, prob=c(0.9,0.1))
}
for (n in c(10, 30, 100)) {
row <- try.one(gen, n)
dist.name <- c(dist.name, dist)
nn <- c(nn, n)
fp.rate.t <- c(fp.rate.t, row$prob.t)
std.err.t <- c(std.err.t, row$sd.t)
fp.rate.z <- c(fp.rate.z, row$prob.z)
std.err.z <- c(std.err.z, row$sd.z)
}
}
data.frame(dist.name, n=nn, fp.rate.t, std.err.t, fp.rate.z, std.err.z)
}
print(try.all(), row.names=FALSE)
The output, after some minutes, is
dist.name n fp.rate.t std.err.t fp.rate.z std.err.z
normal 10 0.050029 0.0002180048 0.081694 0.0002738980
normal 30 0.050059 0.0002180667 0.059824 0.0002371605
normal 100 0.049930 0.0002178004 0.052726 0.0002234859
uniform 10 0.054490 0.0002269820 0.084445 0.0002780540
uniform 30 0.050906 0.0002198058 0.060263 0.0002379735
uniform 100 0.050116 0.0002181843 0.053001 0.0002240355
double exponential 10 0.042272 0.0002012090 0.074645 0.0002628177
double exponential 30 0.047506 0.0002127185 0.057410 0.0002326244
double exponential 100 0.049646 0.0002172125 0.052526 0.0002230852
exponential 10 0.099738 0.0002996503 0.130045 0.0003363529
exponential 30 0.072758 0.0002597389 0.082090 0.0002745018
exponential 100 0.058040 0.0002338191 0.060755 0.0002388804
discrete 10 0.021386 0.0001446673 0.109666 0.0003124730
discrete 30 0.042853 0.0002025256 0.042853 0.0002025256
discrete 100 0.056972 0.0002317891 0.056972 0.0002317891
asym. discrete 10 0.350463 0.0004771150 0.350463 0.0004771150
asym. discrete 30 0.044408 0.0002059998 0.191153 0.0003932093
asym. discrete 100 0.067916 0.0002516017 0.067916 0.0002516017
Some observations about these results:
The rate of type I errors is listed in the column fp.rate. As expected, for the normal distribution this is very close to $5\%$.
In nearly all cases, the rate of type I errors gets closer to $5\%$ as $n$ increases, sometimes from below and sometimes from above. The only exception is the asymmetric discrete distribution.
The weight of the tails seems not to have too much effect: the test performs reasonably well for both uniform and double exponential distributions.
For small sample size ($n=10$) there are notable deviation of the type I error rate from $5\%$, both for the discrete distributions and for the asymmetric distributions.
The worst case is the discrete, asymmetric distribution where the t-test at $5\%$-level shows type I errors in $35\%$ of the cases. Given this huge discrepancy, I would argue that care is required when attempting to use the $t$-test for distributions which are far from normal.
Edit: Using the updated code, we can also compare the performance of the t-test to the performance of a z-test (still using the sample variance):
As expected, for normally distributed data the z-test performs worse that the t-test (because we didn't use the exact variance). The effect is quite noticeable for $n=10$ and nearly disappears for $n=100$. For $n=10$, the t-test seems superior to the $z$-test (using estimated variances) for all examples tested.
The worst case (assymetric+discrete, $n=10$) is equally bad for both tests.
For $n=100$ the results of both tests are very similar, but in some cases the t-test seems to perform slightly better.
This experiment only considers the type I error, but experiments along similar lines could be used to compare type II errors between distributions. | Robustness of the Student t-test to non-Gaussian data
Here is an attempt at answering the question using numerical experiments: using Monte Carlo estimation it is easy to determine the rate of type I errors for the test with a given distribution of input |
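A short extension in the direction mentioned at the end of the answer (the shift of 0.5, the smaller N and the function name are my own choices): estimating power, and hence the type II error, when the true mean is not zero.
power.sim <- function(gen, n, shift = 0.5, N = 10000, alpha = 0.05) {
  crit <- qt(1 - alpha / 2, n - 1)
  hits <- replicate(N, {
    X <- gen(n) + shift                      # the data now have true mean equal to shift
    abs(sqrt(n) * mean(X) / sd(X)) > crit
  })
  mean(hits)                                 # estimated power; the type II error rate is 1 minus this
}
power.sim(function(n) rexp(n) - 1, n = 30)   # shifted-exponential example
power.sim(rnorm, n = 30)                     # normal benchmark for comparison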
49,879 | Meta-analysis of proportions | This is a meta-analysis of proportions. Just as you mentioned, the m-a of proportions is a little different than other types of meta-analysis- it includes studies that do not use controls. You can use R to do a meta-analysis of proportions. I recently made a tutorial on that on YouTube and shared my code on Github. This hands-on tutorial provides a step-by-step guide showing you how to conduct a full meta-analysis of proportions, including all the goals you mentioned in your post. My code allows you to conduct your analysis with either the logit transformation or double-arcsine transformation. You can also do it without transformation using my code. The R script shown in the video is readily adaptable for you to use for your own analyses.
Check out the tutorial here: https://youtu.be/2wbXTFvaRnM.
Download my code here: https://github.com/wnk4242/meta-analysis-of-proportions | Meta-analysis of proportions | This is a meta-analysis of proportions. Just as you mentioned, the m-a of proportions is a little different than other types of meta-analysis- it includes studies that do not use controls. You can use | Meta-analysis of proportions
This is a meta-analysis of proportions. Just as you mentioned, the m-a of proportions is a little different than other types of meta-analysis- it includes studies that do not use controls. You can use R to do a meta-analysis of proportions. I recently made a tutorial on that on YouTube and shared my code on Github. This hands-on tutorial provides a step-by-step guide showing you how to conduct a full meta-analysis of proportions, including all the goals you mentioned in your post. My code allows you to conduct your analysis with either the logit transformation or double-arcsine transformation. You can also do it without transformation using my code. The R script shown in the video is readily adaptable for you to use for your own analyses.
Check out the tutorial here: https://youtu.be/2wbXTFvaRnM.
Download my code here: https://github.com/wnk4242/meta-analysis-of-proportions | Meta-analysis of proportions
This is a meta-analysis of proportions. Just as you mentioned, the m-a of proportions is a little different than other types of meta-analysis- it includes studies that do not use controls. You can use |
49,880 | Meta-analysis of proportions | Yes it is perfectly possible to do this either in Stata or in R. Since I use R I offer a few hints to get you going.
A list of software for meta-analysis in R is available in the CRAN Task View (Disclaimer, I maintain it). There are several packages there which will do what you are proposing. I personally use metafor but there are other options.
You will almost certainly need to choose a transformation for your proportions before doing the meta-analysis and then back-transforming for interpretation. This is so your estimates are more approximately normally distributed. I would suggest the logit is worth considering. If you have any zeroes you will need to deal with them by adding a constant.
Adding moderator variables presents no new problems but with 20 primary studies you cannot add too many at once.
After fitting there is a range of graphical techniques you can use to display your model. Forest plots are commonly used for this but there are others available.
And finally, if you do use metafor, it is worth checking the author's web-site, which has many examples and hints. Particularly relevant may be these two here and here | Meta-analysis of proportions | Yes it is perfectly possible to do this either in Stata or in R. Since I use R I offer a few hints to get you going.
A list of software for meta-analysis in R is available in the CRAN Task View (Discl | Meta-analysis of proportions
Yes it is perfectly possible to do this either in Stata or in R. Since I use R I offer a few hints to get you going.
A list of software for meta-analysis in R is available in the CRAN Task View (Disclaimer, I maintain it). There are several packages there which will do what you are proposing. I personally use metafor but there are other options.
You will almost certainly need to choose a transformation for your proportions before doing the meta-analysis and then back-transforming for interpretation. This is so your estimates are more approximately normally distributed. I would suggest the logit is worth considering. If you have any zeroes you will need to deal with them by adding a constant.
Adding moderator variables presents no new problems but with 20 primary studies you cannot add too many at once.
After fitting there is a range of graphical techniques you can use to display your model. Forest plots are commonly used for this but there are others available.
And finally it is worth, if you do use metafor to check the author's web-site which has many examples and hints. Particularly relevant may be these two here and here | Meta-analysis of proportions
Yes it is perfectly possible to do this either in Stata or in R. Since I use R I offer a few hints to get you going.
A list of software for meta-analysis in R is available in the CRAN Task View (Discl |
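A minimal metafor sketch along the lines suggested above (the counts and the moderator are invented purely for illustration): logit-transform the proportions with escalc, fit a mixed-effects model with a moderator, then back-transform for interpretation.
library(metafor)
dat <- data.frame(xi  = c(12, 30, 8, 45, 20, 16),      # event counts in the primary studies
                  ni  = c(100, 150, 60, 200, 90, 85),  # study sample sizes
                  mod = c(0, 1, 1, 0, 1, 0))           # an invented study-level moderator
dat <- escalc(measure = "PLO", xi = xi, ni = ni, data = dat)  # logit proportions and sampling variances
res <- rma(yi, vi, mods = ~ mod, data = dat)                  # random/mixed-effects meta-regression
predict(res, newmods = c(0, 1), transf = transf.ilogit)       # pooled proportions at mod = 0 and mod = 1
forest(res, transf = transf.ilogit, refline = NA)             # forest plot back on the proportion scale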
49,881 | What is the difference between "inverse reinforcement learning" and supervised learning? | Disclaimer: I am an MSc student of control theory (with an engineering background) who is starting his thesis on reinforcement learning. I am just beginning to get a feel for the field. Kind of like I am just taking my first walk around the lake of machine learning. So my information may not be spot on. I am answering because I FEEL that I understand the subtle difference. I also get the feeling from your request for an example that you would like an application-oriented example, not a mathematical abstraction of it.
Differences
- IRL frames its problem as an MDP and uses the notion of an 'agent' to select 'actions' that maximize the net reward. The key difference is that in IRL, supervised learning techniques (i.e. data fitting) are used to obtain the reward function. Supervised learning uses labeled data in order to approximate a mapping.
Example of learning ground distance from images
Supervised learning: Using features in images with labeled ground distances to train a neural network's weights to find ground distances in the general case.
IRL: Using labeled data to derive a reward function, which would be a mapping from features to rewards. Letting an agent explore the space of features and coming up with a policy that selects the best actions, which in this case would be an estimation of the ground distance.
For this specific task I described, it seems trivial since using RL for the classification of image distance when simpler supervised learning suffices is redundant. However, in RL situations where the definition of reward functions are difficult but it can be advantageous to use RL, IRL can prove very useful.
For example, if one were to imagine using RL to teach acrobatic maneuvers to helicopters (see the paper by Abbeel et al.), using IRL to obtain reward functions can be very useful. Once the reward functions for the maneuvers are obtained, these can be used to teach other helicopters (with different aerodynamic models but similar controls) how to perform these maneuvers. Using supervised learning to come up with a mapping from states to controls won't work, since the different aircraft would have different aerodynamic models.
Reference:
* Ng, A. Y., & Russell, S. J. (2000, June). Algorithms for inverse reinforcement learning. In ICML (pp. 663-670).
* Abbeel, P., Coates, A., & Ng, A. Y. (2010). Autonomous helicopter aerobatics through apprenticeship learning. The International Journal of Robotics Research. | What is the difference between "inverse reinforcement learning" and supervised learning? | Disclaimer: I am MSc student of Control theory (with engineering background) who is starting his thesis on Reinforcement Learning. I am just beginning to get a feel for the field. Kinda like I just am | What is the difference between "inverse reinforcement learning" and supervised learning?
Disclaimer: I am MSc student of Control theory (with engineering background) who is starting his thesis on Reinforcement Learning. I am just beginning to get a feel for the field. Kinda like I just am taking my first walk around the lake of machine learning. So my information may not be spot on. I am answering because I FEEL that I understand the subtle difference. I also get the feeling from your request of example that you would like an application oriented example, not a mathematical abstraction of it.
Differences
- IRL frames its problem as an MDP and uses the notion of an 'agent' to select 'actions' that maximize the net reward. The key difference is, in IRL supervised learning techniques (ie data fitting) are used to obtain the reward function. Supervised learning uses labeled data in order approximate a mapping.
Example of learning ground distance from images
Supervised learning: Using features in images with labeled ground distances to train a Neural network weights to find ground distances in the general case.
IRL: Using labeled data to derive a reward function, which would be a mapping from features to rewards. Letting an agent explore the space of features and coming up with a policy that selects the best actions, which in this case would be an estimation of the ground distance.
For this specific task I described, it seems trivial since using RL for the classification of image distance when simpler supervised learning suffices is redundant. However, in RL situations where the definition of reward functions are difficult but it can be advantageous to use RL, IRL can prove very useful.
For example, if one were to imagine using RL to teach acrobatic maneuvers to helicopters (Paper by Abbeel Et Al), using IRL to obtain reward functions can be very useful. Once the reward functions for the maneuvers are obtained, this can be used to teach these maneuvers to others helicopters (with different aerodynamic models but similar controls) how to perform these maneuvers. Using supervised learning to come up with a mapping of states to controls wont work, since the different aircrafts would have different aerodynamic models.
Reference:
* Ng, A. Y., & Russell, S. J. (2000, June). Algorithms for inverse reinforcement learning. In Icml (pp. 663-670).
* Abbeel, P., Coates, A., & Ng, A. Y. (2010). Autonomous helicopter aerobatics through apprenticeship learning. The International Journal of Robotics Research. | What is the difference between "inverse reinforcement learning" and supervised learning?
Disclaimer: I am MSc student of Control theory (with engineering background) who is starting his thesis on Reinforcement Learning. I am just beginning to get a feel for the field. Kinda like I just am |
49,882 | What is the difference between "inverse reinforcement learning" and supervised learning? | Inverse reinforcement learning (IRL) can be seen as an instance of supervised learning. The data is the demonstrations and the target is the reward function. So the 'learning' task is just to look for the mapping from the space of demonstrations to reward functions, under the constraints of the specification of the MDP.
Concrete example: Let's use Bayesian IRL to illustrate.
Given some MDP without a reward function, $(\mathcal{S}, \mathcal{A}, T, \gamma)$, and a set of demonstrations $\Xi = (\xi_1, \ldots, \xi_M)$, where each demo trajectory $\xi_i = ((s^i_1,a^i_1), \ldots, (s^i_H,a^i_H))$ is a sequence of state-action pairs, the BIRL task is to find
$$
\Pr(R \mid \Xi)
$$
which is easily expanded as $\Pr(R \mid \Xi) \propto \Pr(\Xi \mid R) \Pr(R)$ by Bayes' rule. The 'data' ($\Xi$) are also often assumed to be iid. From this formulation it is obvious that it's a supervised learning problem. The devil is only in the details of computing the likelihood.
Important: IRL seeks the reward functions that 'explain' the demonstrations. Do not confuse this with apprenticeship learning (AL), where the primary interest is a policy which can generate the seen demonstrations (although this is often, but not necessarily, obtained via the reward).
Additionally, there is behavior cloning, which is also closely related. Given some examples of a behavior, behavior cloning simply tries to reproduce it. This could mean generating behavior that 'matches' the statistics of the observed behavior. It is easy to see how this is supervised learning. E.g. given some demonstrations, train a neural net to generate 'similar' behaviors given 'similar' situations.
P.S. Forgive my hand-wavy nature with the vocabulary. | What is the difference between "inverse reinforcement learning" and supervised learning? | Inverse reinforcement learning (IRL) can be seen an instance of supervised learning. The data is the demonstrations and the target is the reward function. So the 'learning' task is just to look for th | What is the difference between "inverse reinforcement learning" and supervised learning?
Inverse reinforcement learning (IRL) can be seen as an instance of supervised learning. The data is the demonstrations and the target is the reward function. So the 'learning' task is just to look for the mapping from the space of demonstrations to reward functions, under the constraints of the specification of the MDP.
Concrete example: Let's use Bayesian IRL to illustrate.
Given some MDP without reward function $(\mathcal{S}, \mathcal{A}, T, \gamma)$, and a set of demonstrations $\Xi = (\xi_1, \ldots, \xi_M)$ where each demo trajectory $\xi_i = ((s^i_1,a^i_1), \ldots, (s^i_H,a^i_H))$ is a set of state-action pairs. The BIRL task is to find,
$$
\Pr(R \mid \Xi)
$$
which is easily expanded as $\Pr(R \mid \Xi) \propto \Pr(\Xi \mid R) \Pr(R)$ by Bayes rule. The 'data' ($\Xi$) are also often assumed to be iid. From this formulation it is obvious that it's a supervised learning problem. The devil is only in the details of computing the likelihood.
Important: IRL seeks the reward function that 'explains' the demonstrations. Do not confuse this with apprenticeship learning (AL), where the primary interest is a policy which can generate the seen demonstrations (although this is often, but not necessarily, obtained via the reward).
Additionally, there is behavior cloning, which is also closely related. Given some examples of a behavior, behavior cloning simply tries to reproduce it. This could mean generating behavior that 'matches' the statistics of the observed behavior. It is easy to see how this is supervised learning. E.g. given some demonstrations, train a neural net to generate 'similar' behaviors given 'similar' situations.
P.S. Forgive my hand-wavy nature with the vocabulary. | What is the difference between "inverse reinforcement learning" and supervised learning?
Inverse reinforcement learning (IRL) can be seen an instance of supervised learning. The data is the demonstrations and the target is the reward function. So the 'learning' task is just to look for th |
49,883 | Why does the direction with highest eigenvalue have the largest semi-axis? | I think there are two ellipses that we could consider. First, consider the image of the unit circle under the linear map $x \mapsto A x$ for a symmetric PD $A \in \mathbb R^{n \times n}$. It is a standard result that the quadratic form $f(x) = x^T A x$ is maximized over unit vectors $x$ by the unit eigenvector $v_1$ with largest eigenvalue $\lambda_1$, and since $A v_1 = \lambda_1 v_1$, the ellipse formed by the image of the unit circle under this map has its largest semi-axis along $v_1$ with length $\lambda_1$, and so on for the other eigenpairs. So in this case we clearly have that the eigenvalues give the lengths of the semi-axes and the biggest semi-axis is for the biggest eigenvalue.
But now consider the contour $f(x) = 1$ for any $x \in \mathbb R^n$. Since $A$ is positive-definite we know that $f$ is a paraboloid, so its intersection with a horizontal plane (i.e. the contour) is an ellipse. In this case, we find that the shortest semi-axis is parallel to $v_1$, which makes sense because that's the direction that $f$ grows the fastest so we hit 1 the soonest. The largest semi-axis is $v_n$ since that's the direction in which $f$ is growing the slowest. Plugging $v_1$ in to $f$ we get $f(v_1) = v_1^T A v_1 = \lambda_1 v_1^T v_1 = \lambda_1$, not 1 as required, so the actual vector parallel to $v_1$ is $\frac{1}{\sqrt \lambda_1}v_1$. Does this help?
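A quick numerical check of these two facts (a NumPy sketch; the matrix A below is just an arbitrary positive-definite example):
import numpy as np
A = np.array([[3.0, 1.0], [1.0, 2.0]])            # symmetric positive-definite example
eigvals, eigvecs = np.linalg.eigh(A)              # eigh returns eigenvalues in ascending order
lam_max, v_max = eigvals[-1], eigvecs[:, -1]
# Maximize x^T A x over the unit circle by brute force.
theta = np.linspace(0, 2 * np.pi, 100000)
X = np.stack([np.cos(theta), np.sin(theta)])      # unit vectors
quad = np.einsum('ij,ij->j', X, A @ X)            # x^T A x for each unit vector x
print(quad.max(), lam_max)                        # both are approximately the largest eigenvalue
# On the contour x^T A x = 1, the point along v_max has length 1/sqrt(lam_max).
x_contour = v_max / np.sqrt(lam_max)
print(x_contour @ A @ x_contour)                  # approximately 1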
Bringing this back to PCA, let's say that our data consists of $m$ observations coming iid from $\mathcal N_2(\vec 0, \Sigma)$. Let's draw an ellipse around our data such that every point in the ellipse has likelihood greater than some cutoff $c$. This corresponds to a contour of the likelihood and can be found by $$
c = \frac{1}{2\pi \vert \Sigma \vert^{1/2}}\exp \left( -\frac12 x^T \Sigma^{-1} x\right) \iff x^T \Sigma^{-1} x = -2\log \left(2\pi c \vert \Sigma \vert^{1/2}\right)
$$
i.e. the ellipse that circles the data is a contour of the quadratic form $g(x) = x^T \Sigma^{-1} x$. Note that $\Sigma^{-1}v = \lambda v \implies \frac1\lambda v = \Sigma v$, so the eigenvectors defining the axes of this likelihood contour ellipse are the same as those for the covariance matrix $\Sigma$ but with inverted eigenvalues. | Why does the direction with highest eigenvalue have the largest semi-axis? | I think there are two ellipses that we could consider. First, consider the image of the unit circle with respect to the map $x \mapsto x^T A x$ for PD $A \in \mathbb R^{n \times n}$. It is a standard | Why does the direction with highest eigenvalue have the largest semi-axis?
I think there are two ellipses that we could consider. First, consider the image of the unit circle under the linear map $x \mapsto A x$ for a symmetric PD $A \in \mathbb R^{n \times n}$. It is a standard result that the quadratic form $f(x) = x^T A x$ is maximized over unit vectors $x$ by the unit eigenvector $v_1$ with largest eigenvalue $\lambda_1$, and since $A v_1 = \lambda_1 v_1$, the ellipse formed by the image of the unit circle under this map has its largest semi-axis along $v_1$ with length $\lambda_1$, and so on for the other eigenpairs. So in this case we clearly have that the eigenvalues give the lengths of the semi-axes and the biggest semi-axis is for the biggest eigenvalue.
But now consider the contour $f(x) = 1$ for any $x \in \mathbb R^n$. Since $A$ is positive-definite we know that $f$ is a paraboloid, so its intersection with a horizontal plane (i.e. the contour) is an ellipse. In this case, we find that the shortest semi-axis is parallel to $v_1$, which makes sense because that's the direction that $f$ grows the fastest so we hit 1 the soonest. The largest semi-axis is $v_n$ since that's the direction in which $f$ is growing the slowest. Plugging $v_1$ in to $f$ we get $f(v_1) = v_1^T A v_1 = \lambda_1 v_1^T v_1 = \lambda_1$, not 1 as required, so the actual vector parallel to $v_1$ is $\frac{1}{\sqrt \lambda_1}v_1$. Does this help?
Bringing this back to PCA, let's say that our data consists of $m$ observations coming iid from $\mathcal N_2(\vec 0, \Sigma)$. Let's draw an ellipse around our data such that every point in the ellipse has likelihood greater than some cutoff $c$. This corresponds to a contour of the likelihood and can be found by $$
c = \frac{1}{2\pi \vert \Sigma \vert^{1/2}}\exp \left( -\frac12 x^T \Sigma^{-1} x\right) \iff x^T \Sigma^{-1} x = -2\log \left(2\pi c \vert \Sigma \vert^{1/2}\right)
$$
i.e. the ellipse that circles the data is a contour of the quadratic form $g(x) = x^T \Sigma^{-1} x$. Note that $\Sigma^{-1}v = \lambda v \implies \frac1\lambda v = \Sigma v$, so the eigenvectors defining the axes of this likelihood contour ellipse are the same as those for the covariance matrix $\Sigma$ but with inverted eigenvalues. | Why does the direction with highest eigenvalue have the largest semi-axis?
I think there are two ellipses that we could consider. First, consider the image of the unit circle with respect to the map $x \mapsto x^T A x$ for PD $A \in \mathbb R^{n \times n}$. It is a standard |
49,884 | Forecasting models for time series with lots of zero values | The problem you are referring to is called sparse data analysis/intermittent demand analysis.The ACF/PACF is meaningless due to the false correlation induced by consecutive 0's. One earlier method to deal with this is called Croston's Method but lacks generality to deal with unusual values and level/trend changes in the data. Level/trend changes can be observed when examining the rate data (demand/# of 0's since last demand ) where demand is the actual non-zero observation. I have implemented a robust Croston-like approach in AUTOBOX (a piece of software that I helped to develop ). | Forecasting models for time series with lots of zero values | The problem you are referring to is called sparse data analysis/intermittent demand analysis.The ACF/PACF is meaningless due to the false correlation induced by consecutive 0's. One earlier method to | Forecasting models for time series with lots of zero values
The problem you are referring to is called sparse data analysis/intermittent demand analysis. The ACF/PACF is meaningless due to the false correlation induced by consecutive 0's. One earlier method to deal with this is called Croston's Method, but it lacks generality to deal with unusual values and level/trend changes in the data. Level/trend changes can be observed when examining the rate data (demand / # of 0's since last demand), where demand is the actual non-zero observation. I have implemented a robust Croston-like approach in AUTOBOX (a piece of software that I helped to develop). | Forecasting models for time series with lots of zero values
The problem you are referring to is called sparse data analysis/intermittent demand analysis.The ACF/PACF is meaningless due to the false correlation induced by consecutive 0's. One earlier method to |
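For reference, a minimal sketch of the basic Croston procedure mentioned in the answer above (the textbook version, not the robust AUTOBOX variant; the smoothing parameter and toy series are made up):
import numpy as np
def croston(y, alpha=0.1):
    # Basic Croston: smooth non-zero demand sizes and inter-demand intervals separately.
    # Assumes the series contains at least one non-zero demand.
    y = np.asarray(y, dtype=float)
    size, interval, q = None, None, 1            # q counts periods since the last demand
    for obs in y:
        if obs > 0:
            size = obs if size is None else alpha * obs + (1 - alpha) * size
            interval = q if interval is None else alpha * q + (1 - alpha) * interval
            q = 1
        else:
            q += 1
    return size / interval                        # per-period demand rate forecast
demand = [0, 0, 3, 0, 0, 0, 2, 0, 4, 0, 0, 1]     # toy intermittent series
print(croston(demand))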
49,885 | Does it make sense to multiply two embedding vectors? | Yes it does. Here you can find an example of a network that uses multiplication, among other methods, for combining embeddings. As described in my answer:
element-wise product $u*v$ is basically an interaction term; this can
catch similarities between values (big * big = bigger; small * small =
smaller), or the discrepancies (negative * positive = negative) (see
example here).
So it is a perfectly reasonable way of combining embeddings, but often, as in the above example, people use several different combination methods in parallel, to produce different kinds of features for the model. | Does it make sense to multiply two embedding vectors? | Yes it does. Here you can find an example of a network that uses multiplication, among other methods, for combining embeddings. As described in my answer
element-wise product $u*v$, is basically an intera | Does it make sense to multiply two embedding vectors?
Yes it does. Here you can find an example of a network that uses multiplication, among other methods, for combining embeddings. As described in my answer:
element-wise product $u*v$ is basically an interaction term; this can
catch similarities between values (big * big = bigger; small * small =
smaller), or the discrepancies (negative * positive = negative) (see
example here).
So it is a perfectly reasonable way of combining embeddings, but often, as in the above example, people use several different combination methods in parallel, to produce different kinds of features for the model. | Does it make sense to multiply two embedding vectors?
Yes it does. Here you can find example of network that uses multiplication, among other methods, for combining embeddings. As described in my answer
element-wise product $u*v$, is basically an intera |
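As a concrete illustration of the parallel combination methods mentioned in the answer above, a small NumPy sketch (the two vectors are random stand-ins for learned embeddings):
import numpy as np
u = np.random.randn(300)        # embedding of item/word 1 (stand-in)
v = np.random.randn(300)        # embedding of item/word 2 (stand-in)
features = np.concatenate([
    u,                          # raw embeddings
    v,
    u * v,                      # element-wise product: interaction features
    np.abs(u - v),              # element-wise distance: discrepancy features
])
print(features.shape)           # (1200,) -- fed to the next dense layer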
49,886 | Does it make sense to multiply two embedding vectors? | I have been working with word vectors for several weeks.
I suspect that in order to obtain a valid answer to something like: "what is blood color?"
the network will handle Vec(blood)*Vec(color) better than Vec(blood)+Vec(color) before computing the cosine similarity with all the database words.
Alas, I haven't tested it yet.
Some stop words should change the way we operate with vectors.
For example "I want non american food" should be calculated as:
Vec(eat)+Vec(food)-Vec(american)
My main problem with word vectors is how slow it is when you have to compute millions of cosine similarities with 300-dimensional vectors ... I haven't found a way to accelerate this. | Does it make sense to multiply two embedding vectors? | I have been working with word vectors for several weeks.
I suspect that in order to obtain a valid answer to something like: "what is blood color?"
the network will handle better Vec(blood)*Vec(color) | Does it make sense to multiply two embedding vectors?
I have been working with word vectors for several weeks.
I suspect that in order to obtain a valid answer to something like: "what is blood color?"
the network will handle Vec(blood)*Vec(color) better than Vec(blood)+Vec(color) before computing the cosine similarity with all the database words.
Alas, I haven't tested it yet.
Some stop words should change the way we operate with vectors.
For example "I want non american food" should be calculated as:
Vec(eat)+Vec(food)-Vec(american)
My main problem with word vectors is how slow it is when you have to compute millions of cosine similarities with 300-dimensional vectors ... I haven't found a way to accelerate this. | Does it make sense to multiply two embedding vectors?
I started working with words vectors for several weeks.
I suspect that in order to obtain a valid answer to something like: "what is blood color?"
the network will handle better Vec(blood)*Vec(color) |
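On the speed complaint in the answer above, the usual fix is to pre-normalise the whole vocabulary matrix once, so that the millions of similarity computations become a single matrix–vector product (a NumPy sketch with made-up sizes):
import numpy as np
vocab = np.random.randn(100000, 300).astype(np.float32)    # all word vectors (stand-in)
vocab /= np.linalg.norm(vocab, axis=1, keepdims=True)       # normalise once, reuse forever
query = np.random.randn(300).astype(np.float32)
query /= np.linalg.norm(query)
cosine = vocab @ query                  # one BLAS call instead of a Python loop
best = np.argsort(-cosine)[:10]         # indices of the 10 most similar words
print(best)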
49,887 | Does it make sense to multiply two embedding vectors? | Not only does it make sense, it is one of the key operations in one of the biggest breakthroughs in network design of recent years, the idea of "attention" as used by, e.g. Google Translate, ChatGPT and all other GPT-based applications, Stable Diffusion, and many other recent machine learning systems.
Attention is essentially a database lookup over the values that are currently being examined by a network. For instance, in the transformer architecture, the inputs to the attention layers are usually keys, queries and values, all of which are word embeddings for words in the current context of the input or that have recently been generated in the output. The attention layer then calculates dot products between each query and the keys (i.e., component-wise multiplications followed by summing to a single scalar), passes them through a softmax so that they are non-negative and sum to 1, and then uses those as weights to add up all the values. This produces an output embedding that is most similar to the values associated with the keys that are most similar to the queries, but contains a little of all of the values mixed in.
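A minimal sketch of that computation, i.e. scaled dot-product attention written out in NumPy (shapes are illustrative only):
import numpy as np
def scaled_dot_product_attention(Q, K, V):
    # Q: (n_queries, d), K: (n_keys, d), V: (n_keys, d_v)
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                        # dot product of each query with each key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)       # softmax: weights sum to 1 per query
    return weights @ V                                   # weighted mix of the values
Q, K, V = np.random.randn(4, 64), np.random.randn(10, 64), np.random.randn(10, 64)
print(scaled_dot_product_attention(Q, K, V).shape)       # (4, 64)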
This has turned out to be extremely useful, allowing networks to be built that work on much larger contexts than would otherwise be possible because they are able to select only the parts of the context that are currently relevant for the output they are building. This means that tasks that previously required recurrent networks to accumulate information over a large context can now be addressed by feed-forward networks that are much simpler to train.
For more information, the paper Attention Is All You Need (Vaswani et al) is considered the best starting point. | Does it make sense to multiply two embedding vectors? | Not only does it make sense, it is one of the key operations in one of the biggest breakthroughs in network design of recent years, the idea of "attention" as used by, e.g. Google Translate, ChatGPT a | Does it make sense to multiply two embedding vectors?
Not only does it make sense, it is one of the key operations in one of the biggest breakthroughs in network design of recent years, the idea of "attention" as used by, e.g. Google Translate, ChatGPT and all other GPT-based applications, Stable Diffusion, and many other recent machine learning systems.
Attention is essentially a database lookup over the values that are currently being examined by a network. For instance, in the transformer architecture, the inputs to the attention layers are usually keys, queries and values, all of which are word embeddings for words in the current context of the input or that have recently been generated in the output. The attention layer then calculates dot products between each query and the keys (i.e., component-wise multiplications followed by summing to a single scalar), passes them through a softmax so that they are non-negative and sum to 1, and then uses those as weights to add up all the values. This produces an output embedding that is most similar to the values associated with the keys that are most similar to the queries, but contains a little of all of the values mixed in.
This has turned out to be extremely useful, allowing networks to be built that work on much larger contexts than would otherwise be possible because they are able to select only the parts of the context that are currently relevant for the output they are building. This means that tasks that previously required recurrent networks to accumulate information over a large context can now be addressed by feed-forward networks that are much simpler to train.
For more information, the paper Attention Is All You Need (Vaswani et al) is considered the best starting point. | Does it make sense to multiply two embedding vectors?
Not only does it make sense, it is one of the key operations in one of the biggest breakthroughs in network design of recent years, the idea of "attention" as used by, e.g. Google Translate, ChatGPT a |
49,888 | How is the tail of a distribution defined (about heavy-tailed distributions)? | We distinguish which distributions are heavy tailed by first limiting our discussion to those tails that are long, that is, tails whose density stays strictly positive arbitrarily far out: for every $M$, no matter how large, there is an $\epsilon>0$ with $f(x)>\epsilon>0$ for some $x>M$ (for right tails), or for some $x<-M$ (for left tails). In other words, $f(x)$ is non-zero no matter how large $|x|$ is. A random-variable rather than density-function definition of long tailed would be equivalent.
Then (using the right tail) $1-F(x)\rightarrow 0$ as $x\rightarrow \infty$; that is, the long heavy-tailed survival function, i.e., $1-F(x)$, a.k.a. $1-\text{CDF}$, can then be used to construct the ratio of two candidate survival functions, a ratio which will go to zero as $x\rightarrow \infty$ if the lighter tail is in the numerator. In practice, it is often easier to compare the limiting logarithm of the ratio of survival functions, but this is actually not different, if properly interpreted. For long left tails, we would compare the limiting (logarithm of the) ratio of the CDF's themselves as $x\rightarrow -\infty$, rather than the survival functions.
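As a quick illustration (a SciPy sketch; the lognormal and exponential below are arbitrary examples), the log of the ratio of survival functions diverges, so the ratio itself goes to zero and the lognormal right tail is the heavier one:
import numpy as np
from scipy import stats
x = np.array([5.0, 10.0, 20.0, 50.0, 100.0])
log_sf_expon = stats.expon.logsf(x)             # lighter tail in the numerator
log_sf_lognorm = stats.lognorm.logsf(x, s=1.0)  # candidate heavy tail
# log of the ratio (1-F_expon)/(1-F_lognorm): goes to -infinity,
# i.e. the ratio itself goes to 0, so the lognormal tail is heavier.
print(log_sf_expon - log_sf_lognorm)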
Why use the CDF or 1-CDF for this? Why not use the (e.g., logarithm) of the ratio of the pdf's? In many cases we could use the pdf's, however, for actual random variables (observations), and for some pdf's with nasty properties like non-smooth derivatives, this would be less revealing than comparison of the limiting areas under the pdf's.
What is the big deal with tail heaviness of exponential functions? Exponential functions have the same rate everywhere so that they are memoryless. Thus, exponential distributions form a natural cut point for measuring heaviness of tails. | How is the tail of a distribution defined (about heavy-tailed distributions)? | We distinguish what distributions are heavy tailed by first limiting our discussion to those tails that are long, that is, there is always an $\epsilon>0$, no matter how small, for which $f(x)>\epsilo | How is the tail of a distribution defined (about heavy-tailed distributions)?
We distinguish which distributions are heavy tailed by first limiting our discussion to those tails that are long, that is, tails whose density stays strictly positive arbitrarily far out: for every $M$, no matter how large, there is an $\epsilon>0$ with $f(x)>\epsilon>0$ for some $x>M$ (for right tails), or for some $x<-M$ (for left tails). In other words, $f(x)$ is non-zero no matter how large $|x|$ is. A random-variable rather than density-function definition of long tailed would be equivalent.
Then (using the right tail) $1-F(x)\rightarrow 0$ as $x\rightarrow \infty$; that is, the long heavy-tailed survival function, i.e., $1-F(x)$, a.k.a. $1-\text{CDF}$, can then be used to construct the ratio of two candidate survival functions, a ratio which will go to zero as $x\rightarrow \infty$ if the lighter tail is in the numerator. In practice, it is often easier to compare the limiting logarithm of the ratio of survival functions, but this is actually not different, if properly interpreted. For long left tails, we would compare the limiting (logarithm of the) ratio of the CDF's themselves as $x\rightarrow -\infty$, rather than the survival functions.
Why use the CDF or 1-CDF for this? Why not use the (e.g., logarithm) of the ratio of the pdf's? In many cases we could use the pdf's, however, for actual random variables (observations), and for some pdf's with nasty properties like non-smooth derivatives, this would be less revealing than comparison of the limiting areas under the pdf's.
What is the big deal with tail heaviness of exponential functions? Exponential functions have the same rate everywhere so that they are memoryless. Thus, exponential distributions form a natural cut point for measuring heaviness of tails. | How is the tail of a distribution defined (about heavy-tailed distributions)?
We distinguish what distributions are heavy tailed by first limiting our discussion to those tails that are long, that is, there is always an $\epsilon>0$, no matter how small, for which $f(x)>\epsilo |
49,889 | With two restrictions on the parameters, how does an AR(p) process change as we increase p | Here is a partial answer for the case AR(1) vs. AR(2).
In the AR(1) case, the variance is (setting $\sigma^2=1$)
$$
\gamma_0=\frac{1}{1-\phi^2}
$$
In the AR(2) case, one may show that
$$
\gamma_0=\frac {(1-\phi_2)} {1-\phi_2-\phi_1^2 -\phi_1^2\phi_2 - \phi_2^2(1-\phi_2)}
$$
This expression maybe helps motivate why I cannot come up with a general answer along this route, as the expressions for the variance will only become more complicated in the general AR(p) case.
Your restrictions imply that $0<\phi<1$ (restricting to the stationary case), $\phi_1,\phi_2>0$, $\phi_1+\phi_2=\phi$ and thus also $\phi>\phi_1$.
Mapping this to R for numerical evaluation gives me
phiAR1 <- seq(0.01,0.99,by=0.01)
phi_1 <- seq(0.01,0.99,by=0.01)
gamma0AR1 <- function(phiAR1) 1/(1-phiAR1^2)
gamma0AR2 <- function(phiAR1,phi_1){
phi_2 <- phiAR1 - phi_1
return(ifelse(phi_2>0,(1-phi_2)/(1-phi_2-phi_1^2 -phi_1^2*phi_2 - phi_2^2*(1-phi_2)),NA))
}
Vardiff <- function(phiAR1,phi_1) gamma0AR1(phiAR1)-gamma0AR2(phiAR1,phi_1)
Vardiffs <- outer(phiAR1,phi_1,Vardiff)
persp(phiAR1,phi_1,Vardiffs)
min(Vardiffs[!is.na(Vardiffs)])
This seems to be a nonnegative function:
I also tried to let MAPLE verify this analytically via
restart;
assume(0<phi[AR1],phi[AR1]<1,phi[1]>0,phi[AR1]>phi[1]);
is(1/(1-phi[AR1]^2) - (1-phi[2])/(1-phi[2]-(phi[1])^2 -(phi[1])^2*phi[2] - (phi[2])^2*(1-phi[2]))>0);
but that does not seem to be a correct approach. | With two restrictions on the parameters, how does an AR(p) process change as we increase p | Here is a partial answer for the case AR(1) vs. AR(2).
In the AR(1) case, the variance is (setting $\sigma^2=1$)
$$
\gamma_0=\frac{1}{1-\phi^2}
$$
In the AR(2) case, one may show that
$$
\gamma_0=\fr | With two restrictions on the parameters, how does an AR(p) process change as we increase p
Here is a partial answer for the case AR(1) vs. AR(2).
In the AR(1) case, the variance is (setting $\sigma^2=1$)
$$
\gamma_0=\frac{1}{1-\phi^2}
$$
In the AR(2) case, one may show that
$$
\gamma_0=\frac {(1-\phi_2)} {1-\phi_2-\phi_1^2 -\phi_1^2\phi_2 - \phi_2^2(1-\phi_2)}
$$
This expression maybe helps motivate why I cannot come up with a general answer along this route, as the expressions for the variance will only become more complicated in the general AR(p) case.
Your restrictions imply that $0<\phi<1$ (restricting to the stationary case), $\phi_1,\phi_2>0$, $\phi_1+\phi_2=\phi$ and thus also $\phi>\phi_1$.
Mapping this to R for numerical evaluation gives me
phiAR1 <- seq(0.01,0.99,by=0.01)
phi_1 <- seq(0.01,0.99,by=0.01)
gamma0AR1 <- function(phiAR1) 1/(1-phiAR1^2)
gamma0AR2 <- function(phiAR1,phi_1){
phi_2 <- phiAR1 - phi_1
return(ifelse(phi_2>0,(1-phi_2)/(1-phi_2-phi_1^2 -phi_1^2*phi_2 - phi_2^2*(1-phi_2)),NA))
}
Vardiff <- function(phiAR1,phi_1) gamma0AR1(phiAR1)-gamma0AR2(phiAR1,phi_1)
Vardiffs <- outer(phiAR1,phi_1,Vardiff)
persp(phiAR1,phi_1,Vardiffs)
min(Vardiffs[!is.na(Vardiffs)])
This seems to be a nonnegative function:
I also tried to let MAPLE verify this analytically via
restart;
assume(0<phi[AR1],phi[AR1]<1,phi[1]>0,phi[AR1]>phi[1]);
is(1/(1-phi[AR1]^2) - (1-phi[2])/(1-phi[2]-(phi[1])^2 -(phi[1])^2*phi[2] - (phi[2])^2*(1-phi[2]))>0);
but that does not seem to be a correct approach. | With two restrictions on the parameters, how does an AR(p) process change as we increase p
Here is a partial answer for the case AR(1) vs. AR(2).
In the AR(1) case, the variance is (setting $\sigma^2=1$)
$$
\gamma_0=\frac{1}{1-\phi^2}
$$
In the AR(2) case, one may show that
$$
\gamma_0=\fr |
49,890 | How to prove that Bernoulli random variable's sum is binomial distribution? | If you start from $$X_1,\ldots,X_n\stackrel{\text{i.i.d.}}{\sim}\mathcal{B}(p)$$ and define $$Y=X_1+\cdots+X_n$$ you can compute directly$$\mathbb{P}(Y=y)={n \choose y} p^y (1-p)^{n-y}\qquad y=0,1,\ldots,n$$ by a combinatoric argument. | How to prove that Bernoulli random variable's sum is binomial distribution? | If you start from $$X_1,\ldots,X_n\stackrel{\text{i.i.d.}}{\sim}\mathcal{B}(p)$$ and define $$Y=X_1+\cdots+X_n$$ you can compute directly$$\mathbb{P}(Y=y)={n \choose y} p^y (1-p)^{n-y}\qquad y=0,1,\ld | How to prove that Bernoulli random variable's sum is binomial distribution?
If you start from $$X_1,\ldots,X_n\stackrel{\text{i.i.d.}}{\sim}\mathcal{B}(p)$$ and define $$Y=X_1+\cdots+X_n$$ you can compute directly $$\mathbb{P}(Y=y)={n \choose y} p^y (1-p)^{n-y}\qquad y=0,1,\ldots,n$$ by a combinatoric argument. | How to prove that Bernoulli random variable's sum is binomial distribution?
If you start from $$X_1,\ldots,X_n\stackrel{\text{i.i.d.}}{\sim}\mathcal{B}(p)$$ and define $$Y=X_1+\cdots+X_n$$ you can compute directly$$\mathbb{P}(Y=y)={n \choose y} p^y (1-p)^{n-y}\qquad y=0,1,\ld |
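For completeness, one way to spell out that combinatoric argument (a sketch): by independence, any particular sequence $(x_1,\ldots,x_n)\in\{0,1\}^n$ with exactly $y$ ones has probability $p^y(1-p)^{n-y}$, and there are ${n \choose y}$ such sequences, so
$$
\mathbb{P}(Y=y)=\sum_{\substack{(x_1,\ldots,x_n)\in\{0,1\}^n\\ x_1+\cdots+x_n=y}}\mathbb{P}(X_1=x_1,\ldots,X_n=x_n)={n \choose y} p^y (1-p)^{n-y}.
$$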
49,891 | How to interpret GLM coefficients? | The original author of the analysis was kind enough to respond and clear this up for me. The calculation converts odds ratios (which are highly unintuitive) to relative risks (which are easier to understand). More details are provided here.
The GLM coefficients only show the multiplicative change in odds ratio. so if p1 is the risk of getting a high score for black defendants and p0 is the risk of getting a high score for white defendants, then exp(0.47721) shows (p1/(1-p1))/(p0/(1-p0)). Unfortunately, this is not particularly easy to intuit. Relative risk (p1/p0) is a much simpler concept to understand and odds ratio can be converted to relative risk using the formula: Relative risk=odds ratio/(1−p0+(p0×odds ratio)). The calculation in the question accomplishes that. | How to interpret GLM coefficients? | The original author of the analysis was kind enough to respond and clear this up for me. The calculation converts odds ratio (which are highly unintuitive) to relative risk (which is easier to underst | How to interpret GLM coefficients?
The original author of the analysis was kind enough to respond and clear this up for me. The calculation converts odds ratios (which are highly unintuitive) to relative risks (which are easier to understand). More details are provided here.
The GLM coefficients only show the multiplicative change in the odds ratio. So if p1 is the risk of getting a high score for black defendants and p0 is the risk of getting a high score for white defendants, then exp(0.47721) shows (p1/(1-p1))/(p0/(1-p0)). Unfortunately, this is not particularly easy to intuit. Relative risk (p1/p0) is a much simpler concept to understand, and an odds ratio can be converted to relative risk using the formula: Relative risk = odds ratio/(1−p0+(p0×odds ratio)). The calculation in the question accomplishes that. | How to interpret GLM coefficients?
The original author of the analysis was kind enough to respond and clear this up for me. The calculation converts odds ratio (which are highly unintuitive) to relative risk (which is easier to underst |
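A small sketch of that conversion in Python (the coefficient 0.47721 is the one quoted in the answer above; the baseline risk p0 is a made-up illustration value):
import math
def odds_ratio_to_relative_risk(odds_ratio, p0):
    # Relative risk = OR / (1 - p0 + p0 * OR), with p0 the baseline (reference-group) risk
    return odds_ratio / (1 - p0 + p0 * odds_ratio)
odds_ratio = math.exp(0.47721)     # logistic coefficient turned into an odds ratio
p0 = 0.35                          # hypothetical baseline risk of a high score
print(odds_ratio_to_relative_risk(odds_ratio, p0))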
49,892 | Are the No Free Lunch Theorem and Halting Problem connected? | No, they are not related. In NFL, a function can be considered as a look-up table (that is, a list of input-output pairs). With NFL, we are not concerned with how a function is implemented; with computability theory, we are concerned with how a function is actually computed.
try Woodward J. Computable and Incomputable Search Algorithms and Functions. IEEE International Conference on Intelligent Computing and Intelligent Systems (IEEE ICIS 2009) November 20-22,2009 Shanghai, China. pdf. | Are the No Free Lunch Theorem and Halting Problem connected? | No they are not related. In NFL, a function can be considered as a look-up-table (that is, a list of input-output pairs.) We are not concerned with how a function is implemented with NFL. With computa | Are the No Free Lunch Theorem and Halting Problem connected?
No, they are not related. In NFL, a function can be considered as a look-up table (that is, a list of input-output pairs). With NFL, we are not concerned with how a function is implemented; with computability theory, we are concerned with how a function is actually computed.
try Woodward J. Computable and Incomputable Search Algorithms and Functions. IEEE International Conference on Intelligent Computing and Intelligent Systems (IEEE ICIS 2009) November 20-22,2009 Shanghai, China. pdf. | Are the No Free Lunch Theorem and Halting Problem connected?
No they are not related. In NFL, a function can be considered as a look-up-table (that is, a list of input-output pairs.) We are not concerned with how a function is implemented with NFL. With computa |
49,893 | What does "orthogonalize" mean? | I believe the quote refers to this algorithm, where the relevant line reads:
$x_j^m=x_j^{m-1}-\frac{\langle z_m,x_j^{m-1}\rangle}{\langle z_m,z_m\rangle}z_m$
Here the authors are using the angle-brackets to denote an inner product, which is essentially the standard vector dot product from Physics 101.
The second term is the orthogonal projection of $x_j^{m-1}$ onto $z_m$. By subtracting this from $x_j^{m-1}$, the result $x_j^m$ is made orthogonal to $z_m$. | What does "orthogonalize" mean? | I believe the quote refers to this algorithm, where the relevant line reads:
$x_j^m=x_j^{m-1}-\frac{\langle z_m,x_j^{m-1}\rangle}{\langle z_m,z_m\rangle}z_m$
Here the authors are using the angle-brack | What does "orthogonalize" mean?
I believe the quote refers to this algorithm, where the relevant line reads:
$x_j^m=x_j^{m-1}-\frac{\langle z_m,x_j^{m-1}\rangle}{\langle z_m,z_m\rangle}z_m$
Here the authors are using the angle-brackets to denote an inner product, which is essentially the standard vector dot product from Physics 101.
The second term is the orthogonal projection of $x_j^{m-1}$ onto $z_m$. By subtracting this from $x_j^{m-1}$, the result $x_j^m$ is made orthogonal to $z_m$. | What does "orthogonalize" mean?
I believe the quote refers to this algorithm, where the relevant line reads:
$x_j^m=x_j^{m-1}-\frac{\langle z_m,x_j^{m-1}\rangle}{\langle z_m,z_m\rangle}z_m$
Here the authors are using the angle-brack |
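A minimal NumPy sketch of that orthogonalization step from the answer above (random vectors as stand-ins):
import numpy as np
z = np.random.randn(100)        # the variable already in the model
x = np.random.randn(100)        # the variable being orthogonalized
# Subtract the projection of x onto z; the residual is orthogonal to z.
x_orth = x - (np.dot(z, x) / np.dot(z, z)) * z
print(np.dot(z, x_orth))        # approximately 0 (up to floating-point error)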
49,894 | Different definitions of cross entropy loss function not equivalent? | Your extension of the two-class definition requires a bit of care. If there are more than two classes, we have to stipulate that in this case, $\sum a_i=1$ and that $a_i\ge 0\forall i,$ i.e. the predicted class memberships are all non-negative and sum to 1. Then the one-hot encoding provides that precisely one of the $y_i$ equals $1$ and the rest are $0$.
Then we can write the multi-class cross-entropy as
$$C=-\frac{1}{n}\sum_x\sum_i y_i \ln (a_i)$$
Note that the one-hot encoding scheme makes the product 0 for all but one of the $a_i$. Contrast this with the expression you have, in which each predicted value appears twice because it is evaluated once for each of the two $y$ vectors, and each $y_i$ has precisely one nonzero value under one-hot encoding. The result is that your expression produces a result which is twice the expression here, i.e. $2C$ (because of the restrictions on $a_i$). This can be demonstrated directly using any configuration of $y_i, a_i$ which satisfies our requirements.
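A tiny numerical illustration of that factor of two in the two-class, one-hot case (NumPy; the predicted probabilities are arbitrary):
import numpy as np
y = np.array([1.0, 0.0])          # one-hot target
a = np.array([0.7, 0.3])          # predicted class memberships, summing to 1
C = -np.sum(y * np.log(a))                                      # multi-class cross-entropy
C_extended = -np.sum(y * np.log(a) + (1 - y) * np.log(1 - a))   # two-class formula applied per output
print(C, C_extended)              # C_extended equals 2 * C here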
There is basically no consequence, since optima will occur for the same parameter values, but I recommend using the definition with is consistent with standard practice. | Different definitions of cross entropy loss function not equivalent? | Your extension of the two-class definition requires a bit of care. If there are more than two classes, we have to stipulate that in this case, $\sum a_i=1$ and that $a_i\ge 0\forall i,$ i.e. the predi | Different definitions of cross entropy loss function not equivalent?
Your extension of the two-class definition requires a bit of care. If there are more than two classes, we have to stipulate that in this case, $\sum a_i=1$ and that $a_i\ge 0\forall i,$ i.e. the predicted class memberships are all non-negative and sum to 1. Then the one-hot encoding provides that precisely one of the $y_i$ equals $1$ and the rest are $0$.
Then we can write the multi-class cross-entropy as
$$C=-\frac{1}{n}\sum_x\sum_i y_i \ln (a_i)$$
Note that the one-hot encoding scheme makes the product 0 for all but one of the $a_i$. Contrast this with the expression you have, in which each predicted value appears twice because it is evaluated once for each of the two $y$ vectors, and each $y_i$ has precisely one nonzero value under one-hot encoding. The result is that your expression produces a result which is twice the expression here, i.e. $2C$ (because of the restrictions on $a_i$). This can be demonstrated directly using any configuration of $y_i, a_i$ which satisfies our requirements.
There is basically no consequence, since optima will occur for the same parameter values, but I recommend using the definition which is consistent with standard practice. | Different definitions of cross entropy loss function not equivalent?
Your extension of the two-class definition requires a bit of care. If there are more than two classes, we have to stipulate that in this case, $\sum a_i=1$ and that $a_i\ge 0\forall i,$ i.e. the predi |
49,895 | How does one Initialize Neural Networks as suggested by Saxe et al using Orthogonal matrices and a gain factor? | Here is what Lasagne does, it should answer your two questions:
class Orthogonal(Initializer):
    """Intialize weights as Orthogonal matrix.
    Orthogonal matrix initialization [1]_. For n-dimensional shapes where
    n > 2, the n-1 trailing axes are flattened. For convolutional layers, this
    corresponds to the fan-in, so this makes the initialization usable for
    both dense and convolutional layers.
    Parameters
    ----------
    gain : float or 'relu'
        Scaling factor for the weights. Set this to ``1.0`` for linear and
        sigmoid units, to 'relu' or ``sqrt(2)`` for rectified linear units, and
        to ``sqrt(2/(1+alpha**2))`` for leaky rectified linear units with
        leakiness ``alpha``. Other transfer functions may need different
        factors.
    References
    ----------
    .. [1] Saxe, Andrew M., James L. McClelland, and Surya Ganguli.
           "Exact solutions to the nonlinear dynamics of learning in deep
           linear neural networks." arXiv preprint arXiv:1312.6120 (2013).
    """

    def __init__(self, gain=1.0):
        if gain == 'relu':
            gain = np.sqrt(2)
        self.gain = gain

    def sample(self, shape):
        if len(shape) < 2:
            raise RuntimeError("Only shapes of length 2 or more are "
                               "supported.")
        flat_shape = (shape[0], np.prod(shape[1:]))
        a = get_rng().normal(0.0, 1.0, flat_shape)
        u, _, v = np.linalg.svd(a, full_matrices=False)
        # pick the one with the correct shape
        q = u if u.shape == flat_shape else v
        q = q.reshape(shape)
        return floatX(self.gain * q)
This RNN tutorial does the same thing (minus the gain):
# orthogonal initialization for weights
# see Saxe et al. ICLR'14
def ortho_weight(ndim):
    W = numpy.random.randn(ndim, ndim)
    u, s, v = numpy.linalg.svd(W)
    return u.astype('float32')
So I assume it's correct (I hope so since this is the code I use).
I probably should have mentioned, but I am planning to use it with python/tensorflow if possible.
In TensorFlow:
def orthogonal_initializer(scale = 1.1):
    ''' From Lasagne and Keras. Reference: Saxe et al., http://arxiv.org/abs/1312.6120
    '''
    print('Warning -- You have opted to use the orthogonal_initializer function')
    def _initializer(shape, dtype=tf.float32):
        flat_shape = (shape[0], np.prod(shape[1:]))
        a = np.random.normal(0.0, 1.0, flat_shape)
        u, _, v = np.linalg.svd(a, full_matrices=False)
        # pick the one with the correct shape
        q = u if u.shape == flat_shape else v
        q = q.reshape(shape) #this needs to be corrected to float32
        print('you have initialized one orthogonal matrix.')
        return tf.constant(scale * q[:shape[0], :shape[1]], dtype=tf.float32)
return _initializer | How does one Initialize Neural Networks as suggested by Saxe et al using Orthogonal matrices and a g | Here is what Lasagne does, it should answer your two questions:
class Orthogonal(Initializer):
"""Intialize weights as Orthogonal matrix.
Orthogonal matrix initialization [1]_. For n-dimension | How does one Initialize Neural Networks as suggested by Saxe et al using Orthogonal matrices and a gain factor?
Here is what Lasagne does, it should answer your two questions:
class Orthogonal(Initializer):
    """Intialize weights as Orthogonal matrix.
    Orthogonal matrix initialization [1]_. For n-dimensional shapes where
    n > 2, the n-1 trailing axes are flattened. For convolutional layers, this
    corresponds to the fan-in, so this makes the initialization usable for
    both dense and convolutional layers.
    Parameters
    ----------
    gain : float or 'relu'
        Scaling factor for the weights. Set this to ``1.0`` for linear and
        sigmoid units, to 'relu' or ``sqrt(2)`` for rectified linear units, and
        to ``sqrt(2/(1+alpha**2))`` for leaky rectified linear units with
        leakiness ``alpha``. Other transfer functions may need different
        factors.
    References
    ----------
    .. [1] Saxe, Andrew M., James L. McClelland, and Surya Ganguli.
           "Exact solutions to the nonlinear dynamics of learning in deep
           linear neural networks." arXiv preprint arXiv:1312.6120 (2013).
    """

    def __init__(self, gain=1.0):
        if gain == 'relu':
            gain = np.sqrt(2)
        self.gain = gain

    def sample(self, shape):
        if len(shape) < 2:
            raise RuntimeError("Only shapes of length 2 or more are "
                               "supported.")
        flat_shape = (shape[0], np.prod(shape[1:]))
        a = get_rng().normal(0.0, 1.0, flat_shape)
        u, _, v = np.linalg.svd(a, full_matrices=False)
        # pick the one with the correct shape
        q = u if u.shape == flat_shape else v
        q = q.reshape(shape)
        return floatX(self.gain * q)
This RNN tutorial does the same thing (minus the gain):
# orthogonal initialization for weights
# see Saxe et al. ICLR'14
def ortho_weight(ndim):
    W = numpy.random.randn(ndim, ndim)
    u, s, v = numpy.linalg.svd(W)
    return u.astype('float32')
So I assume it's correct (I hope so since this is the code I use).
I probably should have mentioned, but I am planning to use it with python/tensorflow if possible.
In TensorFlow:
def orthogonal_initializer(scale = 1.1):
    ''' From Lasagne and Keras. Reference: Saxe et al., http://arxiv.org/abs/1312.6120
    '''
    print('Warning -- You have opted to use the orthogonal_initializer function')
    def _initializer(shape, dtype=tf.float32):
        flat_shape = (shape[0], np.prod(shape[1:]))
        a = np.random.normal(0.0, 1.0, flat_shape)
        u, _, v = np.linalg.svd(a, full_matrices=False)
        # pick the one with the correct shape
        q = u if u.shape == flat_shape else v
        q = q.reshape(shape) #this needs to be corrected to float32
        print('you have initialized one orthogonal matrix.')
        return tf.constant(scale * q[:shape[0], :shape[1]], dtype=tf.float32)
return _initializer | How does one Initialize Neural Networks as suggested by Saxe et al using Orthogonal matrices and a g
Here is what Lasagne does, it should answer your two questions:
class Orthogonal(Initializer):
"""Intialize weights as Orthogonal matrix.
Orthogonal matrix initialization [1]_. For n-dimension |
49,896 | Sampling variance for meta-analysis one-sample data | This is an interesting question because (so far as I know) there is no widely used formula for computing the variance in this situation. Some time ago, I did some simulations to examine the performance of different formulas to estimate the sampling variance of Cohen's d in case of a one-sample t-test.
I was aware of three different formulas:
The formula used in the Comprehensive Meta-analysis Software:
(1/sqrt(ni))*sqrt(1+di^2/2)^2,
with ni being the sample size per study and di the observed Cohen's d.
Other people use the standard formula for the dependent samples t-test (e.g., Borenstein, 2009) with correlation between pre- and posttest (r) equal to 0.5:
(1/ni)+di^2/(2*ni)
Another formula I have seen is one that was used in a paper by Koenig et al. (2011). This formula is obtained by personal communication with B. Becker.
(1/ni)+di^2/(2*ni*(ni-1))
I did a very small simulation study to examine the performance of these three formulas with sample sizes ranging from 10 to 500 and effect sizes in the population ranging from 0 to 0.8. The differences between the formulas were most observable for a population effect size of 0.8.
Using the formula of the dependent samples t-test with r=0.5 yielded the least biased estimates. However, there may be other formulas with better properties. I am curious what other people think about this.
Code:
rm(list = ls()) # Clean workspace
k <- 10000 # Number of studies
thetais <- c(0, 0.2, 0.5, 0.8) # Effect in population
nis <- c(10,15,20,30,50,75,100,250,500) # Sample size in primary study
sigma <- 1 # Standard deviation in population
### Empty objects for storing results
vi.ac <- vi.beck <- vi.comp <- vi.dep <- matrix(NA, nrow = length(nis),
ncol = length(thetais),
dimnames = list(nis, thetais))
############################################
for(thetai in thetais) {
for(ni in nis) {
### Actual variance Cohen's d
sdi <- sqrt(sigma/(ni-1) * rchisq(k, df = ni-1))
mi <- rnorm(k, mean = thetai, sd = sigma/sqrt(ni))
di <- mi/sdi
vi.ac[as.character(ni),as.character(thetai)] <- var(di)
############################################
### Suggestion by Becker in Koenig et al.
vi <- (1/ni)+di^2/(2*ni*(ni-1))
vi.beck[as.character(ni),as.character(thetai)] <- mean(vi)
############################################
### Comprehensive meta-analysis software
vi <- (1/sqrt(ni))*sqrt(1+di^2/2)^2
vi.comp[as.character(ni),as.character(thetai)] <- mean(vi)
############################################
### Dependent sample t-test with r=0.5
vi <- (1/ni)+di^2/(2*ni)
vi.dep[as.character(ni),as.character(thetai)] <- mean(vi)
}
}
plot(x = nis, y = vi.ac[ ,1], type = "l", main = "theta = 0", ylab = "Variance")
lines(x = nis, y = vi.beck[ ,1], type = "l", col = "red")
lines(x = nis, y = vi.comp[ ,1], type = "l", col = "blue")
lines(x = nis, y = vi.dep[ ,1], type = "l", col = "green")
legend("topright", legend = c("Actual variance", "Becker", "CMA", "Dep. samples"),
col = c("black", "red", "blue", "green"), lty = c(1,1,1,1))
plot(x = nis, y = vi.ac[ ,2], type = "l", main = "theta = 0.2")
lines(x = nis, y = vi.beck[ ,2], type = "l", col = "red")
lines(x = nis, y = vi.comp[ ,2], type = "l", col = "blue")
lines(x = nis, y = vi.dep[ ,2], type = "l", col = "green")
legend("topright", legend = c("Actual variance", "Becker", "CMA", "Dep. samples"),
col = c("black", "red", "blue", "green"), lty = c(1,1,1,1))
plot(x = nis, y = vi.ac[ ,3], type = "l", main = "theta = 0.5")
lines(x = nis, y = vi.beck[ ,3], type = "l", col = "red")
lines(x = nis, y = vi.comp[ ,3], type = "l", col = "blue")
lines(x = nis, y = vi.dep[ ,3], type = "l", col = "green")
legend("topright", legend = c("Actual variance", "Becker", "CMA", "Dep. samples"),
col = c("black", "red", "blue", "green"), lty = c(1,1,1,1))
plot(x = nis, y = vi.ac[ ,4], type = "l", main = "theta = 0.8")
lines(x = nis, y = vi.beck[ ,4], type = "l", col = "red")
lines(x = nis, y = vi.comp[ ,4], type = "l", col = "blue")
lines(x = nis, y = vi.dep[ ,4], type = "l", col = "green")
legend("topright", legend = c("Actual variance", "Becker", "CMA", "Dep. samples"),
col = c("black", "red", "blue", "green"), lty = c(1,1,1,1))
data.frame(vi.ac[,1], vi.beck[,1], vi.comp[,1], vi.dep[,1])
References:
Borenstein, M. (2009). Effect sizes for continuous data. In H. Cooper, L. V. Hedges & J. C. Valentine (Eds.), The Handbook of Research Synthesis and Meta-Analysis (pp. 221-236). New York: Russell Sage Foundation.
Koenig, A. M., Eagly, A. H., Mitchell, A. A., & Ristikari, T. (2011). Are leader stereotypes masculine? A meta-analysis of three research paradigms. Psychological Bulletin, 137, 4, 616-42. | Sampling variance for meta-analysis one-sample data | This is an interesting question because (so far as I know) there is no widely used formula for computing the variance in this situation. Some time ago, I did some simulations to examine the performanc | Sampling variance for meta-analysis one-sample data
This is an interesting question because (so far as I know) there is no widely used formula for computing the variance in this situation. Some time ago, I did some simulations to examine the performance of different formulas to estimate the sampling variance of Cohen's d in case of a one-sample t-test.
I was aware of three different formulas:
The formula used in the Comprehensive Meta-analysis Software:
(1/sqrt(ni))*sqrt(1+di^2/2)^2,
with ni being the sample size per study and di the observed Cohen's d.
Other people use the standard formula for the dependent samples t-test (e.g., Borenstein, 2009) with correlation between pre- and posttest (r) equal to 0.5:
(1/ni)+di^2/(2*ni)
Another formula I have seen is one that was used in a paper by Koenig et al. (2011). This formula is obtained by personal communication with B. Becker.
(1/ni)+di^2/(2*ni*(ni-1))
I did a very small simulation study to examine the performance of these three formulas with sample sizes ranging from 10 to 500 and effect sizes in the population ranging from 0 to 0.8. The differences between the formulas were most observable for a population effect size of 0.8.
Using the formula of the dependent samples t-test with r=0.5 yielded the least biased estimates. However, there may be other formulas with better properties. I am curious what other people think about this.
Code:
rm(list = ls()) # Clean workspace
k <- 10000 # Number of studies
thetais <- c(0, 0.2, 0.5, 0.8) # Effect in population
nis <- c(10,15,20,30,50,75,100,250,500) # Sample size in primary study
sigma <- 1 # Standard deviation in population
### Empty objects for storing results
vi.ac <- vi.beck <- vi.comp <- vi.dep <- matrix(NA, nrow = length(nis),
ncol = length(thetais),
dimnames = list(nis, thetais))
############################################
for(thetai in thetais) {
for(ni in nis) {
### Actual variance Cohen's d
sdi <- sqrt(sigma/(ni-1) * rchisq(k, df = ni-1))
mi <- rnorm(k, mean = thetai, sd = sigma/sqrt(ni))
di <- mi/sdi
vi.ac[as.character(ni),as.character(thetai)] <- var(di)
############################################
### Suggestion by Becker in Koenig et al.
vi <- (1/ni)+di^2/(2*ni*(ni-1))
vi.beck[as.character(ni),as.character(thetai)] <- mean(vi)
############################################
### Comprehensive meta-analysis software
vi <- (1/sqrt(ni))*sqrt(1+di^2/2)^2
vi.comp[as.character(ni),as.character(thetai)] <- mean(vi)
############################################
### Dependent sample t-test with r=0.5
vi <- (1/ni)+di^2/(2*ni)
vi.dep[as.character(ni),as.character(thetai)] <- mean(vi)
}
}
plot(x = nis, y = vi.ac[ ,1], type = "l", main = "theta = 0", ylab = "Variance")
lines(x = nis, y = vi.beck[ ,1], type = "l", col = "red")
lines(x = nis, y = vi.comp[ ,1], type = "l", col = "blue")
lines(x = nis, y = vi.dep[ ,1], type = "l", col = "green")
legend("topright", legend = c("Actual variance", "Becker", "CMA", "Dep. samples"),
col = c("black", "red", "blue", "green"), lty = c(1,1,1,1))
plot(x = nis, y = vi.ac[ ,2], type = "l", main = "theta = 0.2")
lines(x = nis, y = vi.beck[ ,2], type = "l", col = "red")
lines(x = nis, y = vi.comp[ ,2], type = "l", col = "blue")
lines(x = nis, y = vi.dep[ ,2], type = "l", col = "green")
legend("topright", legend = c("Actual variance", "Becker", "CMA", "Dep. samples"),
col = c("black", "red", "blue", "green"), lty = c(1,1,1,1))
plot(x = nis, y = vi.ac[ ,3], type = "l", main = "theta = 0.5")
lines(x = nis, y = vi.beck[ ,3], type = "l", col = "red")
lines(x = nis, y = vi.comp[ ,3], type = "l", col = "blue")
lines(x = nis, y = vi.dep[ ,3], type = "l", col = "green")
legend("topright", legend = c("Actual variance", "Becker", "CMA", "Dep. samples"),
col = c("black", "red", "blue", "green"), lty = c(1,1,1,1))
plot(x = nis, y = vi.ac[ ,4], type = "l", main = "theta = 0.8")
lines(x = nis, y = vi.beck[ ,4], type = "l", col = "red")
lines(x = nis, y = vi.comp[ ,4], type = "l", col = "blue")
lines(x = nis, y = vi.dep[ ,4], type = "l", col = "green")
legend("topright", legend = c("Actual variance", "Becker", "CMA", "Dep. samples"),
col = c("black", "red", "blue", "green"), lty = c(1,1,1,1))
data.frame(vi.ac[,1], vi.beck[,1], vi.comp[,1], vi.dep[,1])
References:
Borenstein, M. (2009). Effect sizes for continuous data. In H. Cooper, L. V. Hedges & J. C. Valentine (Eds.), The Handbook of Research Synthesis and Meta-Analysis (pp. 221-236). New York: Russell Sage Foundation.
Koenig, A. M., Eagly, A. H., Mitchell, A. A., & Ristikari, T. (2011). Are leader stereotypes masculine? A meta-analysis of three research paradigms. Psychological Bulletin, 137, 4, 616-42. | Sampling variance for meta-analysis one-sample data
This is an interesting question because (so far as I know) there is no widely used formula for computing the variance in this situation. Some time ago, I did some simulations to examine the performanc |
49,897 | Choosing regressors for inclusion in regression with ARMA errors | The gold standard in time series model selection is to use a holdout sample. Hold out the last few months of data, fit the different models (with different combinations of regressors) to the data before that, forecast into your holdout sample and pick the model with the lowest forecast error - MAE or MSE.
That said, I would expect readership numbers of different Wikipedia articles to be correlated, especially if used as a proxy for "has a lot of time on his hands". So you might want to look at dimension reduction techniques, like principal components analysis (PCA) or similar, to reduce your regressors to only the first few principal components. Fewer orthogonal regressors will yield a more stable model and probably better forecasts. (The problem is that interpretability suffers.) | Choosing regressors for inclusion in regression with ARMA errors | The gold standard in time series model selection is to use a holdout sample. Hold out the last few months of data, fit the different models (with different combinations of regressors) to the data befo | Choosing regressors for inclusion in regression with ARMA errors
The gold standard in time series model selection is to use a holdout sample. Hold out the last few months of data, fit the different models (with different combinations of regressors) to the data before that, forecast into your holdout sample and pick the model with the lowest forecast error - MAE or MSE.
That said, I would expect readership numbers of different Wikipedia articles to be correlated, especially if used as a proxy for "has a lot of time on his hands". So you might want to look at dimension reduction techniques, like principal components analysis (PCA) or similar, to reduce your regressors to only the first few principal components. Fewer orthogonal regressors will yield a more stable model and probably better forecasts. (The problem is that interpretability suffers.) | Choosing regressors for inclusion in regression with ARMA errors
The gold standard in time series model selection is to use a holdout sample. Hold out the last few months of data, fit the different models (with different combinations of regressors) to the data befo |
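A bare-bones sketch of the holdout comparison described in the answer above (NumPy; the series and the two candidate regressor sets are simulated stand-ins, and ordinary least squares stands in for the regression-with-ARMA-errors fit):
import numpy as np
rng = np.random.default_rng(0)
n, h = 200, 12                                     # series length, holdout of the last 12 months
X = rng.normal(size=(n, 5))                        # candidate regressors (stand-ins)
y = X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n)   # target series (stand-in)
def holdout_mae(cols):
    Xtr, ytr, Xte, yte = X[:-h, cols], y[:-h], X[-h:, cols], y[-h:]
    beta, *_ = np.linalg.lstsq(Xtr, ytr, rcond=None)
    return np.mean(np.abs(yte - Xte @ beta))
print(holdout_mae([0, 1]))            # smaller set of regressors
print(holdout_mae([0, 1, 2, 3, 4]))   # larger set; pick whichever gives the lower holdout MAE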
49,898 | Linear mixed effects model and - multiplicity issue and adjusting for p-values | Since the reviewer only seems to be concerned about the two outcomes measured on the same subjects (and did not question the modeling procedure itself), I would simply use a sequential Bonferroni adjustment (a.k.a. Holm-Bonferroni method) to correct for it.
Sort your $p$-values in ascending order
Refer to them as $p_i$ (i.e. $p_1, p_2, p_3$, etc.)
Then you adjust your $\alpha$-level and compare the $p$-values against these new $\alpha$-levels, i.e. you test whether $p_i \le \alpha / (1 + k - i)$, where $k$ is the number of statistical tests conducted, i.e. the number of $p$-values calculated. You can stop when $p_i \gt \alpha / (1 + k - i)$. Those $p_i$ that fall below the sequentially adjusted $\alpha$-levels are now your significant tests, adjusted for multiplicity (after the Holm-Bonferroni method).
For example you conducted five tests ($\alpha = 0.05$) resulting in the following $p$-values:
$p_1 = 0.0024, p_2 = 0.0084, p_3 = 0.019, p_4 = 0.027, p_5 = 0.12$
The new $\alpha$-level you compare $p_1$ against is:
$0.05/(1+5-1) = 0.01$
Since $p_1 \le 0.01$ you can move on to $p_2$:
$0.05/(1+5-2) = 0.0125$
Since $p_2 \le 0.0125$ you can move on to $p_3$:
$0.05/(1+5-3) = 0.0167$
Since $p_3 \gt 0.0167$ you can stop.
In this case, from initially four significant $p$-values, you now only have two but those are adjusted for multiplicity (Note: Instead of adjusting the $\alpha$-levels, you can also adjust the $p$-values and compare against your chosen $\alpha$-level (e.g. $\alpha = 0.05$). Then all you need to do is $(1 + k - i)*p_i$ instead).
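The same procedure as a small Python function, checked against the example above:
def holm(p_values, alpha=0.05):
    k = len(p_values)
    order = sorted(range(k), key=lambda j: p_values[j])    # indices sorted by p-value
    significant = [False] * k
    for i, j in enumerate(order, start=1):
        if p_values[j] <= alpha / (1 + k - i):
            significant[j] = True
        else:
            break                                          # stop at the first non-rejection
    return significant
print(holm([0.0024, 0.0084, 0.019, 0.027, 0.12]))          # [True, True, False, False, False]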
See also:
Abdi, H. (2010). Holm’s sequential Bonferroni procedure. Encyclopedia of research design, 1.
Peres-Neto, P. R. (1999). How many statistical tests are too many? The problem of conducting multiple ecological inferences revisited. Marine Ecology Progress Series, 176, 303-306.
Alternatively, you could also argue that you don't want to adjust for multiplicity because of reasons such as being concerned about making type II errors.
See here:
Feise, R. J. (2002). Do multiple outcome measures require p-value adjustment?. BMC Medical Research Methodology, 2(1), 1.
Or maybe this one: Gelman, A., Hill, J., & Yajima, M. (2012). Why we (usually) don't have to worry about multiple comparisons. Journal of Research on Educational Effectiveness, 5(2), 189-211. | Linear mixed effects model and - multiplicity issue and adjusting for p-values | Since the reviewer only seems to be concerned about the two outcomes measured on the same subjects (and did not question the modeling procedure itself), I would simply use a sequential Bonferroni adju | Linear mixed effects model and - multiplicity issue and adjusting for p-values
Since the reviewer only seems to be concerned about the two outcomes measured on the same subjects (and did not question the modeling procedure itself), I would simply use a sequential Bonferroni adjustment (a.k.a. Holm-Bonferroni method) to correct for it.
Sort your $p$-values in ascending order
Refer to them as $p_i$ (i.e. $p_1, p_2, p_3$, etc.)
Then you adjust your $\alpha$-level and compare the $p$-values against these new $\alpha$-levels, i.e. you test whether $p_i \le \alpha / (1 + k - i)$, where $k$ is the number of statistical tests conducted, i.e. the number of $p$-values calculated. You can stop when $p_i \gt \alpha / (1 + k - i)$. Those $p_i$ that fall below the sequentially adjusted $\alpha$-levels are now your significant tests, adjusted for multiplicity (after the Holm-Bonferroni method).
For example you conducted five tests ($\alpha = 0.05$) resulting in the following $p$-values:
$p_1 = 0.0024, p_2 = 0.0084, p_3 = 0.019, p_4 = 0.027, p_5 = 0.12$
The new $\alpha$-level you compare $p_1$ against is:
$0.05/(1+5-1) = 0.01$
Since $p_1 \le 0.01$ you can move on to $p_2$:
$0.05/(1+5-2) = 0.0125$
Since $p_2 \le 0.0125$ you can move on to $p_3$:
$0.05/(1+5-3) = 0.0167$
Since $p_3 \gt 0.0167$ you can stop.
In this case, from initially four significant $p$-values, you now only have two but those are adjusted for multiplicity (Note: Instead of adjusting the $\alpha$-levels, you can also adjust the $p$-values and compare against your chosen $\alpha$-level (e.g. $\alpha = 0.05$). Then all you need to do is $(1 + k - i)*p_i$ instead).
See also:
Abdi, H. (2010). Holm’s sequential Bonferroni procedure. Encyclopedia of research design, 1.
Peres-Neto, P. R. (1999). How many statistical tests are too many? The problem of conducting multiple ecological inferences revisited. Marine Ecology Progress Series, 176, 303-306.
Alternatively, you could also argue that you don't want to adjust for multiplicity because of reasons such as being concerned about making type II errors.
See here:
Feise, R. J. (2002). Do multiple outcome measures require p-value adjustment?. BMC Medical Research Methodology, 2(1), 1.
Or maybe this one: Gelman, A., Hill, J., & Yajima, M. (2012). Why we (usually) don't have to worry about multiple comparisons. Journal of Research on Educational Effectiveness, 5(2), 189-211. | Linear mixed effects model and - multiplicity issue and adjusting for p-values
Since the reviewer only seems to be concerned about the two outcomes measured on the same subjects (and did not question the modeling procedure itself), I would simply use a sequential Bonferroni adju |
49,899 | How are performance measures affected in PU learning? | Introduction
Many practical applications have only positive and unlabeled data (aka PU learning), which poses problems in building and evaluating classifiers. Evaluating classifiers using only positive and unlabeled data is a tricky task, and can only be done by making some assumptions, which may or may not be reasonable for a real problem.
Shameless self-advertisement: For a detailed overview, I suggest reading my paper on the subject.
I will describe the main effects of the PU learning setting on performance metrics that are based on contingency tables. A contingency table relates the predicted labels to the true labels:
+---------------------+---------------------+---------------------+
| | positive true label | negative true label |
+---------------------+---------------------+---------------------+
| positive prediction | true positive | false positive |
| negative prediction | false negative | true negative |
+---------------------+---------------------+---------------------+
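As a tiny R illustration (the label vectors below are made up purely to show the layout), such a table can be produced with table(), with predictions in the rows and true labels in the columns:
# Hypothetical predicted and true labels, only to illustrate the layout
true_label <- factor(c("pos", "pos", "pos", "neg", "neg", "neg"), levels = c("pos", "neg"))
predicted  <- factor(c("pos", "neg", "pos", "pos", "neg", "neg"), levels = c("pos", "neg"))
table(predicted, true_label)   # first column: TP (top) and FN (bottom); second column: FP and TN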
The problem in PU learning is that we don't know the true labels, which affects all cells in the contingency table (not just the last column!). It is impossible to make claims about the effect of the PU learning setting on performance metrics without making additional assumptions. For example, if your known positives are a biased sample of all positives, you can't make any reliable inference (and such bias is common!).
Treating the unlabeled set as negative
A common simplification used in PU learning is to treat the unlabeled set as if it is negative, and then compute metrics as if the problem is fully supervised. Sometimes this is good enough, but this can be detrimental in many cases. I highly recommend against it.
Effect on precision. Say we want to compute precision:
$$p = \frac{TP}{TP + FP}.$$
Now, suppose we have a classifier that would be perfect if we knew the true labels (i.e., no false positives, $p=1$). In the PU learning setting, using the approximation that the unlabeled set is negative, only a fraction of the (in reality) true positives are marked as such, while the rest are counted as false positives, immediately yielding $\hat{p} < 1$. Obviously this is wrong, but it gets worse: the estimation error can be arbitrarily large, depending on the fraction of known positives over latent positives. Suppose only 1% of the positives are known and the rest are in the unlabeled set; then (still with a perfect classifier) we would get $\hat{p} = 0.01$ ... Yuck!
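To make that arithmetic concrete, here is a tiny R sketch with made-up counts (only the 1% fraction from the text matters, not the absolute numbers):
n_pos      <- 10000                 # latent positives, all predicted positive by the (truly) perfect classifier
frac_known <- 0.01                  # only 1% of the positives are labeled
tp_obs <- frac_known * n_pos        # labeled positives counted as true positives
fp_obs <- (1 - frac_known) * n_pos  # latent positives in the unlabeled set counted as false positives
tp_obs / (tp_obs + fp_obs)          # observed precision: 0.01, although the true precision is 1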
Effect on other metrics:
True Positives: underestimated
True Negatives: overestimated
False Positives: overestimated
False Negatives: underestimated
Accuracy: depends on balance and classifier
For AUC, sensitivity, and specificity I recommend reading the paper, as describing them in sufficient detail here would take us too far.
Start from the rank distribution of known positives
A reasonable assumption is that the known positives are a representative subset of all positives (e.g., they are a random, unbiased sample). Under this assumption, the distribution of decision values of known positives can be used as a proxy for the distribution of decision values of all positives (and hence also associated ranks). This assumption enables us to compute strict bounds on all entries of the contingency table, which then translates into (guaranteed!) bounds on all derived performance metrics.
A crucial observation we've made is that, in the PU learning context under the assumption mentioned above, the bounds on most performance metrics are a function of the fraction of positives in the unlabeled set ($\beta$). We have shown that computing (bounds on) performance metrics without an estimate of $\beta$ is basically impossible, as the bounds are then no longer strict.
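As a rough R sketch of that idea (plug-in point estimates only, with simulated scores and an assumed beta; the paper derives proper bounds rather than these estimates): if the known positives are representative, the fraction of known positives scoring above a threshold estimates the fraction of all latent positives above it, which together with beta fills in the contingency table.
set.seed(42)
score_lab <- rnorm(100, mean = 1)                            # decision values of the known (labeled) positives
score_unl <- c(rnorm(300, mean = 1), rnorm(600, mean = -1))  # unlabeled set: latent positives + negatives
beta <- 300 / 900                                            # assumed fraction of positives in the unlabeled set
t <- 0                                                       # some decision threshold
frac_above <- mean(score_lab > t)          # proxy for P(score > t | positive), by the representativeness assumption
n_lat_pos  <- beta * length(score_unl)     # estimated number of latent positives in the unlabeled set
tp_hat <- frac_above * n_lat_pos           # estimated true positives among the unlabeled cases above t
fp_hat <- sum(score_unl > t) - tp_hat      # remaining unlabeled cases above t
fn_hat <- n_lat_pos - tp_hat
tn_hat <- sum(score_unl <= t) - fn_hat
c(TP = tp_hat, FP = fp_hat, FN = fn_hat, TN = tn_hat)
tp_hat / (tp_hat + fp_hat)                 # estimated precision on the unlabeled set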
49,900 | R: anova() vs. Anova() for test of categorical predictor from glmer or glm.nb object | anova{stats} is for Type I only, and has no way of doing Type III ANOVA. Anova{car} uses Type II or III tests.
You might find other helpful bits of information in these two threads:
Choice between Type-I, Type-II, or Type-III ANOVA
Difference between anova and Anova function
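A minimal R sketch of the difference on a simulated binomial glmer fit (the data are made up; note that Type III tests are only meaningful with suitable contrasts, e.g. contr.sum):
library(lme4)   # glmer
library(car)    # Anova
set.seed(1)
d <- data.frame(
  subject = factor(rep(1:40, each = 5)),
  group   = factor(sample(c("a", "b", "c"), 200, replace = TRUE)),
  x       = rnorm(200)
)
d$y <- rbinom(200, 1, plogis(0.4 * (d$group == "b") + 0.5 * d$x))
m <- glmer(y ~ group + x + (1 | subject), data = d, family = binomial)
anova(m)            # stats::anova -- sequential (Type I) table of the fixed-effect terms
Anova(m, type = 2)  # car::Anova  -- Type II Wald chi-square tests
Anova(m, type = 3)  # car::Anova  -- Type III Wald chi-square tests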