idx | question | answer
---|---|---
53,501 | Continuous and differentiable bell-shaped distribution on $[a, b]$ | The truncated normal distribution satisfies all the prerequisites:
It's bell shaped
It's continuous
Its support is $x \in [a,b]$
It's differentiable, i.e. $\nabla_x p(x)$ exists for all $x \in [a,b]$
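For illustration, a minimal R sketch of this choice, assuming the truncnorm package is available (the interval and the mean/sd below are placeholders):
#Density of, and draws from, a normal truncated to [a, b]
library(truncnorm)
a <- 0; b <- 1                                             # illustrative interval
x <- seq(a, b, length.out = 200)
dens <- dtruncnorm(x, a = a, b = b, mean = 0.5, sd = 0.2)  # bell-shaped density on [a, b]
samp <- rtruncnorm(1000, a = a, b = b, mean = 0.5, sd = 0.2)
plot(x, dens, type = "l", main = "Truncated normal on [a, b]")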
53,502 | Continuous and differentiable bell-shaped distribution on $[a, b]$ | One option is to transform a beta distribution.
$Beta(3,3)$ has your desired properties on $[0,1]$.
Now subtract $1/2$ to center the distribution.
Next, multiply to stretch or compress the distribution.
Finally, add your desired mean.
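A minimal base-R sketch of this transformation (the endpoints below are placeholders):
#Map Beta(3,3) draws from [0, 1] onto a bell-shaped distribution on [a, b]
a <- 2; b <- 5                                   # illustrative interval
x <- (rbeta(10000, 3, 3) - 0.5) * (b - a) + (a + b) / 2
range(x)                                         # stays inside [a, b]
hist(x, breaks = 50)                             # bell-shaped, centred at (a + b)/2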
53,503 | Can we construct a pair of random variables having any given covariance? | Covariances cannot have arbitrary values in comparison to variances; $|\operatorname{Cov}(X,Y)| \leq \sqrt{\operatorname{Var}(X)\operatorname{Var}(Y)}$. So, yes, it is not possible to find random variables that have the alleged covariance matrix, which is, as you have discovered, not a positive semidefinite matrix. For the given variances, $|\operatorname{Cov}(X,Y)|$ has maximum value $\frac{\sqrt{2}}{20} \approx 0.0707\cdots$, and the given value $-0.4$ is way out of range.
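The bound and the failure of positive semidefiniteness are easy to check numerically; a base-R sketch, using illustrative variances of $0.1$ and $0.05$ (chosen only to reproduce the $\sqrt{2}/20$ bound quoted above):
#Cauchy-Schwarz bound on |Cov(X,Y)| and a check of positive semidefiniteness
v1 <- 0.10; v2 <- 0.05               # illustrative variances
sqrt(v1 * v2)                        # 0.0707..., the largest admissible |Cov(X,Y)|
S <- matrix(c(v1, -0.4, -0.4, v2), nrow = 2)
eigen(S)$values                      # one eigenvalue is negative, so S is not a valid covariance matrix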
53,504 | What do you do in regression if $(X^\text{T} X)^{-1}$ does not exist? | The matrix $X^\text{T} X$ is the Gramian matrix of the design matrix (assuming here that the design matrix has elements that are all real numbers). If the Gram-determinant is zero (i.e., if $\text{det} (X^\text{T} X) = 0$) then the Gramian matrix is not invertible, which means that the design matrix has at least one column of values that can be constructed as a linear combination of the other columns. When this occurs there are regression coefficients in the model that are non-identifiable and there are an infinite number of solutions for the estimated regression coefficients in the OLS/MLE problem in the regression model.
To fix this problem, we remove redundant explanatory variables from the model (corresponding to removing columns of the design matrix) until we get a design matrix that has a Gram-determinant that is non-zero. We remove the excess explanatory variables because they are not giving any additional information in the model. Once we have removed the excess explanatory variables and have a non-zero Gram-determinant for the design matrix, we can then estimate the coefficients of the reduced model and proceed as normal.
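A small base-R illustration with a made-up design matrix whose last column is a linear combination of two others:
#Design matrix with a redundant column: x3 = x1 + 2*x2
set.seed(1)
x1 <- rnorm(20); x2 <- rnorm(20)
X <- cbind(1, x1, x2, x3 = x1 + 2 * x2)
det(crossprod(X))                 # essentially zero: t(X) %*% X is singular
qr(X)$rank                        # rank 3, but X has 4 columns
X2 <- X[, 1:3]                    # drop the redundant column
solve(crossprod(X2))              # the reduced Gramian matrix is invertible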
53,505 | What do you do in regression if $(X^\text{T} X)^{-1}$ does not exist? | There are two common settings where the problem occurs in regression, and these are treated differently.
The first is when the number of columns $p$ of $X$ is greater than the number of rows $n$. In that case you can't do the regression, and you need some sort of dimension-reduction approach. You might do subset selection or $L_1$-type penalisation (lasso) or shrinkage without dropping variables (ridge regression, mixed models) or just think about variables you are willing to drop.
The second is when you have just set up $X$ wrong and the solution is to fix it. For example, if you have a $k$-level categorical variable and you set up $k$ indicator variables ('one-hot' encoding) then $X^TX$ will be singular and the solution is to change to an encoding by $k-1$ variables (just drop a variable to get treatment contrasts or switch to sum-to-zero contrasts or Helmert or whatever).
When $p$ is slightly less than $n$ and the data aren't recorded to very high precision, it's not that unusual to get $X^TX$ singular, and that's basically like the $p>n$ setting.
It's fairly unusual to have $X^TX$ singular when $p\ll n$ and there isn't a simple and fixable symbolic issue with encoding. It's not impossible; it can happen; but it's unusual. It happened a bit more often in the Bad Old Days when we were all working with stone axes and 7-digit floating point.
53,506 | Post hoc contrasts when only certain contrasts make sense | In what follows I assume that you want to control the FWER (in the strong sense). In general, if you want to test a fixed number of arbitrary planned contrasts (in your case: treatment A vs. control A, treatment B vs. control B), the Holm method can be used to control the FWER strongly within the family of hypothesis tests for these contrasts. The Holm method is more powerful than the Bonferroni correction, which could also be applied in this setting.
Note that this is to be distinguished from the case where contrasts suggested by the data (e.g., the difference between the two group means that differ the most) are tested. Here Scheffé's method (among other, more powerful, methods tailored to specific types of comparisons) could be used.
If the planned contrasts fully describe your hypotheses, there is also no need for an omnibus test.
Note also that your second hypothesis "treatment B doesn't work" suggests that you are looking for an equivalence test which, when based on two one-sided t-tests (TOST), would require rejecting two null hypotheses to support your (research) hypothesis.
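Both corrections are available in base R through p.adjust; a sketch with two made-up p-values for the two planned contrasts:
#Adjust the unadjusted p-values from the two planned contrasts
p <- c(A_trt_vs_A_ctrl = 0.012, B_trt_vs_B_ctrl = 0.060)   # illustrative values
p.adjust(p, method = "holm")          # Holm step-down adjustment
p.adjust(p, method = "bonferroni")    # more conservative Bonferroni adjustment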
53,507 | Post hoc contrasts when only certain contrasts make sense | This is a good question.
In your situation, it's common and perfectly OK to just run two t tests for the contrasts of interest, and do the Bonferroni correction manually by multiplying the p values by 2. With only two contrasts, there will be very little difference between this and other, harder to justify corrections.
53,508 | heavier tails means that it is less sensitive to outlying data for logistic and probit | The statement is not about generating extreme values from either a logistic or a normal, it's about trying to fit a logistic or a Normal to pre-existing data that may have extreme values.
You have data, and you are trying to fit a model to it. However, a data point is not an outlier if it is "expected", in some sense, by the model. Any extreme values in the data are more likely when viewed through the lens of a logistic distribution than when viewed through the lens of a Normal distribution; therefore, the parameter estimates don't get moved around as much to try to "explain" (fit) them. This can be restated as the parameter estimates being less sensitive to outliers when we use a logistic distribution than when we use a Normal distribution.
53,509 | Acceptance-Rejection Technique Theorem Proof | The probability a proposal is accepted is the sum over $j$'s that the value $j$ is (1) generated and then (2) accepted:
\begin{align}\mathbb P(\text{proposal accepted})&=\sum_{j=1}^\infty \mathbb P(\text{proposal accepted and }Y=j)\\
&=\sum_{j=1}^\infty \mathbb P(\text{proposal accepted }|Y=j)\mathbb P(Y=j)\\
&=\sum_{j=1}^\infty \frac{p_j}{c}=\frac{1}{c}\sum_{j=1}^\infty {p_j}=\frac{1}{c}
\end{align}
(The argument is much more straightforward when considering continuous densities as it corresponds to a ratio of areas, $1$ under the target versus $c$ under the proposal. The picture below is taken from our Monte Carlo book.)
[Figure from the authors' Monte Carlo book: accept-reject as a ratio of areas, $1$ under the target versus $c$ under the proposal.]
The probability that an accepted value is equal to $j$ is the probability that a value $j$ is proposed and accepted divided by the probability a value is accepted:
\begin{align}\mathbb P(X=j)&=\mathbb P(Y=j|Y\text{ is accepted})\\
&=\dfrac{\mathbb P(Y=j\text{ is accepted})}{\mathbb P(\text{proposal is accepted})}\\
&=\dfrac{p_j/c}{1/c}=p_j
\end{align}
53,510 | Meaning of "$\stackrel{p}\longrightarrow$" in math notation (arrow with a p over it) | It means convergence in probability. In your case, it's about random processes rather than random variables. It says that the sequence of random processes will converge towards a single random process.
53,511 | Why use supervised binning on train data if it leaks data? | As already noticed in the comments and another answer, you need to train the binning algorithm using the training data only; in that case it has no chance to leak the test data, as it hasn't seen it.
But you seem to be concerned with the fact that the binning algorithm uses the labels, so it "leaks" the labels into the features. This concern makes sense: after all, if you had a model like
$$
y = f(y)
$$
it would be quite useless. It would predict nothing and it would be unusable at prediction time, when you have no access to the labels. But it is not that bad.
First, notice that any machine learning algorithm has access to both the labels and the features during training, so if you weren't allowed to look at the labels while training, you couldn't do it. The best example would be the naive Bayes algorithm, which groups the data by the labels $Y$, calculates the empirical probabilities for the labels $p(Y=c)$, and the empirical probabilities for the features given (grouped by) each label $p(X_i | Y=c)$, and combines those using Bayes' theorem
$$
p(Y=c) \prod_{i=1}^n p(X_i | Y=c)
$$
If you think about it, it is almost like a generalization of the binning idea to smooth categories: in binning we transform $X_i | Y=c$ to discrete bins, while naive Bayes replaces it with a probability (continuous score!). Of course, the difference is that with binning you then use the features as input for another model, but basically the idea is like a kind of poor man's naive Bayes algorithm.
Finally, as noticed by Stephan Kolassa in the comment, binning is usually discouraged. It results in losing information, so you have worse quality features to train on as compared to the raw data. Ask yourself if you really need to bin the data in the first place.
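As an illustration of fitting the binning on the training rows only, one common form of supervised binning uses a shallow decision tree; a sketch assuming the rpart package and made-up data:
#Supervised binning via a shallow tree, learned on the training split only
library(rpart)
set.seed(1)
dat <- data.frame(x = runif(1000))
dat$y <- rbinom(1000, 1, plogis(3 * dat$x - 1.5))
train <- 1:700; test <- 701:1000
fit <- rpart(y ~ x, data = dat[train, ], method = "anova",
             control = rpart.control(maxdepth = 2))
#Each leaf is a bin; the leaf mean serves as the binned (encoded) feature
bin_train <- predict(fit, dat[train, ])
bin_test <- predict(fit, dat[test, ])   # test rows are only scored, never used to fit the bins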
53,512 | Why use supervised binning on train data if it leaks data? | If you only use training data for supervised binning, you cannot leak information from the test dataset, simply because you are not using it. So, no, when done right, there is no leakage.
53,513 | If my data doesn't completely follow the Zipf's law, how do I justify it mathematically? | The literature on the mathematical theory underlying Zipf's law is quite vast, and includes a large number of theoretical models in which the law emerges. Zipf's law is related to power laws through the fact that it asserts a power-law relationship for the rank versus frequency of the objects under analysis, so there is also a substantial literature examining the connections between the Zipf distribution and power-law behaviours in the Pareto distribution. The statistical literature on this topic is extensive, but you can find a good introductory exposition on this field in Mitzenmacher (2003). As you will see from that reference, there are a number of modelling approaches that lead to the behaviour set out in Zipf's law.
For natural language and vocabulary analysis, the most prominent modelling approach is an information-theoretic derivation akin to the work of Mandelbrot (1953). This paper uses information-optimisation to derive a slightly generalised form for Zipf's law; this model has had a large impact in information theory and has led to a range of later models. The approach used by Mandelbrot leads to a slightly generalised form of the Zipf distribution over the support $1,...,N$, defined by the proportionality relationship:
$$f(k|s,c,N) \propto \frac{1}{(k+c)^s} \cdot \mathbb{I}(k \in \{ 1,...,N \}),$$
with parameters $c \geqslant 0$ and $s > 0$. This distributional relationship is often exhibited on a log-log plot via the fact that the distribution satisfies:
$$\begin{align}
\log f(k|s,c,N)
&= \text{const} - s \log(k+c) \\[6pt]
&= \text{const} - s \log(k) - s \log \Big( 1+\frac{c}{k} \Big). \\[6pt]
\end{align}$$
In the special case where $c=0$ we see that the rank-frequency relationship will appear as a negative linear relationship on a log-log plot. For $c>0$ the relationship will appear nonlinear, but will become linear for $k \gg c$ (i.e., it will be close to linear except when $k$ is relatively low).
A useful starting point to investigate Zipf's law in empirical data is to plot the rank versus frequency on a log-log plot to see if it appears to roughly follow the above form. You can do this using the zipfplot function in the utilities package in R (example shown below). Inspection of the Zipf plot should give you a reasonable idea of whether or not your data follow the expected form under that distribution.
Once you have inspected your data on a Zipf plot, you can obtain the MLEs for the parameters $c$ and $s$ and use this to superimpose the estimated Zipf distribution onto the log-log plot, to see how closely the data follows the closest version of this distribution. You can also use goodness of fit tests to see if the variation of the data from the theoretical distribution is sufficient to falsify the assumed distributional form.
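For reference, a minimal sketch of how those MLEs could be computed with base R's optim, assuming kk is a vector of observed ranks (one entry per observation) and taking N as the largest observed rank:
#Negative log-likelihood of the generalised Zipf form f(k) proportional to (k + c)^(-s) on 1..N
nll <- function(par, kk, N) {
  s <- par[1]; cc <- par[2]
  logZ <- log(sum((1:N + cc)^(-s)))        # normalising constant
  sum(s * log(kk + cc) + logZ)
}
fit <- optim(c(s = 1, c = 0.5), nll, kk = kk, N = max(kk),
             method = "L-BFGS-B", lower = c(1e-6, 0))
fit$par                                    # estimates of s and c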
Now, if your data does depart from Zipf's law (in the generalised sense shown here) then that means you will need to investigate broader distributional forms that accommodate your data. It is a bad idea to try to "justify" Zipf's law against the evidence provided by your data --- you should allow your data to lead the analysis and seek models and distributional forms that are plausible when compared with your data. If your data does not fit the family of Zipf distributions, ideally you would broaden your analysis by examining distributional families that arise from some plausible simple change to the underlying information-theoretic models. Ideally you will end up with a distributional form that has a solid information-theoretic foundation and also fits your data well.
An example of the Zipf Plot: If you have an underlying vector of the data values from a discrete distribution, you can generate a Zipf plot of the data using the zipfplot function in the utilities package. The function automatically computes the ranks and frequencies of the outcomes in the data, so you enter the data into the function in its raw form. Here we show a simple example using data from a binomial distribution. As you can see from the plot, the distribution does not follow Zipf's law.
#Generate some mock data from a discrete distribution
set.seed(1)
XX <- rbinom(10000, size = 120, prob = 0.3)
#Plot the Zipf plot for the data
library(utilities)
zipfplot(XX)
53,514 | Does autocorrelation imply temporal dependence? | Removing non-stationarity just makes the statistical structure of your time series independent of absolute time-steps. It will typically reduce the autocorrelations but will not remove them. In some cases, it might happen that the time series consists of a deterministic seasonal component with some white noise superimposed on it. In this case, removing non-stationarity will remove the autocorrelations. But this is a special case.
A time series $X_t$ is stationary if the joint probability distribution of $(x_{t_1},x_{t_2},x_{t_3},x_{t_4},\ldots,x_{t_n})$ and $(x_{t_1+c},x_{t_2+c},x_{t_3+c},x_{t_4+c},\ldots,x_{t_n+c})$ is the same for all $n$ and $c$. Intuitively, it means that the joint distribution depends upon the relative position of your time-steps, not the absolute position.
With some work, this definition translates to some important facts:
(1) The probability distribution of $X_t$ is the same for all $t$.
(2) Expected value of $X_t$, $E(X_t)$, is independent of $t$: the mean does not change with time.
(3) In fact, the variance and all the higher moments do not change with time.
(4) The correlation between $X_{t_1}$ and $X_{t_2}$ is a function of $t_1-t_2$: it does not depend upon $t_1$ and $t_2$ individually, only on their relative positions.
Typically, a time series such as rainfall has seasonal fluctuations. Rainfall would be higher in Monsoon months than in other months. It means that the probability distribution of the rainfall time series is changing with time (because the mean is changing with time). This is the reason behind removing the trends and seasonal components before using the classical time-series methods based on the assumption of stationarity.
Yes, autocorrelation implies temporal dependence. But both stationary and non-stationary time series have temporal dependence. The nature of dependence is different for stationary and non-stationary time series. In stationary time series, the autocorrelation depends on relative position of the time-steps only. In non-stationary time series, it can also depend upon the absolute value of time-steps.
Edit: (Based on a comment by Dilip Sarwate) The stationarity definition given above defines strict-sense stationarity. However, for time-series analysis, what we typically require is something called weak-sense stationarity. A time series is stationary in the weak sense if the expected value does not change with time and the autocorrelation at time-steps $t_1$ and $t_2$ is a function of $t_1-t_2$ only. Weak-sense stationarity does not require fact (1) mentioned above.
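A base-R illustration of a stationary series that nevertheless has clear temporal dependence:
#A stationary AR(1) process: no trend or seasonality, yet strongly autocorrelated
set.seed(1)
x <- arima.sim(model = list(ar = 0.8), n = 500)
acf(x)    # autocorrelations decay with the lag but are clearly non-zero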
53,515 | Does autocorrelation imply temporal dependence? | Correlation does not imply causation, nor the other way around, and this does not change when time is involved. The same applies to autocorrelation. Correlations measure particular kinds of relationships between variables, while many other non-linear relationships are possible, so correlation and causation (or dependence), while not unrelated, are not the same.
53,516 | Does autocorrelation imply temporal dependence? | No, a stationary TS can still have an ACF showing a temporal dependency. Autocorrelation is the dependency of one point on the previous ones. This temporal dependency can be a drift or an oscillation, and those parts will indeed be removed by making it stationary. But you can still have a dependency on the previous point, if your points are not independent of one another.
E.g. think of rain or temperature measured each hour. Let's make it stationary: we take out seasonal temperature variation and, say, climate change leading to a slow temperature increase. Still, if you had a certain temperature last hour, it is unlikely that you will have a completely different temperature now. So you still have temporal dependency, but it depends only on the last points.
You might want to look at ARMA models to understand that.
53,517 | Do you need large amounts of data to estimate parameters in extreme value distributions? | It's good to have more data, always :) However, consider why we have EVT: to work with less data! Why would you need EVT if you could collect an infinite amount of data? You'd simply fit the underlying distribution and calculate any metrics on it. Because only a fraction of the data goes to the tails, we'd need to collect an enormous amount of data before we get something going in the tails. That's where EVT comes in handy: it focuses on the tails. So it allows us to study tails with much smaller data sets than would otherwise be required.
53,518 | Do you need large amounts of data to estimate parameters in extreme value distributions? | The Fisher information matrix tells you how much information there is in each observed value about your parameters. If your observations are independent, then the information in $n$ samples is $n$ times the Fisher information matrix. The inverse of the Fisher information matrix is a lower bound on the covariance of your (unbiased) estimate (Cramer-Rao bound). So if you know how accurately you want to measure your parameters, you can invert that and divide by the elements of the Fisher Information to get a rough estimate for $n$. If your estimators are not efficient you may need more.
There's an R package mle.tools for calculating Fisher information - I've not looked to see if it handles the generalised Pareto distribution, but if not, it should at least give you a starting point for some references. Or if you have the log-likelihood, the hessian() function in package numDeriv may help.
As a rule, extreme value distributions are not necessarily any harder to estimate. It depends instead on how varying the parameters changes the shape of the distribution. If varying the parameters affects the tails but leaves the central part virtually unchanged, then you need data from the tails to get a good estimate. But if the parameters change the shape of the central part as well, the bulk of the information comes from here. If you're interested, you can investigate that by considering $f(x,p) \log(f(x,p+\epsilon)/f(x,p))$ where $f(x,p)$ is the pdf at $x$ for parameter $p$, subject to a small perturbation $\epsilon$. The term $\log(f(x,p+\epsilon)/f(x,p))$ is proportional to the information in the observation of an outcome $x$ about the parameter $p$, and multiplying by $f(x,p)$ gives you the average per observation. It tells you where in the distribution the information about that parameter is typically coming from.
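A sketch of the numDeriv route for a generalised Pareto sample, with the GPD negative log-likelihood written out by hand (shape $\xi > 0$ assumed for simplicity; the data and parameter values below are placeholders):
#Observed information via a numerical Hessian of the negative log-likelihood
library(numDeriv)
nll_gpd <- function(par, dat) {            # par = c(sigma, xi), with sigma > 0, xi > 0, dat > 0
  sigma <- par[1]; xi <- par[2]
  if (sigma <= 0 || xi <= 0) return(Inf)
  length(dat) * log(sigma) + (1 / xi + 1) * sum(log(1 + xi * dat / sigma))
}
set.seed(1)
u <- runif(500)
x <- (1 / 0.3) * ((1 - u)^(-0.3) - 1)      # GPD(sigma = 1, xi = 0.3) draws via the inverse cdf
fit <- optim(c(sigma = 1, xi = 0.2), nll_gpd, dat = x)
I_obs <- hessian(nll_gpd, fit$par, dat = x)  # observed information matrix
solve(I_obs)                               # approximate covariance of (sigma, xi)
sqrt(diag(solve(I_obs)))                   # approximate standard errors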
53,519 | Trying to figure out which statistical method to use | First, I hope that this score is not the only criterion used for ranking residency applicants. When I was on a residency admissions committee, scores from interviews with faculty were only one part of the process. We would then meet together to go down the list carefully, reviewing all the candidates and frequently reordering before submitting the final rank list. Otherwise you are doing the candidates, your program, and your specialty a potential disservice.
Second, you can start getting scores to agree better among interviewers by standardizing each interviewer's scores to a mean of 0 and a standard deviation of 1. For each interviewer, subtract the mean value of all that interviewer's scores from each score, then divide by the standard deviation among that interviewer's scores. That helps take into account both differences in overall levels of scores among interviewers and differences in how widely interviewers spread their scores across the range. The "average" applicant seen by each faculty member then has a score of 0, with about 2/3 of applicants between -1 and +1. Scores among all faculty members then should be on about the same overall scales.
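In base R this per-interviewer standardization is a one-liner, assuming a data frame (here called scores, with columns score and interviewer; the names are placeholders) holding one row per interview:
#Standardize scores within each interviewer: mean 0, sd 1 per interviewer
scores$z <- ave(scores$score, scores$interviewer,
                FUN = function(s) (s - mean(s)) / sd(s))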
A related approach would be to use the rank orders among candidates for each interviewer. That could work well if all faculty conducted the same number of interviews, but could lead to problems otherwise as it might be hard to have results on the same scale among faculty.
Neither of those approaches will completely get around the problem of different applicants being interviewed by different faculty, but they should help a lot.
Those approaches should help you get an initial list for further discussion, discussion that might explicitly include the differences among interviewers in scoring.
Third, averaging scores even after standardization doesn't deal with potential candidate-specific bias (positive or negative) by a single interviewer. With a larger admissions committee of about a dozen, we agreed to throw out the top and bottom scores for each applicant to try to minimize that type of problem. That would be harder with your small number of interviewers. In that regard, you might consider having 2 faculty interview each candidate together (same room, same time). That helps make sure that interview results are reported back to the full committee faithfully. Otherwise there's a danger that a single interviewer might have misunderstood (or misstated) some applicant's responses.
Finally, as the comment on the question from @Dave notes, adjustment of scores might not be your biggest problem in selecting a rank order among applicants. Do ask whether your entire process is leading to the best selection of new residents.
53,520 | GLM negative binomial - what to do when one category has only zeros? | Adding to the other answers with some experimental calculations. The large standard error for managementD is caused by small sample size. The standard error you've got is based on an approximation, based on the loglikelihood function being approximately quadratic, which it is not. We can try to get a confidence interval by profiling, but the R profile function do not work with glm.nb, so I try a workaround using the package bbmle:
counts <- (c(67, 194, 155, 135, 146, 257, 114, 134, 111, 87,
62, 67, 85, 89, 63, 86, 97, 44, 0, 0, 0, 0, 0, 0))
management <- rep(LETTERS[1:4], each = 6)
mydf <- data.frame(counts, management)
model.bbmle <- mle2(counts ~ dnbinom(mu=exp(logmu), size=exp(logtheta)),
method= "BFGS", parameters=list(logmu ~ 0 +
management),
data=mydf, start=list(logmu=0,
logtheta=2.42),
control=list(trace=1) )
summary(model.bbmle)
Maximum likelihood estimation
Call:
mle2(minuslogl = counts ~ dnbinom(mu = exp(logmu), size = exp(logtheta)),
start = list(logmu = 0, logtheta = 2.42), method = "BFGS",
data = mydf, parameters = list(logmu ~ 0 + management), control = list(trace = 1))
Coefficients:
Estimate Std. Error z value Pr(z)
logmu.managementA 5.06898 0.12585 40.2783 < 2.2e-16 ***
logmu.managementB 4.56265 0.12856 35.4897 < 2.2e-16 ***
logmu.managementC 4.34811 0.13017 33.4038 < 2.2e-16 ***
logmu.managementD -11.55880 132.09516 -0.0875 0.9303
logtheta 2.42213 0.36435 6.6478 2.976e-11 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
-2 log L: 175.9181
The fit is comparable to yours with glm.nb: the maximized log-likelihood is equal (note the changed parametrization), and the standard error is lower, but still huge!
Now, we can try profiling, but this does not work very well, so I will only give the code:
prof.4 <- bbmle::profile(model.bbmle, which=4, maxsteps=1000,
alpha=0.005, trace=TRUE)
confint(prof.4)
2.5 % 97.5 %
NA -89.53398
Warning messages:
1: In .local(object, parm, level, ...) :
non-monotonic spline fit to profile (logmu.managementD): reverting from spline to linear approximation
2: In regularize.values(x, y, ties, missing(ties),
na.rm = na.rm) :
collapsing to unique 'x' values
The returned interval (remember the log scale!) does not make sense, and we will soon understand why. I will show a plot of a section of the log-likelihood function along the D axis (this is not the same as profiling, since the other parameters are held fixed). This is some ugly code I do not fully understand (caused by bbmle using the S4 object system):
B <- coef(model.bbmle)
minuslogl_0 <- slot(model.bbmle, "minuslogl")
minuslogl <- function(B) do.call("minuslogl_0", namedrop(as.list(B)))
But now we can make a plot of a section of the minus-log-likelihood function along the D axis, where the other parameters are held at their maximum-likelihood estimates:
On the x-axis is the deviation of the D parameter from its maximum-likelihood value. One can see that no lower bound can be set (or, on the original scale, 0 is the lower bound), but a sharp upper bound can be set, and it will be smaller than what is indicated by the standard-error calculation. The code used is
delta <- 10
plot( Vectorize( function(x) minuslogl(B + c(0, 0, 0, x, 0)) ),
from=-delta, to=delta, ylab="minusloglik",
main="Section of negative loglikelihood function",
      col="red")
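A small follow-up sketch (my addition, reusing the B and minuslogl objects defined above): the likelihood-ratio cutoff gives an explicit upper confidence limit for the managementD coefficient, which is indeed far tighter than the Wald standard error suggests.
cutoff <- minuslogl(B) + qchisq(0.95, 1) / 2    # about 1.92 above the minimum
f <- function(x) minuslogl(B + c(0, 0, 0, x, 0)) - cutoff
B[4] + uniroot(f, lower = 0, upper = 20)$root   # approximate upper limit on the log scale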
53,521 | GLM negative binomial - what to do when one category has only zeros? | I dissent somewhat from the first answer by @dariober.
Adding 1 is a fudge.
There is no substantive reason for disbelieving zeros as recorded in the sample.
Most important, model fits are reasonable, the only oddity being the rather wide confidence intervals in one case. There is some robustness, as Poisson and negative binomial fits are essentially identical in fitted values. (Indeed, for this structure, all plausible models, and some not so plausible ones, essentially return group means as fitted values. The only differences are inferential small print, and if you're queasy about this you really need a bigger dataset! Easy to say....)
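A quick R cross-check of that point (a sketch only; counts and management as defined in the other answers, and expect a warning about fitted rates of zero for group D):
tapply(counts, management, mean)                                       # observed group means
unique(round(fitted(glm(counts ~ management, family = poisson)), 1))   # Poisson fitted values, one per group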
A graph shows it all:
For completeness, here is the Stata code I used. Naturally, the calculations are simple in any modern statistical environment.
clear
mat counts = (67,194,155,135,146,257,114,134,111,87,62,67,85,89,63,86,97,44,0,0,0,0,0,0)
set obs 24
gen counts = counts[1, _n]
egen management = seq(), block(6)
label define management 1 A 2 B 3 C 4 D
label val management management
glm counts i.management , family(poisson)
predict poisson
glm counts i.management , family(nbinomial)
predict nbinomial
* uncomment next if you need to install
* ssc install stripplot
gen management1 = management - 0.1
gen management2 = management - 0.2
stripplot counts , over(management) vertical stack height(0.3) legend(on order(1 "data" 2 "Poisson fit" 3 "Negative binomial fit")) yla(, ang(h)) ///
addplot(scatter poisson management2, ms(D) || scatter nbinomial management1, ms(T))
EDIT: For a slightly less ad hoc method of injecting a Bayes flavour than just adding 1 to all counts, I used quasi-Bayes smoothing as suggested by I.J. Good (for a self-contained account see this paper; typo fix within this paper, pp. 494-495) before pushing those adjusted counts through a Poisson GLM (using robust (sandwich-Huber-Eicker-White) standard errors). The P-values make more sense, while at the same time the predicted means are not that different from any other fit. There will be other and arguably better ways to do this.
-------------------------------------------------------------
management | mean Poisson nbinomial qs_Poisson
-------------+-----------------------------------------------
A | 159.0 159.0 159.0 157.7
B | 95.8 95.8 95.8 95.6
C | 77.3 77.3 77.3 77.4
D | 0.0 0.0 0.0 1.5
-------------------------------------------------------------
53,522 | GLM negative binomial - what to do when one category has only zeros? | As requested by the OP in comments, I'm going to give an example in R of applying likelihood ratio test (LRT) to test differences between management groups as suggested by @GordonSmyth. I'm not sure I'm getting this right so please check it - credit goes to Gordon, faults are mine.
With LRT we check for significant differences between nested models. To apply it to this case, we need to expand the factors in management to a matrix (I guess glm does this internally anyway). Then we can drop each factor in turn and see if the simpler model differs from the full one:
library(MASS)
counts <- c(67, 194, 155, 135, 146, 257, 114, 134, 111, 87,
62, 67, 85, 89, 63, 86, 97, 44, 0, 0, 0, 0, 0, 0)
management <- rep(LETTERS[1:4], each = 6)
design <- model.matrix(~ management)
design
(Intercept) managementB managementC managementD
1 1 0 0 0
2 1 0 0 0
3 1 0 0 0
4 1 0 0 0
5 1 0 0 0
6 1 0 0 0
7 1 1 0 0
8 1 1 0 0
9 1 1 0 0
10 1 1 0 0
11 1 1 0 0
12 1 1 0 0
13 1 0 1 0
14 1 0 1 0
15 1 0 1 0
16 1 0 1 0
17 1 0 1 0
18 1 0 1 0
19 1 0 0 1
20 1 0 0 1
21 1 0 0 1
22 1 0 0 1
23 1 0 0 1
24 1 0 0 1
Fit the full model; we tell glm.nb to omit the intercept since this is already encoded in the design matrix. You may want to check that this is the same as using glm.nb(counts ~ management):
fit_full <- glm.nb(counts ~ 0 + design)
Now we drop group B, fit the reduced model and compare with the full one. This should be equivalent to assessing the significance of the difference between group A and group B. We get a p-value of ~0.01:
design_red <- design[, - which(colnames(design) == 'managementB')]
fit_red <- glm.nb(counts ~ 0 + design_red)
anova(fit_full, fit_red)
Likelihood ratio tests of Negative Binomial Models
Response: counts
Model theta Resid. df 2 x log-lik. Test df LR stat. Pr(Chi)
1 0 + design_red 7.668 21 -182.4
2 0 + design 11.274 20 -175.9 1 vs 2 1 6.531 0.0106
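As a quick aside (my own check, not in the original answer), the reported p-value is just the upper tail of a chi-squared distribution with 1 degree of freedom evaluated at the LR statistic:
pchisq(6.531, df = 1, lower.tail = FALSE)   # ~0.0106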
We can do the same for group D:
design_red <- design[, - which(colnames(design) == 'managementD')]
fit_red <- glm.nb(counts ~ 0 + design_red)
anova(fit_full, fit_red)
Likelihood ratio tests of Negative Binomial Models
Response: counts
Model theta Resid. df 2 x log-lik. Test df LR stat. Pr(Chi)
1 0 + design_red 829670.23 21 -1640.8
2 0 + design 11.27 20 -175.9 1 vs 2 1 1465 0
Unsurprisingly, the p-value for the difference between A and D is next to 0. Note that the theta parameter for the reduced model is huge and glm.nb issues warnings. I'm not sure how to interpret these but I guess it's not surprising since the intercept includes large-ish values with a string of zeros.
To test the difference between, say, B and C I would recode the full matrix to use B instead of A as intercept and proceed as above - I think there are better ways though.
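A sketch of that releveling idea (my addition, not from the post): make B the reference level, rebuild the design matrix, and drop the C column to get the B-vs-C likelihood ratio test.
management2 <- relevel(factor(management), ref = 'B')
design2 <- model.matrix(~ management2)
fit_full2 <- glm.nb(counts ~ 0 + design2)
fit_red2 <- glm.nb(counts ~ 0 + design2[, -which(colnames(design2) == 'management2C')])
anova(fit_full2, fit_red2)   # LRT for the B vs C difference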
Hope this helps and I got it right. However, I still think my other solution adding pseudocounts is worth considering.
53,523 | GLM negative binomial - what to do when one category has only zeros? | For an explanation of why this is happening see GLM for count data with all zeroes in one category.
I know I could do a bayesian approach but before I go into that, I would like to know if there's a work-around that allows me to do this analysis using a frequentist
I think you could just add 1 to all observations and then get sensible estimates:
model <- glm.nb(counts+1 ~ management)
> summary(model)
Call:
glm.nb(formula = counts + 1 ~ management, init.theta =
11.87310622, link = log)
Deviance Residuals:
Min 1Q Median 3Q Max
-2.4588 -0.3605 0.0000 0.4648 1.7339
Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) 5.0752 0.1228 41.330 < 2e-16 ***
managementB -0.5022 0.1756 -2.860 0.00424 **
managementC -0.7142 0.1768 -4.041 5.33e-05 ***
managementD -5.0752 0.4425 -11.470 < 2e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
(Dispersion parameter for Negative Binomial(11.8731) family taken to be 1)
Null deviance: 310.516 on 23 degrees of freedom
Residual deviance: 18.652 on 20 degrees of freedom
AIC: 198.44
Number of Fisher Scoring iterations: 1
Theta: 11.87
Std. Err.: 4.23
2 x log-likelihood: -188.443
The rationale would be that zeros are not really a possible outcome and they are just a consequence of the sampling. By adding 1 you reset to a more realistic lower bound. Of course, this is acceptable if adding 1 doesn't skew the dataset too much, which doesn't seem to be the case here. I think the logic is not too dissimilar from a Bayesian approach.
I add simulation results to illustrate the effect of adding 1 to the raw counts. Incidentally, I think this also illustrates the tradeoff between bias and variance.
I keep the counts for groups A, B, and C constant (I don't think this matters here). For group D I simulate counts from a negative binomial distribution with varying mean and size. Then I fit the glm.nb model with and without adding 1. For simplicity, I made group D to be the intercept.
Here's a summary plot of the estimates for group D, note that I reset the estimates below -10 to -10 for ease of visualization:
Horizontal lines are the true means. Each box is 500 simulations.
When the true mean is very low (about < 1), the estimates from corrected counts noticeably overestimate the true mean (they are biased) but they are very consistent between replicates (low variance).
Conversely, with low means, the estimates from raw counts are very unstable between replicates (high variance). After repeating the same experiment you could get considerably different results.
When the true mean is above about 5, the effect of adding 1 starts vanishing.
I think it's up to the analyst to decide how to proceed, but by gut feeling I would say that adding 1 "makes sense" here. I think the issue is not so much the numerical stability of the uncorrected estimates but rather whether they are meaningful.
Code:
library(data.table)
library(ggplot2)
counts <- c(67, 194, 155, 135, 146, 257, 114, 134, 111, 87, 62,
67, 85, 89, 63, 86, 97, 44)
management <- rep(LETTERS[1:4], c(6, 6, 6, 6))
management <- relevel(as.factor(management), ref= 'D')
seed <- 1
dat <- list()
for(i in 1:500) {
for(mu in c(0.1, 1, 5, 10)) {
for(size in c(0.1, 1, 10)) {
set.seed(seed)
d <- data.table(
seed= seed,
mu= mu,
size= size,
counts= c(counts, rnbinom(n= sum(
management == 'D'), mu= mu, size= size)),
management= management
)
dat[[length(dat) + 1]] <- d
seed <- seed + 1
}
}
}
dat <- rbindlist(dat)
estimates <- dat[, list(raw= coef(glm.nb(counts ~
management))[1], corrected= coef(glm.nb(counts + 1 ~
management))[1]), by= list(seed, mu, size)]
estimates <- melt(estimates, id.vars= c('seed', 'mu', 'size'), variable.name= 'method', value.name= 'estimate')
estimates[, label := sprintf('True mean count= %s', mu)]
estimates[, label := factor(label, sprintf('True mean count= %s', unique(sort(mu))))]
gg <- ggplot(data= estimates, aes(x= as.factor(size), y=
ifelse(estimate < -10, -10, estimate), colour= method)) +
geom_hline(data= unique(estimates[,list(mu, label)]),
aes(yintercept= log(mu)), colour= 'grey30',
linetype= 'dashed') +
geom_boxplot() +
xlab('Size (dispersion parameter)') +
ylab('Estimate (capped to -10)') +
theme_light() +
theme(strip.text= element_text(colour= 'black',
size= 12)) +
facet_wrap(~ label, scales= 'free_y')
EDIT: I see quite a bit of skepticism about this solution and I'm slightly surprised about it since adding 1 to count data is not unheard of.
For example, gene expression data from sequencing technology come as count data. A very respected method of differential expression analysis is limma-voom (Gordon Smyth is a coauthor, my apologies if I'm misquoting). From the paper:
The counts are offset away from zero by 0.5 to avoid taking the log of zero, and to reduce the variability of log-cpm for low expression genes.
This is exactly what I'm proposing here.
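A rough sketch of that offset (illustrative only; the library sizes here are made up, and this is just the voom-style log-cpm transformation rather than the full method):
lib_size <- rep(1e6, length(counts))              # assumed library sizes
log_cpm <- log2((counts + 0.5) / (lib_size + 1) * 1e6)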
Gene expression is never really zero and the difference between a count of 0 and a count of 1 is biologically irrelevant anyway, typically. Is it the same for the OP's case? I don't know but I suspect it is. Does it really matter whether group D has exactly 0 zebras or just 1 or 2 that went undetected in just 6 observations? If the answer is no, then adding 1 is more sensible than producing -Inf coefficients.
53,524 | Can probability distributions be used as an alternative for regression models? | You can, but not without consequences.
Linear regression is a pretty flexible model in terms of your ability to define the functional relationship between the features and the dependent variable. If you use a multivariate distribution, you are limited by the kinds of relationships between variables that are possible under that distribution.
For fitting the distribution you need to make more assumptions: for example, if you choose a multivariate normal distribution you assume that all the variables follow a normal distribution, versus only $Y$ (conditionally) as in linear regression.
For fitting the distribution you may need much more data. Think of a discrete distribution: in the conditional-distribution case (regression) you only need enough data to observe the relations of the other variables with $Y$; with the joint distribution you need data for all the combinations of all the levels of all the variables.
It is easier to focus only on the conditional distribution and conditional expectation, as we do with linear regression.
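A toy count to make the data-hunger point concrete (numbers assumed, not from the answer): with three discrete variables of 10 levels each, the joint distribution has 10^3 = 1000 cells to estimate, while a conditional model of one variable given the other two only has to cover the 10^2 = 100 predictor combinations.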
53,525 | Can probability distributions be used as an alternative for regression models? | I am fairly sure it's valid to treat regression model as a joint probability distribution. Let $s,h,w$ be salary, height, weight, then a linear regression
$$
s = \beta_0 + \beta_1h + \beta_2w + \epsilon \\
\epsilon \sim N(0, \sigma^2)
$$
where $\beta_0, \beta_1, \beta_2$ are regression coefficients, posits the distribution
$$
s \sim N(\beta_0 + \beta_1h + \beta_2w, \sigma^2)
$$
The mean salary for a specific combination of weight and height is exactly what is being modelled in a linear regression, and this is also exactly what a conditional expectation of salary for a given weight and height would aim to provide in your approach.
Of course the accuracy of this prediction depends on the normality of the error term $\epsilon$; if this assumption is violated then predictions will not be accurate.
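A quick simulation sketch of this equivalence (all numbers are made up): the fitted regression is exactly an estimate of the conditional mean of salary given height and weight.
set.seed(1)
h <- rnorm(1000, 170, 10); w <- rnorm(1000, 70, 8)
s <- 1000 + 50 * h + 30 * w + rnorm(1000, 0, 500)
fit <- lm(s ~ h + w)
predict(fit, data.frame(h = 175, w = 75))   # estimated E[salary | height = 175, weight = 75]
1000 + 50 * 175 + 30 * 75                   # true conditional mean, 12000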
53,526 | Can probability distributions be used as an alternative for regression models? | Yes, you definitely can. Moreover, it does not have to be any linear kind of regression. In the most general case, the following method will allow you to effectively turn any parametric or even non-parametric probability density estimation into a regressor for the desired output (i.e., the 3rd variable in your case, which is salary).
The following method is from the book:
Deep Learning, by Goodfellow, Bengio, and Courville, 2016 (page 103 or 104, from section 5.1.3), https://www.deeplearningbook.org/contents/ml.html.
Note: Despite the name of the book, the following method is completely general, and suitable beyond deep learning or any other neural network, or even other machine learning methods, for that matter.
The method:
Assume you've modeled the probability distribution over the input vector $\textbf{v}$ as $p(\textbf{v})$ by any parametric or non-parametric probability density estimation technique. In your example: $\textbf{v}$ = [height, weight, salary] $\in \mathbb{R}^3$.
Now, you decide to estimate one component $y$ of the vector $\textbf{v}$ from the remaining "input" components $\textbf{x}$, i.e., estimate salary from height and weight. Let's denote: $\textbf{v} = (\textbf{x}, y)$, where $y$ is the desired component to be estimated, and $\textbf{x}$ are the remaining "input" components.
Using the definition of conditional probability, the estimation of the probability of $y$ given the other components $\textbf{x}$ is:
$$p(y | \textbf{x}) = \frac{p(\textbf{x}, y)}{p(\textbf{x})} = \frac{p(\textbf{v})}{\sum_{y'}{p(\textbf{x}, y')}}$$
where: $$p(\textbf{x}) = \sum_{y'}{p(\textbf{x}, y')}$$ by the law of total probability, over quantized (discretized) values $y'$ of the component $y$. But you can use un-quantized continuous values using a suitable 1D integration technique of your choice:
$$p(\textbf{x}) = \int_{y'}{p(\textbf{x}, y')}dy'$$ by the law of total probability for continuous values $y'$ of the component $y$.
Observe that instead of a specific point estimate of $y$ you obtain a posterior probability density estimate $p(y | \textbf{x})$ of $y$ given the inputs $\textbf{x}$. If you only need a point estimate, you can simply choose, for example: $$\hat{y} = \operatorname{arg\,max}_{y'}\; p(y' | \textbf{x})$$ where $\hat{y}$ is the maximum a posteriori point estimate.
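A tiny numerical sketch of the recipe (my own example with one predictor for brevity; all values assumed): estimate the joint density, slice it at the observed x, renormalise, and take the argmax.
library(MASS)
set.seed(1)
x <- rnorm(2000); y <- 2 * x + rnorm(2000)
dens <- kde2d(x, y, n = 200, lims = c(-4, 4, -10, 10))
ix <- which.min(abs(dens$x - 1))            # slice at x0 = 1
post <- dens$z[ix, ] / sum(dens$z[ix, ])    # discretised p(y | x = x0) on the y grid
dens$y[which.max(post)]                     # MAP point estimate, close to 2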
53,527 | Delta method for Poisson ratio | Use a vector version of the delta method. You have convergence of
$$\sqrt{n}(\bar X-\lambda,\, \bar Y-\theta)$$ to a bivariate Normal, and
the function
$$f(\bar X, \bar Y)=\frac{\bar X}{\bar X+\bar Y}$$
is differentiable (away from $\lambda=\theta=0$), so the delta method applies.
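Filling in the delta-method step explicitly (my addition): the gradient is $\nabla f(\lambda,\theta) = \left(\frac{\theta}{(\lambda+\theta)^2},\, -\frac{\lambda}{(\lambda+\theta)^2}\right)$ and the asymptotic covariance of $\sqrt{n}(\bar X-\lambda,\, \bar Y-\theta)$ is $\operatorname{diag}(\lambda,\theta)$, so the asymptotic variance of the ratio is $\frac{\lambda\theta^2 + \theta\lambda^2}{(\lambda+\theta)^4} = \frac{\lambda\theta}{(\lambda+\theta)^3}$, which matches the $p(1-p)/(\lambda+\theta)$ form below.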
That isn't how I'd actually work out the answer, though. I would argue that conditional on $N=\sum_i X_i+Y_i$, the sum $\sum_i X_i$ is Binomial$(N, \lambda/(\lambda+\theta))$, so the ratio you're interested in is (conditionally) a binomial proportion, which is asymptotically Normal. Then I would note that $N/n\stackrel{a.s.}{\to}\lambda+\theta$, so that
$$\sqrt{n}\left(\frac{\bar X}{\bar X+\bar Y}-p\right)\stackrel{d}{\to} N\left(0, \frac{p(1-p)}{\lambda+\theta}\right)$$
where $p=\lambda/(\lambda+\theta)$
53,528 | Delta method for Poisson ratio | This is just a visual comment on Thomas Lumley's answer (+1), illustrating it by simulation (blue) against his approximating normal distribution (red) using R
set.seed(2021)
lambda <- 2
theta <- 5
n <- 1000
cases <- 10^5
Xbar <- rpois(cases, n * lambda) / n
Ybar <- rpois(cases, n * theta ) / n
ratio <- Xbar / (Xbar + Ybar)
plot(density(ratio), col="blue")
curve(dnorm(x, lambda/(lambda+theta), sqrt(lambda*theta/(lambda+theta)^3/n)),
from=min(ratio), to=max(ratio), col="red", add=TRUE)
As a couple of extra comments:
There is a slight issue that there is always a positive probability that you get $\frac00$. So the actual ratio distribution is not well defined, though if $n\lambda$ and $n\theta$ are both large the probability of this is extremely small
You do not have to take means, as the ratio of the sums $\frac{\sum X_i}{\sum X_i +\sum Y_i}$ has the same distribution
Since the sums are themselves Poisson distributed, you might then, in a handwaving way, say that for large $\lambda$ and $\theta$ the distribution of the ratio $\frac{X}{X+Y}$ is approximately $N\left(\frac{\lambda}{\lambda+\theta}, \frac{\lambda\theta}{(\lambda+\theta)^3}\right)$ and that the probability of seeing $\frac00$ is only $e^{-(\lambda+\theta)}$
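As a numerical aside (my arithmetic, not in the answer): with $\lambda=2$ and $\theta=5$ as in the simulation, $e^{-(\lambda+\theta)} = e^{-7} \approx 9\times 10^{-4}$, while for the averaged version the corresponding probability $e^{-n(\lambda+\theta)}$ is negligible at $n=1000$.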
53,529 | Central Limit Theorem with Bounded Sum of Variances? | Heuristically, if the sum of the variances isn't infinite, there is some residual shape information in the sum about some of the individual variables. For example, if
$$\mathrm{var}[X_1]=\epsilon\sum_i \mathrm{var}[X_i]$$
then $X_1$ makes up $\epsilon>0$ of the limiting random variable and the shape of $X_i$ (tails, moments, etc) has $\epsilon$ effect on the shape of the limiting distribution.
One way to think about why there's a problem is the Levy-Cramér theorem, which says that if $Y_1$ and $Y_2$ are independent and not constant, and $Y_1+Y_2$ has a Normal distribution, then both $Y_1$ and $Y_2$ have Normal distributions.
Now take $Y_1$ to be $X_1$ and $Y_2$ to be the sum of the rest of the sequence. If $\sum_i \mathrm{var}[X_i]=S^2<\infty$, then $Y_1$ and $Y_2$ are non-constant independent random variables. Unless they are both Normal, their sum isn't Normal -- and so the sum of the series isn't Normal. You can see this argument breaks down if $S^2$ is infinite, as then $Y_1/S$ is constant.
In special cases you can think about this in terms of moments. For example, suppose $Y_2$ is Normal but $Y_1$ has non-zero skewness. Then $Y_1+Y_2$ will have non-zero skewness. The Levy-Cramér argument does the same sort of thing, only a lot more general.
[If only finitely many $\mathrm{var}[X_i]$ are non-zero, the Levy-Cramér argument is much more direct, since the ones with non-zero variance immediately have to be Normal, but that's a special case]
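A concrete example of a non-Normal limit (my addition, not from the answer): let $X_i = \pm 2^{-i}$ with equal probability, independently. Then $\sum_i \mathrm{var}[X_i] = \sum_i 4^{-i} = 1/3 < \infty$, and $\sum_i X_i$ converges to a Uniform$(-1,1)$ random variable, which is certainly not Normal.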
53,530 | Definition of a one-sided test | $H_0: \theta = \theta_0$ versus $H_1: \theta \gt \theta_0$ is OK, but some authors might write
$H_0: \theta \le \theta_0$ versus $H_1: \theta \gt \theta_0.$
$H_0: \theta = \theta_0$ versus $H_1: \theta \lt \theta_0$ is OK, but
some authors might write
$H_0: \theta \ge \theta_0$ versus $H_1: \theta \lt \theta_0.$
In each instance above, a test of $H_0$ would use $\theta = \theta_0$ to get the null distribution, used to compute the P-value of the test.
Both of the formulations below are wrong, because $H_0$ must always contain
an $=$-sign, whether as $\theta = \theta_0,$ $\theta \le \theta_0,$ or as $\theta \ge \theta_0.$
$H_0: \theta < \theta_0$ versus $H_1: \theta = \theta_0.$
$H_0: \theta > \theta_0$ versus $H_1: \theta = \theta_0.$
53,531 | Definition of a one-sided test | It would be more appropriate to write them this way.
$H_0: \theta = \theta_0~~$ versus $~~H_1: \theta \ne \theta_0$ (two-sided)
$H_0: \theta \le \theta_0~~$ versus $~~H_1: \theta > \theta_0$ (one-sided [upper-tailed])
$H_0: \theta \ge \theta_0~~$ versus $~~H_1: \theta < \theta_0$ (one-sided [lower-tailed])
53,532 | Test to determine whether coin is fair or not | Ten tosses of a coin. Test $H_0: p = 1/2$ against $H_a: p \ne 1/2.$ Comment at the start: there is not a lot of information in only ten tosses of a coin, so in order to reject $H_0$ we will have to observe very few heads (0 or 1) or very many (9 or 10).
Normal approximation: Under $H_0,$ the number $X$ of heads seen in $n = 10$ independent tosses has $X \sim\mathsf{Binom}(n=10,\,p=1/2),$ which has $\mu = E(X) = np = 5$ and $\sigma = \sqrt{np(1-p)} = \sqrt{2.5} = 1.581139.$
Then $Z = \frac{X-\mu}{\sigma} \stackrel{aprx}{\sim} \mathsf{Norm}(0,1),$ so we reject $H_0$ at about the 5% level by rejecting for $|Z| \ge 1.96.$
Example: Suppose you observe $x = 3$ heads in $n = 10$ tosses.
Then $|Z|=|\frac{3-5}{1.5811}| = |-1.265| < 1.96,$ so you do not
have sufficient evidence to reject $H_0$ at the 5% level of significance.
Exact binomial test. This two-sided test rejects $H_0$ when
$X$ is sufficiently far from the expected value $\mu=5$ under $H_0.$
For observed value $x,$ the P-value is $P(X \le x)+P(X \ge n-x).$
Example: Same as above: $x = 3.$
We seek $P(X \le 3) + P(X \ge 7) = 0.3438 > 0.05 = 5\%,$ so we do
not reject $H_0$ at the 5% level of significance.
sum(dbinom(c(0:3, 7:10), 10, .5))
[1] 0.34375
This exact binomial test is implemented in R as 'binom.test', which gives the same P-value $0.3438$ that we obtained from the binomial distribution above.
binom.test(x=3, n=10, p=.5)
Exact binomial test
data: 3 and 10
number of successes = 3, number of trials = 10, p-value = 0.3438
alternative hypothesis:
true probability of success is not equal to 0.5
95 percent confidence interval:
0.06673951 0.65245285
sample estimates:
probability of success
0.3
Notes: (a) Back to the comment at the beginning. For the exact
binomial test the rejection region for $H_0: p=.5$ against
$H_a: p \ne .5$ is to observe $0,1,9,$ or $10$ Heads, so
this is really a test at about the 2% level. (Including $2$ and $8$ in the rejection region would make it a test at about the 11% level.) Because of the discreteness of binomial distributions a straightforward test at the 5% level is not
possible.
sum(dbinom(c(0,1,9,10), 10, .5))
[1] 0.02148437
sum(dbinom(c(0,1,2,8,9,10), 10, .5))
[1] 0.109375
Thus the power to detect as biased a coin with P(H) = 0.3
is only about 15%.
sum(dbinom(c(0,1,9,10), 10, .3))
[1] 0.149452
(b) There are two difficulties with an approximate normal test in this situation. (i) $n = 10$ is not quite large enough
to guarantee a good approximation to binomial probabilities.
(ii) The approximate test may make it appear that a test at the 5% level is possible, but with $n=10, p=0.5$ values of $|Z|$ near
1.96 are not possible, so the actual significance level
is closer to 2% than 5%.
53,533 | Test to determine whether coin is fair or not | Another way to extract probabilities is to simulate :
Imagine a fair coin.
Toss it 10 times and write down the number of heads.
According to the central limit theorem, the number of heads in 10 tosses will follow a Normal-like distribution with mean $np = 5$ and standard deviation $\sqrt{np(1-p)} \approx 1.58$.
Then do it all over again. And again. And again. N times (maybe N = 1000000 times)
Then calculate the 2.5th and 97.5th percentiles of the distribution of the N simulated number of heads.
Now you toss your real coin ten times and write the number of heads.
If you got fewer heads than the 2.5th percentile, or more heads than the 97.5th, then you decide that the coin is not fair.
Don't forget that this time, when you tossed your real coin 10 times and rejected the hypothesis of the fair coin, it could be among the 5% of identical experiments where the result would have been extreme enough to get rejected despite the coin being fair.
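A minimal sketch of this simulation in R (names are illustrative, not from the original answer):
set.seed(1)
N <- 1e6
heads <- rbinom(N, size = 10, prob = 0.5)   # number of heads in 10 fair tosses, repeated N times
quantile(heads, c(.025, .975))              # roughly 2 and 8
# fewer than 2 or more than 8 heads on the real coin would then look suspicious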
53,534 | Test to determine whether coin is fair or not | The formula to calculate the approximate confidence limits for a binomial test is:
$z_{\alpha/2}\sqrt{pq/n}$
In your case for a fair coin $p = q = 0.5$, and using $z_{\alpha/2}=1.96$ for a 95% confidence limit.
The range of heads for 10 flips is expected to be between
$10\,(0.5 \pm 1.96\sqrt{0.025})$, or 1.9 to 8.1 heads,
with a 95% confidence level.
Or to rearrange to calculate the test statistic:
$\text{test statistic} = \frac{\text{observed} - n\,p_{\text{expected}}}{\sqrt{npq}}$
In R use binom.test(5, 10, p=0.5)
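As a quick sketch, the range quoted above can be reproduced directly in R (assuming n = 10 fair tosses):
n <- 10; p <- 0.5
n * (p + c(-1, 1) * 1.96 * sqrt(p * (1 - p) / n))   # about 1.9 to 8.1 heads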
53,535 | Test to determine whether coin is fair or not | The Red Bead experiment and Deming is where I start.
An unfair coin is a special or assignable cause of variation. When should we look for a special cause? When we see something beyond 3 standard deviations.
10 flips might not be enough.
$0.5 \pm 3\sqrt{0.5 \times 0.5 / N}$, where $N$ is the number of flips.
For $N = 10$ this is more than about 9.74 heads (or tails), which might be worth looking at. But can you have 0.74 of a head? No. So you should round up to 10, in which case it is probably not meaningful to investigate if 10 flips in a row is all you have.
But a better approach might be to assume the coin is unfair and follow Von Neumann who described a procedure like this:
Toss the coin twice.
If the outcome of the two tosses is the same (HH or TT), start over and disregard the current pair.
If the outcome of the two tosses is different (HT or TH), take the first toss as the result and forget the second.
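A small sketch (not from the original answer) of von Neumann's procedure, showing that it produces roughly fair bits even from a badly biased coin:
von_neumann <- function(n_bits, prob_heads = 0.3) {
  out <- integer(0)
  while (length(out) < n_bits) {
    pair <- rbinom(2, 1, prob_heads)                  # two tosses of the biased coin
    if (pair[1] != pair[2]) out <- c(out, pair[1])    # keep the first toss of an HT/TH pair
  }
  out
}
set.seed(1)
mean(von_neumann(10000))   # close to 0.5 even though prob_heads = 0.3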
53,536 | Iteratively Reweighted Least Squares - Weights Confusion | The IWLS algorithm for generalised linear models is different from that for a heteroscedastic linear model because it accounts for two things:
the non-linear link function
the variance-mean relationship
The likelihood score equations look like
$$\frac{d\mu}{d\beta}\frac{1}{V(\mu)}(Y-\mu)=0$$
so the variance is in the denominator, as you expect. We can expand ${d\mu}/{d\beta}$:
$$\frac{d\eta}{d\beta}\frac{d\mu}{d\eta}\frac{1}{V(\mu)}(Y-\mu)=0$$
and $d\eta/d\beta$ is just $X^T$, so
$$X^T\frac{d\mu}{d\eta}\frac{1}{V(\mu)}(Y-\mu)=0$$
We want to define a new response variable $Z$ and weight variable $W$ so that the WLS equations
$$X^TW(Z-X\beta)=0$$
match the likelihood equations. This is done with
working response $Z= X\beta + (Y-\mu)\frac{d\eta}{d\mu}$, which is a first-order approximation to transforming $Y$ with the link function
working weights $W=(\frac{d\mu}{d\eta})^2\frac{1}{V(\mu)}$
Note that the variance is still in the denominator. However, for the so-called canonical link function for each distribution, it so happens that $d\mu/d\eta=V(\mu)$ and the working weights are equal to $V(\mu)^2V(\mu)^{-1}=V(\mu)$. That is, it looks as though the variance has been put in the numerator instead.
You can see the variance is really going in the denominator by looking at the IWLS algorithm for non-canonical links, such as the identity link for a binomial or Poisson model, where $d\mu/d\eta=1$.
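A quick R sketch of this (simulated data; the identity-link fit may need starting values): the working weights stored by glm() come out as $\mu$ for the canonical log link and $1/\mu$ for the identity link, exactly as above.
set.seed(1)
x <- rnorm(200)
y <- rpois(200, exp(0.5 + 0.3 * x))
fit_log <- glm(y ~ x, family = poisson(link = "log"))
fit_id  <- glm(y ~ x, family = poisson(link = "identity"), start = c(mean(y), 0))
range(fit_log$weights - fitted(fit_log))      # ~0: W = V(mu) = mu for the canonical link
range(fit_id$weights  - 1 / fitted(fit_id))   # ~0: W = (dmu/deta)^2 / V(mu) = 1/mu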
53,537 | Does it make sense to do PCA before a Tree-Boosting model? | In general, good features will improve the performance of any model, and should require fewer steps / result in faster convergence. One nice example of this is whether you want to use the distance from the hole for modeling the golf putting probability of success, or whether you design a new feature based on the geometry (hole size, ball size, tolerance for deviation from the optimal angle).
Whether PCA helps depends on whether resulting representation is useful for the modeling problem. That is not in general clear - e.g. if you are predicting some decision that happens in real-life and people use a threshold on a single one of your features, then dimension reducing with PCA can only hurt performance (probably not a situation in which you'd want to use Machine Learning, really). It might also be that some dimension reduction is desirable, but the "ideal" representation would use some kind of very non-linear transformation (as e.g. UMAP might be able to achieve).
My intuition would be that PCA might help the most when there's a huge number of initial features (or a really complex signal), especially if that includes slightly different, but correlated, versions of the same types of features.
Update: Without dimensionality reduction, here is an example where we can immediately predict that PCA will not help (first column of figures; orange = outcome 1, green = 0) and where PCA will help (second column; blue = 1, red = 0).
We should expect the scenario where xgboost (and other similar tree-based methods like random forest, LightGBM etc.) simply needs to learn a cut-off along a single dimension to be the best scenario for it, while the cases where it needs to learn a diagonal (or even more complex) boundary should be harder for it. In the first column of plots PCA actually takes us away from the right feature representation, while in the second column of plots it takes us to a better representation.
The R code for this is below, but in the first case we get
Column 1 without PCA: train-logloss:0.011828+0.000477 test-logloss:0.016520+0.008104 (best iteration after 185 iterations)
Column 1 with PCA: train-logloss:0.013675+0.000435 test-logloss:0.025791+0.011341 (best iteration after 313 iterations)
Column 2 without PCA: train-logloss:0.022771+0.000451 test-logloss:0.049508+0.009839 (best iteration after 467 iterations)
Column 2 with PCA: train-logloss:0.019837+0.000593 test-logloss:0.026960+0.009282 (best iteration after 131 iterations)
So, in one case we need fewer iterations and get better accuracy without PCA, while in the other case we need fewer iterations and get better accuracy when using the PCA features.
Of course, you could argue that you should just use both sets of features, but - again - there's no way of knowing that either leaving the features as is or using PCA will be the best way to transform the features. And sure, perhaps there's better hyperparameter choices that would minimize these differences. And sure, the CV-performance after early stopping might be a tiny bit biased, but the general pattern is clear enough that we do not need to do anything more complicated to see it, I think
library(tidyverse)
library(patchwork)
library(xgboost)
set.seed(1234)
simulated = tibble(`original x`=rnorm(5000, mean=0, sd=5),
`original y`=rnorm(5000, mean=0, sd=20),
x = `original x` * cos(2*pi/8) - `original y` * sin(2*pi/8),
y = `original x` * sin(2*pi/8) + `original y` * cos(2*pi/8),
outcome1 = (x+rnorm(5000,0,0.25)>5)*1L,
outcome2 = (`original x`+rnorm(5000,0,0.25)>5)*1L)
pca = prcomp(simulated %>% dplyr::select(x,y), center = TRUE,scale. = TRUE)
simulated = simulated %>%
mutate(`PCA x` = pca$x[,1],
`PCA y` = pca$x[,2])
p1 = simulated %>%
ggplot(aes(x=x, y=y, col=factor(outcome1))) +
geom_point(alpha=0.2) +
theme_bw(base_size=18) +
theme(legend.position="none") +
scale_color_brewer(palette="Dark2")
p2 = simulated %>%
ggplot(aes(x=`original x`, y=`original y`, col=factor(outcome1))) +
geom_point(alpha=0.2) +
theme_bw(base_size=18) +
theme(legend.position="none") +
scale_color_brewer(palette="Dark2")
p3 =simulated %>%
ggplot(aes(x=x, y=y, col=factor(outcome2))) +
geom_point(alpha=0.2) +
theme_bw(base_size=18) +
theme(legend.position="none") +
scale_color_brewer(palette="Set1")
p4 = simulated %>%
ggplot(aes(x=`original x`, y=`original y`, col=factor(outcome2))) +
geom_point(alpha=0.2) +
theme_bw(base_size=18) +
theme(legend.position="none") +
scale_color_brewer(palette="Set1")
p5 = simulated %>%
ggplot(aes(x=`PCA x`, y=`PCA y`, col=factor(outcome1))) +
geom_point(alpha=0.2) +
theme_bw(base_size=18) +
theme(legend.position="none") +
scale_color_brewer(palette="Dark2")
p6 =simulated %>%
ggplot(aes(x=`PCA x`, y=`PCA y`, col=factor(outcome2))) +
geom_point(alpha=0.2) +
theme_bw(base_size=18) +
theme(legend.position="none") +
scale_color_brewer(palette="Set1")
#(p2 + p4) /
(p1 + p3) / (p5 + p6)
xgb.cv(
params = list(booster="gbtree",
objective="binary:logistic",
eta=0.05,
max_depth=128,
min_child_weight=4,
subsample=0.65,
colsample_bytree=1),
data = data.matrix(simulated %>% dplyr::select(x,y)),
nrounds=1000,
nfold=10,
label = simulated$outcome1,
showsd = TRUE,
metrics = "logloss",
early_stopping_rounds = 20,
print_every_n=100)
xgb.cv(
params = list(booster="gbtree",
objective="binary:logistic",
eta=0.05,
max_depth=128,
min_child_weight=4,
subsample=0.65,
colsample_bytree=1),
data = data.matrix(simulated %>% dplyr::select(`PCA x`,`PCA y`)),
nrounds=1000,
nfold=10,
label = simulated$outcome1,
showsd = TRUE,
metrics = "logloss",
early_stopping_rounds = 20,
print_every_n=100)
xgb.cv(
params = list(booster="gbtree",
objective="binary:logistic",
eta=0.05,
max_depth=128,
min_child_weight=4,
subsample=0.65,
colsample_bytree=1),
data = data.matrix(simulated %>% dplyr::select(x,y)),
nrounds=1000,
nfold=10,
label = simulated$outcome2,
showsd = TRUE,
metrics = "logloss",
early_stopping_rounds = 20,
print_every_n=100)
xgb.cv(
params = list(booster="gbtree",
objective="binary:logistic",
eta=0.05,
max_depth=128,
min_child_weight=4,
subsample=0.65,
colsample_bytree=1),
data = data.matrix(simulated %>% dplyr::select(`PCA x`,`PCA y`)),
nrounds=1000,
nfold=10,
label = simulated$outcome2,
showsd = TRUE,
metrics = "logloss",
early_stopping_rounds = 20,
print_every_n=100)
53,538 | Is multivariate normal the only distribution with this property? | No, the bivariate normal is not the only distribution with the property that $E[X\mid Y=y]$ is a linear function of $y$ and also that $E[Y\mid X=x]$ is a linear function of $x$; many other distributions enjoy the same property.
For example, suppose that $(X,Y)$ is uniformly distributed on the triangle with vertices $(0,0), (1,1), (0,1)$ so that the joint density $f_{X,Y}(x,y)$ has value $2$ on the interior of the triangle. As motivation, note that this is the joint pdf of $\left(\min(U,V),\max(U,V)\right)$ where $U$ and $V$ are i.i.d. $\mathcal U(0,1)$ random variables. Now, notice that given $Y=y, y \in (0,1)$, the conditional distribution of $X$ is uniform on $(0,y)$ and so $E[X\mid Y=y] = y/2$ is a linear function of $y$. Similarly, given that $X=x, x \in (0,1)$, the conditional distribution of $Y$ is uniform on $(x,1)$ and so $E[Y\mid X=x] = \frac 12 + \frac x2$ is a linear function of $x$.
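A quick Monte Carlo sketch (illustrative only) confirming that both conditional means are linear:
set.seed(1)
u <- runif(1e6); v <- runif(1e6)
x <- pmin(u, v); y <- pmax(u, v)
mean(x[abs(y - 0.8) < 0.01])   # about 0.40 = y/2
mean(y[abs(x - 0.4) < 0.01])   # about 0.70 = 1/2 + x/2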
53,539 | Is multivariate normal the only distribution with this property? | No - it is not just a property of bivariate normals. For example
Let $A,B,C$ be i.i.d. with finite mean $\mu$. Then let $X=A+B$ and $Y=A+C$.
$E[A \mid X=x] =E[B \mid X=x] = \frac12 E[A+B \mid X=x]=\frac12 E[X \mid X=x]= \frac 12x$.
So $E[Y \mid X=x]=E[A \mid X=x] +E[C \mid X=x] = \frac 12x+\mu$ which is linear in $x$.
Similarly $E[X \mid Y=y]=E[A \mid Y=y] +E[B \mid Y=y] = \frac 12y+\mu$.
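As a sketch, this can be checked numerically with, say, i.i.d. Exponential(1) variables (so $\mu = 1$):
set.seed(1)
A <- rexp(1e6); B <- rexp(1e6); C <- rexp(1e6)
X <- A + B; Y <- A + C
mean(Y[abs(X - 3) < 0.01])   # about 0.5 * 3 + 1 = 2.5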
53,540 | Calculate single absolute standardized difference across levels of a categorical treatment variable cobalt::bal.tab | Author of cobalt here. What the reviewer is requesting doesn't really make a lot of sense. The bias in an effect estimate is a function of the mean difference of each level of the categorical variable. You could create a one-dimensional summary of balance for that categorical variable, e.g., as the maximum SMD for that variable, and then just mention the interpretation of that summary in the caption of your table. There isn't a way to do this automatically in cobalt, and it seems to me to lose important information.
I will note that there has been a value proposed as an equivalent to the SMD for categorical variables. This was proposed by Yang and Dalton (2012). It doesn't have an intuitive interpretation except vaguely as the equivalent of the SMD. It is calculated as the Mahalanobis distance between the two samples based on the categorical variable. A nice aspect of its interpretation is that for two-level variables, the formula reduces to the SMD. The bias in the effect estimate is not a function of this value, though, and it seems to me that it would be possible to have an extreme imbalance in one level of the variable that is masked by the other levels.
It is not in cobalt because the current framework wouldn't work with it (cobalt turns the supplied dataset into a numerical matrix, losing the relationship among levels of the categorical variable by turning them into dummy variables). There is another package that can be used to assess balance called tableone, which does compute this modified SMD for categorical variables. It also produces beautiful tables, much nicer than those from cobalt, the latter of which are mainly for use in balance assessment rather than reporting. You can see examples of such a table using the modified SMD on the package vignette.
This is such a minute detail for a reviewer to focus on. SMDs themselves are an arbitrary method to assess balance, so it's not clear to me why a specific method of computing a summary SMD should be preferred over another. Given that this value the reviewer wants you to compute is just a summary of the balance for each categorical variable, you could choose any summary and explain it in a caption. No one summary is superior to another. As long as you can convincingly demonstrate balance, you should be okay.
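A minimal sketch of the tableone usage mentioned above (the data frame is made up purely for illustration):
library(tableone)
dat <- data.frame(treat = rbinom(500, 1, 0.5),
                  race  = sample(c("A", "B", "C"), 500, replace = TRUE))
tab <- CreateTableOne(vars = "race", strata = "treat", data = dat, test = FALSE)
print(tab, smd = TRUE)   # one SMD per variable, using the multi-level definition for factors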
53,541 | Loss function in for gamma objective function in regression in XGBoost? | I'm not actually sure whether "gamma regression" is officially defined (it doesn't appear to have a wikipedia page for example), but if I were to define it (and some googling around suggests I'm not alone here), I would define it as setting up my regression problem so that for a given input vector $\underline{x}$, I predict a value y. When I do this, I meant that I think that the true value of the target will be gamma distributed with mean y.
How does this differ from least squares? One setup which leads to OLS is that you say that for given $\underline{x}$, the target variable will be normally distributed and your prediction $y(\underline{x})$ is the mean of that distribution. Of course a normal distribution is parametrised through its mean and variance, but it turns out that you don't need to know the variance in order to calculate the cost function you need to optimise, and thus this parameter doesn't need to be passed to xgboost.
For the gamma distribution however, this is different. Let's go through the maths. For the gamma distribution parametrised as $\frac{1}{\Gamma (k)\theta ^{k}}x^{k-1}e^{-\frac{x}{\theta}}$, the mean is given by $k\theta$ and the variance by $k \theta ^{2}$
Thus let's reparametrise in terms of $\mu$ and $\theta$ so that the distribution is given by $\frac{1}{\Gamma (\frac{\mu}{\theta})\theta ^{\frac{\mu}{\theta}}}x^{\frac{\mu}{\theta}-1}e^{-\frac{x}{\theta}}$
So for a given dataset, if you predict a bunch of $\hat{y}_{i}$ for target values $y_{i}$, the likelihood is given by
$\prod _{i=1}^{N} \frac{1}{\Gamma (\frac{\hat{y}_{i}}{\theta})\theta ^{\frac{\hat{y}_{i}}{\theta}}}y_{i}^{\frac{\hat{y}_{i}}{\theta} -1}e^{-\frac{y_{i}}{\theta}}$
and thus the negative (xgboost assumes a cost function you're trying to minimise) log-likelihood is
$\sum _{i=1}^{N} \ln \Gamma (\frac{\hat{y}_{i}}{\theta}) + \frac{\hat{y}_{i}}{\theta}\ln \theta - (\frac{\hat{y}_{i}}{\theta}-1) \ln y_{i} + \frac{y_{i}}{\theta}$
Compare this to the Gaussian Regression case where the negative log likelihood is given by
$\frac{1}{\sigma ^{2}}\sum _{i=1}^{N} \left(y_{i} - \hat{y}_{i}\right)^{2}$
In the latter case, the $\frac{1}{\sigma ^{2}}$ is a constant term out the front. If you were doing linear regression or even xgboost without regularisation, this would mean that no matter what value you changed $\sigma$ to, the linear regressor/xgboost you trained would turn out to be exactly the same, so "Gaussian regression with $\sigma = 10$ and Gaussian regression with $\sigma = 1$ lead to the same predictions". This is no longer true when you have a regulariser, but you can always suck the value of $\sigma$ into the definition of the regulariser to get around this, and this is why the OLS formula never includes a $\sigma$ in it.
In the gamma case however, because of the $\theta$ factor contained in the $\Gamma$ function and the $\ln \theta$, you can't just pull the factor of $\theta$ outside of the summation.
For xgboost, you now need to pass it the elementwise first and second derivatives of the cost function wrt $\hat{y}_{i}$. This is where basic calculus doesn't get you all the way; you'll likely need to look up that the derivative of the logarithm of the gamma function is given by the digamma function $\psi (z)$.
The (elementwise) first derivative of the loss will be given by (by the xgboost definition which is $G_{i}=\frac{\partial L}{\partial \hat{y}_{i}}$):
$G_{i} = \frac{1}{\theta}\psi (\frac{\hat{y}_{i}}{\theta}) + \frac{1}{\theta}\ln \theta - \frac{1}{\theta}\ln y_{i} $
The second derivative will require derivatives of the digamma function, I don't know much about this but some googling tells me you need the trigamma function $\psi _{1}(z)$ which is the derivative of the digamma function, thus
$H_{i}=\frac{1}{\theta ^{2}}\psi_{1}(\frac{\hat{y}_{i}}{\theta})$
Again, note that you still have to supply $\theta$ up front as a hyperparameter, pass this to xgboost and then train a new xgboost model every time you wish to investigate another $\theta$
Finally, it's worth noting that I just did this derivation myself today; I haven't lifted anything other than the definition of the gamma distribution from elsewhere, so there could easily be a minor algebra error. I'd feel more comfortable if somebody else independently verified my workings.
Edit: Alternately, you could parametrise the other way around:
You could use k as your free parameter and $\theta = \frac{\mu}{k}$, thus your gamma distribution is $\frac{1}{\Gamma(k)(\frac{\mu}{k})^{k}}x^{k-1}e^{-\frac{xk}{\mu}}$
and thus your negative log-likelihood is given by
$L = \sum_{i=1}^{N}\left[\ln \Gamma (k) + k\ln \hat{y}_{i} - k\ln k - (k-1)\ln y_{i} + \frac{y_{i}k}{\hat{y}_{i}}\right]$
this parametrisation is easier to differentiate, you get
$G_{i}=\frac{k}{\hat{y}_{i}} - \frac{y_{i}k}{\hat{y}_{i}^{2}}$
and
$H_{i}=-\frac{k}{\hat{y}_{i}^{2}}+ 2\frac{y_{i}k}{\hat{y}_{i}^{3}}$
but you still need to pass k as a hyperparameter
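A sketch of how these $G_i$ and $H_i$ could be passed to xgboost as a custom objective in R (k is fixed up front as a hyperparameter; with this parametrisation the raw predictions must stay positive):
library(xgboost)
k <- 2
gamma_obj <- function(preds, dtrain) {
  y <- getinfo(dtrain, "label")
  grad <- k / preds - y * k / preds^2
  hess <- -k / preds^2 + 2 * y * k / preds^3
  list(grad = grad, hess = hess)
}
# fit <- xgb.train(params = list(base_score = 1), data = dtrain, nrounds = 100, obj = gamma_obj)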
53,542 | Loss function in for gamma objective function in regression in XGBoost? | Thanks everybody for the contributions! I am late to the party, but I wanted to add one more point regarding the log-link function, which was to me still unclear.
I take the formula for the Gamma distribution from the bottom of gazza89's answer:
$$
\frac{1}{\Gamma(k)(\frac{\mu}{k})^k}x^{k-1}e^{-\frac{xk}{\mu}}
$$
Using the logarithm as link function amounts to substituting $\mu$ with $e^\mu$ in this formula:
$$
\frac{1}{\Gamma(k)(\frac{e^\mu}{k})^k}x^{k-1}e^{-\frac{xk}{e^\mu}}
$$
We can then compute the negative log likelihood, which looks like:
$$
L = \sum_{i=1}^N\left[\log\Gamma(k) + k\hat y -k\log k-(k-1)\log y +yke^{-\hat y} \right]
$$
The gradient is then:
$$
G_i = k(1 - ye^{-\hat y})
$$
And the Hessian:
$$
H_i = yke^{-\hat y}
$$
These are the formulas provided in the source code (link):
_out_gpair[_idx] = GradientPair((1 - y / expf(p)) * w, y / expf(p) * w);
with the notation change $k=w$ and $\hat y = p$.
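Written as a custom R objective (taking $w = 1$), the quoted line corresponds to:
library(xgboost)
gamma_nloglik_obj <- function(preds, dtrain) {
  y <- getinfo(dtrain, "label")
  grad <- 1 - y / exp(preds)   # (1 - y / expf(p)) * w
  hess <- y / exp(preds)       #  y / expf(p) * w
  list(grad = grad, hess = hess)
}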
53,543 | Testing Hypothesis with different alternatives | Doing it right the first time is best.
First, in practice, this should be an unlikely situation.
Maybe you have re-engineered a pharmaceutical process hoping
that the new process has a higher yield than the current one
with $\mu_0 = 100,$ so you'd take data from runs of the new process,
average them and test $H_0: \mu= 100$ vs. $H_a: \mu > 100.$
Maybe your town has changed the widths of lanes and the
sequencing of traffic lights on its main road hoping that the
average late-afternoon travel time on the main
stretch is reduced from the former $\mu_0 = 20$ min. Then
you'd get travel times under the new configuration to test
$H_0: \mu = 20$ vs $H_a: \mu < 20.$
Maybe your old supplier whose product had 200mg of active
ingredient per bottle has gone out of business and you are
checking to see whether the amount of active ingredient from a former competing
supplier is the same as for the old supplier. Then you'd
test $H_0: \mu = 200$ vs $H_a: \mu \ne 200,$ based on the average
of $n$ randomly selected bottles from the prospective new supplier.
So ordinarily you would test one of the three kinds of tests
and act according to the results of the test. One hopes you would have done
a 'power and sample size' computation beforehand so you'd take
a large enough sample $n$ in order to have a good chance (say 90%)
of rejecting if there is a meaningful difference from $\mu_0.$
Then you would likely take the result of the one test as sufficiently
good evidence to act upon.
But best-laid plans don't always work out. However, as a direct answer to your question, let's suppose you
have taken data to test $H_0: \mu = 100$ vs. $H_1: \mu < 100$
at the 5% level, and cannot reject. Here are data simulated in R
that would give such a result.
set.seed(806)
x = rnorm(10, 98, 15)
t.test(x, mu=100, alt="less")
One Sample t-test
data: x
t = -0.69053, df = 9, p-value = 0.2536
alternative hypothesis: true mean is less than 100
95 percent confidence interval:
-Inf 104.6308
sample estimates:
mean of x
97.20135
The mean of my $n=10$ observations is $\bar X = 97.2,$ which is
below the hypothetical mean $\mu = 100,$ but not enough smaller
to be considered statistically significant. Maybe we put the wrong
assumptions into our power computation so we didn't use a large
enough $n.$ In this case, there's no use testing $H_0: \mu = 100$ vs. $H_1: \mu > 100$ because $\bar X < 100$ could never lead to rejection.
But what do we do if we guessed completely wrong and got data such as those
in the simulation below?
set.seed(806)
x = rnorm(10, 110, 15)
t.test(x, mu=100, alt="less")
One Sample t-test
data: x
t = 2.2703, df = 9, p-value = 0.9753
alternative hypothesis: true mean is less than 100
95 percent confidence interval:
-Inf 116.6308
sample estimates:
mean of x
109.2014
Of course, we can't reject in favor of $H_a: \mu < 100$ based
on a sample mean $\bar X = 109.2.$ Then we might be tempted to
try testing $H_0: \mu = 100$ vs. $H_1: \mu > 100.$ [In R, the notation
p.val gives just the P-value of the test, not the full printout.]
t.test(x, mu=100, alt="gr")$p.val
[1] 0.02466914
So we could have rejected a test of $H_0: \mu = 100$ vs. $H_1: \mu > 100$
at the 5% level because the P-value $0.025 < 0.05 = 5\%.$ Doing multiple
tests on the same data is always dangerous. If we try enough different
things, we might accidentally get a rejection on one of our tries--just by chance. (The result would be a 'false discovery'.)
Rejecting at the 2% level isn't a really strong result, but if it is really important to resolve the true value of $\mu,$ then we might consider getting fresh data and doing the right test
the second time. Or maybe making a two-sided 95% confidence interval
to get a good guess at the actual value of $\mu$ in order to plan our
course of action.
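For instance, the 'power and sample size' computation mentioned above might look like this sketch (detect a 5-unit drop when $\sigma \approx 15$, with 90% power at the 5% level):
power.t.test(delta = 5, sd = 15, power = 0.90, sig.level = 0.05,
             type = "one.sample", alternative = "one.sided")
# n comes out at roughly 78 observations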
53,544 | Testing Hypothesis with different alternatives | Imagine testing $\mu=0$. You do your calculations and find that $\bar{x}=99$ and your z-statistic (or t-stat) is 123.
I would have serious doubts about hypothesis 1 and very much believe hypothesis 2.
53,545 | Testing Hypothesis with different alternatives | The research question is king. The job of the hypothesis test is to answer the research question. The job of the data (and statistics) is to help you perform the hypothesis test.
I think you are somewhat confused about how to set up your hypotheses. Your hypotheses should be formed from your research question and before you even look at your data! In particular, let's consider your statement
"Let's suppose you are not allowed to use $H_1: \mu \neq \mu_0$ here."
There is no such circumstance! You are always allowed to specify any valid alternative hypothesis you like, and this $H_1$ is a valid alternative hypothesis except in the pathological case where $\mu$ can only possibly take the single value $\mu_0$.
If you are struggling to work out which alternative hypothesis to use, it is easiest to write out what you are trying to test in words and then try to put it into algebra. Dave gives some examples of this in his answer, but the question will fall into one of three categories:
(A) You want to see if the mean is above some threshold or not. If the mean is $\leq$ the threshold you don't really care whether it is equal or lower. (Maybe you are seeing if a new, expensive drug is more effective than an existing cheap drug. If it isn't more effective then you don't care whether it is equally or less effective, you won't pursue it further because it is expensive.)
(B) You want to see if the mean is below some threshold or not. If the mean is $\geq$ the threshold you don't really care whether it is equal or higher. This is just the reverse of (A).
(C) You want to see if the mean is different from some value or not. (was Don Bradman a "100-runs-per-innings" batsman?)
All of these are legitimate research questions. A and B translate to one-sided hypothesis tests, C translates to a two-sided test. However, we could have formulated research questions which asked whether the two drugs had the same efficacy as each other (this is important for regulatory reasons in some cases), or whether Don Bradman's batting skill was over 100 runs per innings. Those would have led to different alternative hypotheses.
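As a concrete sketch (toy data of my own, not from the answer), the three formulations correspond to the alternative argument of t.test in R:
y = c(101, 99, 104, 98, 103)                  # hypothetical measurements
t.test(y, mu=100, alternative="greater")      # (A): is the mean above 100?
t.test(y, mu=100, alternative="less")         # (B): is the mean below 100?
t.test(y, mu=100, alternative="two.sided")    # (C): does the mean differ from 100?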
53,546 | Testing Hypothesis with different alternatives | Yes, it is possible that test 1 fails to reject $H_0$ and testing 2 rejects $H_0$. (Consider for example a t-test with level 1% on $n = 9$ datapoints where $s^2 = 1$, $\overline{X} = \mu_0 + 3$ ).
In such a case, you have to... wonder what is the relevant alternative, and this depends on what you want to test.
Keep in mind that one only rejects the null hypothesis in favor of the alternative. And one can only have a positive conclusion in case of a rejection of $H_0$. Not rejecting $H_0$ does not allow you to accept it.
So in the case you described, you cannot conclude that $\mu < \mu_0$ (since testing 1 didn't reject $H_0$) but you can conclude that $\mu > \mu_0$ (since testing 2 did reject $H_0$). If you just want to know if $\mu = \mu_0$ then use the alternative $\mu \neq \mu_0$.
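To check the numerical example in parentheses above: with $n = 9$, $s^2 = 1$ and $\overline{X} = \mu_0 + 3$, the t statistic is $3/(1/\sqrt{9}) = 9$ on 8 degrees of freedom, far beyond the one-sided 1% critical value. A quick sketch in R:
qt(0.99, df=8)                  # one-sided 1% critical value, about 2.90
pt(9, df=8, lower.tail=FALSE)   # P-value against the 'greater' alternative, tiny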
53,547 | How are artificially balanced datasets corrected for? | I have practical experience with training classifiers from imbalanced training sets. There are problems with this. Basically, the variances of the parameters associated with the less frequent classes - these variances grow large. The more uneven the prior distribution is in the training set, the more volatile your classifier outcomes become.
My best practice solution - which works well for probabilistic classifiers - is to train from a completely balanced training set. This means that you have about equally many examples of each class or category. The classifier trained on a balanced training set must afterwards be calibrated to the correct distribution in the application domain, in your case a clinical setting. That is - you need to incorporate the skewed real-world prior distribution into the outcome probabilities of your classifier.
The following formula does precisely this by correcting for the lack of skewness in the training set:
$
\begin{split}
&P_{corrected}(class=j \mid {\bf x}) = \\
&\frac{\frac{P_{corrected}(class=j)}{P_{balanced}(class=j)}\; P_{balanced}(class=j \mid {\bf x})}{\frac{P_{corrected}(class=j)}{P_{balanced}(class=j)}\; P_{balanced}(class=j \mid {\bf x}) + \frac{1-P_{corrected}(class=j)}{1-P_{balanced}(class=j)}\; \left(1- P_{balanced}(class=j \mid {\bf x}) \right) }
\end{split}
$
In the above formula, the following terms are used:
$P_{balanced}(class=j)$ the prior probability that outcome $j$ occurs in your balanced training set, e.g. probability of 'No-Tumor', which would be around $0.5$ in a two-class situation, around $0.33$ in a three-class classification domain, etc.
$P_{corrected}(class=j)$ the prior probability that outcome $j$ occurs in your real-world domain, e.g. true probability of 'Tumor' in your clinical setting
$P_{balanced}(class=j \mid {\bf x})$ is the outcome probability (the posterior probability) of your classifier trained with the balanced training set.
$P_{corrected}(class=j \mid {\bf x})$ is the outcome probability (the posterior probability) of your classifier correctly adjusted to the clinical setting.
Example
Correct posterior probability from classifier trained on a balanced training set to domain-applicable posterior probability. We convert to a situation where 'cancer' occurs in only 1% of the images presented to our classifier software:
$
\begin{split}
&P_{corrected}(cancer \mid {\bf x}) = \\
&\frac{\frac{0.01}{0.5}\; 0.81} {\frac{0.01}{0.5}\; 0.81 + \frac{1-0.01}{1-0.5}\; \left(1- 0.81 \right) } \\
&=0.04128
\end{split}
$
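As a sketch, the same correction can be wrapped in a small R function (the function name and arguments are mine, not part of the original answer); it reproduces the worked example above:
correct_posterior = function(p_bal, prior_bal, prior_corr) {
  # p_bal: posterior from the classifier trained on the balanced set
  # prior_bal: class prior in the balanced training set (e.g. 0.5)
  # prior_corr: true class prior in the application domain
  num = (prior_corr / prior_bal) * p_bal
  den = num + ((1 - prior_corr) / (1 - prior_bal)) * (1 - p_bal)
  num / den
}
correct_posterior(0.81, 0.5, 0.01)   # about 0.04128, as in the example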
Derivation of correction formula
We use a capital $P$ to denote a probability (prior or posterior) and a small letter $p$ to indicate a probability density. In image processing, the pixel values are usually assumed to approximately follow a continuous distribution. Hence, the Bayes classifier is calculated using probability densities.
Bayes formula (for any probabilistic classifier)
$
P(class=j \mid {\bf x}) = \frac{P(class=j) \; p({\bf x} \; \mid \; class=j)}
{P(class=j) \; p({\bf x} \; \mid \; class=j) + P(class \neq j) \; p({\bf x} \; \mid \; class \neq j)}
$
where the 'other' classes than $j$ are grouped altogether ($class \neq j$).
From Bayes general formula follows, after rearrangement
$
p({\bf x} \mid class=j) = \frac{P(class=j \; \mid \; {\bf x}) \; p({\bf x})}
{P(class=j)}
$
where $p({\bf x})$ is the joint probability density of ${\bf x}$ over all classes (sum over all conditional densities, each multiplied with the relevant prior).
We now calculate the corrected posterior probability (with a prime) from Bayes formula
$
\begin{split}
&P'(class=j \; \mid \; {\bf x}) = \\
&\; \; \; \; \frac{P'(class=j) \; \frac{P(class=j \; \mid \; {\bf x}) \; p({\bf x})}
{P(class=j)}
}{
P'(class=j) \; \frac{P(class=j \; \mid \; {\bf x})\; p({\bf x})}
{P(class=j) } +
P'(class \neq j) \; \frac{ P(class \neq j \; \mid \; {\bf x}) \; p({\bf x})}
{P(class \neq j)}}
\end{split}
$
where $P'(class=j)$ is the prior in the skewed setting (i.e. corrected) and $P'(class=j \; \mid \; {\bf x})$ the corrected posterior. The smaller fractions in the equation above are actually the conditional densities $p({\bf x} \mid class=j)$ and $p({\bf x} \mid class \neq j)$.
The equation simplifies to the following
$
\begin{split}
&P'(class=j \mid {\bf x}) = \\
&\; \; \; \; \frac{\frac{P'(class=j)}{P(class=j)} \; P(class=j \; \mid \; {\bf x})}
{\frac{P'(class=j)}{P(class=j)} \; P(class=j \; \mid \; {\bf x}) +
\frac{P'(class \neq j)}{P(class \neq j)} \; P(class \neq j \; \mid \; {\bf x})}
\end{split}
$
Q.E.D.
This correction formula applies to $2, 3, \ldots, n$ classes.
Application
You can apply this formula to probabilities from discriminant analysis, sigmoid feed-forward neural networks, and probabilistic random forest classifiers. Basically each type of classifier that produces posterior probability estimates can be adapted to any uneven prior distribution after successful training.
A final word on training. Many learning algorithms have difficulties with training well from uneven training sets. This certainly holds for back-propagation applied to multi-layer perceptrons.
53,548 | How are artificially balanced datasets corrected for? | With fewer equations: Ideally, to make a decision, we need to know the probability that the input vector $x$ belongs to class $i$, using Bayes rule,
$p_t(C_i|x) = \frac{p_t(x|C_i)p_t(C_i)}{p_t(X)}$
where the $t$ subscript represents the conditions given in the training set. Now if the training set is representative of operational conditions, then the output of the classifier will be a good estimate of the probability of class membership in operational conditions as well, i.e. $P_t(C_i|x) \approx P_o(C_i|x)$.
But what if this is not the case? Say we have re-balanced the data set so that the classes are each represented by the same number of examples, but this was done in a way that did not affect the likelihoods, $P_t(x|C_i)$. In this case all we need to do is to multiply by the ratio of the operational and training set prior probabilities, to give un-normalised operational class probabilities,
$q_o(C_i|x) = p_t(x|C_i)p_t(C_i)\times\frac{p_o(C_i)}{p_t(C_i)} = p_t(x|C_i)p_o(C_i) \approx p_o(x|C_i)p_o(C_i)$
The $o$ subscript indicates the operational conditions. We can then just re-normalise these probabilities so we have the probabilities of class membership calibrated for operational conditions,
$p_o(C_i|x) = \frac{q_o(C_i|x)}{\sum_{j}q_o(C_j|x)}$
If you have information about misclassification costs, these can also be factored in in a similar manner.
So basically divide by the training set prior probability to "cancel" it from Bayes rule and multiply by the operational prior probability to "insert" it into Bayes rule, but that will mess up the normalisation constant on the denominator, so re-normalise so that all the probabilities sum to one.
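A minimal R sketch of this reweight-and-renormalise step (the numbers are made-up illustrations, not from the answer):
p_train = c(0.70, 0.20, 0.10)    # classifier outputs under the training-set priors
prior_t = c(1, 1, 1) / 3         # training-set class priors (balanced)
prior_o = c(0.90, 0.07, 0.03)    # operational (real-world) class priors
q = p_train * prior_o / prior_t  # un-normalised operational class probabilities
q / sum(q)                       # re-normalise so the probabilities sum to one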
53,549 | How are artificially balanced datasets corrected for? | The accepted answer from Match Maker EE seems right, but because I've had a hard time following the step from $P(class = j | \mathbf{x})$ to $P'(class=j|\mathbf{x})$, I've decided to write my own derivation. The notation used below is more Bishop-like.
Firstly, let's state that our new (balanced) dataset was created by random sampling (from the original one) that was purely independent of $\mathbf{x}$, but is not independent of $C_k$ ($C_k$ -> being in class $k$). We mark the event of (randomly) selecting a sample from the original dataset as $S$. Note that, in general, we select samples with different probabilities in each class, thus $p(S|C_1)$ might differ from $p(S|C_2)$.
From the definition of sampling we know that
\begin{equation}
\tag{1}\label{eq:independence}
p(\mathbf{x},S) = p(\mathbf{x})p(S)\,.
\end{equation}
See that conditional independence
\begin{equation}
p(\mathbf{x}|C_k, S) = p(\mathbf{x}|C_k)
\end{equation}
also holds, because when sampling at the subspace of class $C_k$ we select each sample with the same probability; thus, the information that we are operating on the subspace of selected samples of class $k$ does not give additional information. From the conditional independence we have
\begin{equation}
p(\mathbf{x},S|C_k) = p(\mathbf{x}|C_k)p(S|C_k)
\end{equation}
so
\begin{equation}
\tag{2}\label{eq:transition_to_sampled}
p(\mathbf{x}|C_k) = \frac{p(\mathbf{x},S|C_k)}{p(S|C_k)}
\end{equation}
We intend to get $p(C_j|\mathbf{x})$ from the $p(C_j|\mathbf{x}, S)$ that we get from the model trained on our newly created (balanced) dataset. With repeated use of Bayes' rule and equations \ref{eq:independence} and \ref{eq:transition_to_sampled} we get:
\begin{equation}
p(C_j|\mathbf{x})
= \frac{p(\mathbf{x}|C_j)p(C_j)}{p(\mathbf{x})}
= \frac{\frac{p(\mathbf{x},S|C_j)}{p(S|C_j)}p(C_j)}{\sum_k{p(\mathbf{x}|C_k)p(C_k)}}
= \frac{\frac{p(\mathbf{x},S|C_j)}{p(S|C_j)}p(C_j)}{\sum_k{\frac{p(\mathbf{x},S|C_k)}{p(S|C_k)}p(C_k)}}
= \frac{\frac{\frac{p(C_j|\mathbf{x},S)p(\mathbf{x},S)}{p(C_j)}}{p(S|C_j)}p(C_j)}{\sum_k{\frac{\frac{p(C_k|\mathbf{x},S)p(\mathbf{x},S)}{p(C_k)}}{p(S|C_k)}p(C_k)}}
= \frac{\frac{p(C_j|\mathbf{x},S)p(\mathbf{x},S)}{p(S|C_j)}}{\sum_k{\frac{p(C_k|\mathbf{x},S)p(\mathbf{x},S)}{p(S|C_k)}}}
= \frac{\frac{p(C_j|\mathbf{x},S)p(\mathbf{x})p(S)}{\frac{p(C_j|S)p(S)}{p(C_j)}}}{\sum_k{\frac{p(C_k|\mathbf{x},S)p(\mathbf{x})p(S)}{\frac{p(C_k|S)p(S)}{p(C_k)}}}}
= \frac{p(C_j|\mathbf{x},S)\frac{p(C_j)}{p(C_j|S)}}{\sum_k{p(C_k|\mathbf{x},S)\frac{p(C_k)}{p(C_k|S)}}} \, .
\end{equation}
Here $p(C_k|S)$ is the new prior for class $k$ after balancing our dataset (0.5 in the binary case), or in general after the chosen random subsampling.
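A small discrete sanity check of this result in R (the numbers are my own, chosen so every quantity can be computed exactly):
p_C   = c(0.9, 0.1)           # original class priors p(C_k)
p_x1  = c(0.2, 0.7)           # p(x = 1 | C_k)
post_true = p_C * p_x1 / sum(p_C * p_x1)     # p(C_k | x = 1), about (0.72, 0.28)
p_C_S = c(0.5, 0.5)           # priors after balanced subsampling, p(C_k | S)
post_S = p_C_S * p_x1 / sum(p_C_S * p_x1)    # p(C_k | x = 1, S)
w = p_C / p_C_S               # weights p(C_k) / p(C_k | S) from the formula
post_S * w / sum(post_S * w)  # recovers post_true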
53,550 | How are artificially balanced datasets corrected for? | I know this is a late reply and you probably do not need this answer, however I believe that I can add valuable information for future Pattern Recognition and Machine Learning by Christopher Bishop readers.
Answers from Match Maker EE and Dikran Marsupial provide good explanations of the logic behind the rebalancing formula, so I won't go into details on that logic.
I will instead explain what the author was intending to convey, since that was your question.
The author was explaining three ways of predicting the correct class for an input x. One of these ways is to obtain the conditional posterior class probabilities.
The advantage of solving for the posterior class probability is that we are now able to adjust for the balancing applied to our data set, using (from Match Maker EE's answer):
$$P'(class=j \mid x)=\frac{\frac{P'(class=j)}{P(class=j)}P(class=j \mid x)}{\frac{P'(class=j)}{P(class=j)}P(class=j \mid x)+\frac{P'(class \neq j)}{P(class \neq j)}P(class \neq j \mid x)}$$
This works because P(class = j | x), P(class = j) and P'(class = j) are all known:
P(class = j |x) was solved originally.
P(class = j) is the fraction of the original data set that lies in class j.
P'(class = j) is the balancing that we want to apply on the data set.
In short, the author wanted to convey the advantage of getting the different conditional posterior class probabilities.
Hope this is helpful.
53,551 | How to multiply a likelihood by a prior? | Perhaps the multiplication of 'prior' by 'likelihood' to obtain 'posterior' will be clearer if we make a careful comparison of (a) a familiar elementary application of Bayes' Theorem
for a finite partition with (b) the use of a continuous version of Bayes' Theorem
for inference on a parameter.
Bayes' Theorem with a finite partition. Let's begin with a Bayesian problem based on a finite partition. Your factory makes widgets and has $K$
machines: $A_1, A_2, \dots, A_K.$ Every widget is made by exactly one of these
machines, so the $K$ machines can be viewed as a finite partition.
(a) The machines run at various speeds. The $j$th machine makes the (prior) proportion
$P(A_j)$ of widgets, $j = 1,2,\dots K,$ where $\sum_j P(A_j)=1.$
(b) Machines are of varying quality. The likelihood of a defective
widget from machine $A_i,$ is $P(D|A_i).$
(c) If we observe that a widget randomly chosen from the warehouse
is defective, then the (posterior) probability that widget was made
by machine $A_j$ is
$$P(A_j | D) = P(A_jD)/P(D) = P(A_j)P(D|A_j)/C$$
where $C = P(D) = \sum_i P(A_iD) = \sum_i P(A_i)P(D|A_i).$
We can say that the expression on the right in the displayed equation
is the product of the prior probabilities and likelihood, divided by a constant.
Here the likelihood is based on data, the observation that the widget from
the warehouse is defective. Thus, suppressing the constant, we could say that the posterior distribution is proportional to the product of the prior distribution and the likelihood, and write $P(A_i|D) \propto P(A_i) \times P(D|A_i).$
However, in discrete Bayesian applications,
it is unusual to suppress the constant---because it is an easily computed sum
and because it is needed to get numerical results.
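For instance (with made-up numbers of my own), the whole computation for three machines is a few lines of R:
prior  = c(0.5, 0.3, 0.2)     # P(A_j): share of widgets made by each machine
defect = c(0.01, 0.02, 0.05)  # P(D | A_j): defect rate of each machine
C = sum(prior * defect)       # P(D), the normalizing constant
prior * defect / C            # P(A_j | D): posterior probabilities for the machines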
Continuous Bayesian situation. Suppose you want to get an interval estimate of a binomial Success probability $\theta,$ where $0 < \theta < 1.$
(a) You have a prior distribution on $\theta,$ which is viewed as a random variable.
Say that the density function
$$f(\theta) = \frac{\Gamma(330+270)}{\Gamma(330)\Gamma(270)}\theta^{330-1}(1-\theta)^{270-1},$$
for $0 < \theta < 1,$ is that of $\mathsf{Beta}(330, 270).$
We use a beta prior distribution because it has support $(0,1)$ and we choose
this particular beta distribution because it puts 95% of its probability in
the interval $(0.51, 0.59),$ which matches our prior opinion that $\theta$ is slightly above $1/2.$ (Other similar beta distributions might have been chosen, but this one seems about right.) In R:
diff(pbeta(c(.51,.59),330,270))
[1] 0.9513758
(b) Then we do an experiment (perhaps, take a poll or test for prevalence of a disease), in which we observe $x = 620$ 'Successes' within $n = 1000$ trials. So the binomial likelihood function is based on a binomial PDF viewed as a function of $\theta,$ denoted
$$g(x|\theta) = {1000 \choose 620}\theta^{620}(1-\theta)^{n-620}.$$
(c) The 'continuous' version of Bayes' Theorem can be stated as follows:
$$h(\theta|x) = \frac{f(\theta)g(x|\theta)}{\int f(\theta)g(x|\theta)\, d\theta}
= \frac{f(\theta)g(x|\theta)}{C} \propto f(\theta) \times g(x|\theta).$$
This is often summarized as $\mathrm{POSTERIOR}\propto \mathrm{PRIOR}\times\mathrm{LIKELIHOOD}.$ (The symbol $\propto$ is read as
"proportional to".)
In the current particular application, we can avoid evaluating the integral $C$ because the beta prior distribution is 'conjugate to' (mathematically compatible with) the binomial likelihood. This makes it possible to recognize the right hand side of the last
displayed equation as
$$h(\theta|x) = f(\theta)g(x|\theta) \propto \theta^{330+620-1}(1-\theta)^{270+(1000-620)-1}\\ = \theta^{950-1}(1-\theta)^{650-1},$$
which is proportional to the density function of $\mathsf{Beta}(950,650).$
Of course, the integral can be evaluated by analytic or computational means, but
it is convenient when we don't need to evaluate the constant $C.$
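A quick grid approximation in R (a sketch of the computational route, not part of the original answer) confirms that the normalized product of prior and likelihood matches the $\mathsf{Beta}(950,650)$ density:
th = seq(0.001, 0.999, by=0.001)
unnorm = dbeta(th, 330, 270) * dbinom(620, 1000, th)  # prior times likelihood
post = unnorm / sum(unnorm * 0.001)                   # numerical normalization
max(abs(post - dbeta(th, 950, 650)))                  # small; grid error only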
Finally, we can say that a 95% Bayesian posterior probability interval (also called 'credible interval') is $(0.570, 0.618).$ Specific endpoints
of this interval are influenced both by the prior distribution and (somewhat more strongly) by the data from our experiment.
qbeta(c(.025,.975), 950,650)
[1] 0.5695848 0.6176932
If we had used the 'non-informative' Jeffreys' prior $\mathsf{Beta}(.5,.5),$ then
the 95% posterior interval estimate from our experiment would have been $(0.590, 0.650).$
qbeta(c(.025,.975), 620.5, 380.5)
[1] 0.5896044 0.6497021
53,552 | How to multiply a likelihood by a prior? | Bruce's answer is correct if—and only if—the prior and the likelihood contain no overlapping information. When that is true, Bayesian evidence combination is done by pointwise product of densities in the continuous case, the pointwise product of masses in the discrete case, etc. This is called product of experts by Geoff Hinton.
However, there can often be overlapping information. For example, it's very common to do Bayesian evidence combination with exponential families. The carrier measure encodes prior information about the parametrization of the support. It would be wrong to use product of experts with exponential families that have nonzero carrier measure since that will double-count the carrier measure. And anyway, the product of experts of such a distribution family may not even be within the exponential family. Luckily, Bayesian evidence combination without double-counting the carrier measure is equivalent to adding natural parameters.
In general, the posterior is proportional to the prior times the likelihood divided by the overlapping information.
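To illustrate the pointwise-product ('product of experts') case, here is a toy check of my own with two Gaussian densities (not from the original answer):
x  = seq(-3, 5, by=0.01)
f1 = dnorm(x, mean=0, sd=1)                 # first 'expert'
f2 = dnorm(x, mean=2, sd=0.5)               # second 'expert'
prod_norm = f1 * f2 / sum(f1 * f2 * 0.01)   # normalized pointwise product
# the precision-weighted combination has mean 1.6 and sd sqrt(0.2)
max(abs(prod_norm - dnorm(x, 1.6, sqrt(0.2))))   # small; grid error only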
53,553 | Green poop, leafy greens and probability of disease, how can I formalize this reasoning? | I use the following binary variables:
Poop is green: G
Am sick: D
Ate leafy greens: L
First, let's see how you can reach $P(D=1|G=1) = 0.8$. While you "knew" that you had eaten leafy greens and that it could cause green poop, when you thought about it first, you only considered a disease as a potential cause. That is, you only had the probabilistic graph D -> G in mind, meaning $P(D,G) = P(D)P(G|D)$. For example, $P(D=1) = 0.1$ (you felt fine other than the poop), and $P(G=1|D=1)$ is also low (you know of very few diseases that cause green poop), therefore $P(D=1,G=1)$ is pretty low. So how come you have $P(D=1|G=1)=0.8$? The alternative $P(D=0,G=1)$ is even lower: yes, $P(D=0)=0.9$ is high, yet having green poop while not being sick is extremely unlikely (because most days I am fine and my poop is not green)! You can check that by fixing actual probabilities.
Now when you learn or are reminded about leafy greens on the internet, you update your graph and add a potential cause "leafy greens". Formally, $P(D,G,L) = P(L) P(D) P(G|D,L)$. Now, because $P(L)=1$ (I know for sure I ate greens yesterday) and $P(G=1|D=d,L=1)$ for any $d$ is high: that's what I was "reminded" about on the internet: sick or not, leafy greens cause green poop.
By Bayes' rule, $P(D|G,L) \propto P(D) P(L) P(G|D,L)$ and by fixing concrete probabilities you will find a low probability of disease thanks to the high $P(G=1|D=d,L=1)$.
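A sketch with made-up numbers (mine, not the answer's) makes the two graphs concrete in R:
pD = 0.1                                  # prior probability of disease
# Model 1 (disease as the only cause): P(G=1|D=1) = 0.5, P(G=1|D=0) = 0.01
pD * 0.5 / (pD * 0.5 + (1 - pD) * 0.01)   # about 0.85: green poop suggests disease
# Model 2 (leafy greens added, L=1): P(G=1|D=d, L=1) = 0.9 for either d
pD * 0.9 / (pD * 0.9 + (1 - pD) * 0.9)    # exactly 0.1: back to the prior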
That's an instance of explaining away: in the V-shaped graph, when you fix the value of the effect (G), the two causes are now dependent (D and L are dependent given G). The observation that one of the causes is present will decrease the probability of the other (in our case, drastically) and vice versa: if one cause is not present, the probability of the other cause will go up (in our case, if you had not eaten leafy greens you would still think you are sick with high probability).
I tried to find a good reference for explaining away but did not. Pearl's automobile example seems to be frequently given, for example here.
Relating this to Ben's answer
Yes, I did change the model by adding an edge in the graph, and it is not a fully "Bayesian" formalisation of the problem. I am reasoning like a scientist who incrementally builds a Bayesian model.
You want to model your own thought process: you know that leafy greens are a relevant cause that you used to ignore, and therefore you want to put the variable I in the graph. Thanks to Ben's answer, you realize that the probabilistic graph of causes can be encoded in a very flexible way, where every possible cause can have anywhere from no influence to a huge influence on the inference you are trying to draw, via these "gating" variables like I. I think that you were looking for Ben's answer, actually.
However, I want to point out that even though Ben's fully Bayesian model might (might only, see next paragraph) be a good (although HUGE) model for "thought processes", it does not reflect scientific elaboration of models. Imagine that I is binary, 1 if L causes G and 0 otherwise. A Bayesian scientist needs to put a prior over I, and in doing so, should think about whether L causes G. But as you said, you did not learn that $I=1$ on the internet; you were merely reminded about it. So if you had thought about it, you would have put a very probable I as a prior. In that case, you see that there is no updating going on and you just recover the analysis I provided with the second model. On the contrary, if you did not think about the cause, you would have built the first model I presented. In other words, if the Bayesian scientist is not fully satisfied with his model, he needs to build another one and his approach is not "fully Bayesian" (in the extreme, formal and dogmatic sense of the term).
Most importantly, I am still puzzled by Ben's answer, though, because he did not specify the prior over I. If we are modelling thought processes, we could see beliefs of an individual as continually updated throughout his life. For Ben's answer to be fully complete and convincing, we need the "prior" probability (before seeing the information on the internet) $P(I=1)$ to be low. Why would it be the case? I don't think the individual has been exposed to evidence for that in his life. There is something wrong.
Therefore, I am more inclined to imagine that we do approximate Bayesian inference in our heads with very partial graphs that are "instantiated" by extracting pieces of a "full knowledge graph" in an imperfect way.
I am very curious to hear Ben's opinion on that. There are probably tons of resources discussing the problem (maybe in the "objective vs subjective" or "Bayesian vs frequentist" debates?), but I'm not an expert.
53,554 | Green poop, leafy greens and probability of disease, how can I formalize this reasoning? | It seems to me you are looking at Bayes' theorem and in particular at the prior probability.
Your data ($green\;poop, \; etc$) is the same before and after checking the internet. However, initially, your prior probability is either neutral or in favour of disease since green poop is odd. After checking the internet your prior shifts in favour of not-disease and that updates the posterior towards $P(disease|green\,poop,\; etc)=low$. Mathematically, I guess you could use a beta distribution to model your prior belief more or less strongly in favour of or against the disease.
53,555 | Green poop, leafy greens and probability of disease, how can I formalize this reasoning? | This kind of problem can be handled using Bayesian analysis, but it requires a bit of care. The tricky bit here is that there is a distinction between the conditioning event "ate leafy greens" and the other conditioning event "information showing that eating leafy greens causes green poo". You already know you ate leafy greens in both scenarios, so that conditioning event is not what is changing your probability. Rather, it is the additional information you have obtained from your internet search that is telling you that leafy greens cause green poo, and therefore leads you to reduce your inferred probability of disease.
To simplify this analysis, I will assume that the only relevant conditioning event from the previous day is that you ate leafy greens (i.e., the event "ate leafy greens" will be equivalent to "everything I did yesterday"). This removes explicit conditioning on the remainder of what happened that day. I will use the following events:
$$\begin{align}
\mathcal{D} & & & \text{Disease}, \\[6pt]
\mathcal{G} & & & \text{Green poop}, \\[6pt]
\mathcal{L} & & & \text{Ate leafy greens}, \\[6pt]
\mathcal{I} & & & \text{Information showing that } \mathcal{L} \text{ causes } \mathcal{G}. \\[6pt]
\end{align}$$
The circumstance you are describing is that $\mathbb{P}(\mathcal{D}|\mathcal{G} \cap \mathcal{L})$ is high but $\mathbb{P}(\mathcal{D}|\mathcal{G} \cap \mathcal{L} \cap \mathcal{I}) $ is low (i.e., the addition of the new information lowers the probability that you have a disease). There are many reasonable ways that you could be led to this outcome, but a general structure would look like the DAG below. Disease can cause green poo, but it can also be caused by eating leafy greens. (The joint path for the latter depends on the fact that the causal pathway from leafy greens to green poo is not known unless you obtain the information to that effect.)
In this case, the effect of gaining the information that relates eating leafy greens with green poo is that it "opens the pathway" at the bottom of the DAG, and thereby provides an alternative reason to believe that green poo could occur in the absence of a disease. This leads you to lower the conditional probability of disease accordingly. It would be possible to formalise this analysis further by giving some appropriate probability values to the various events of interest, but I will not pursue that level of detail. Hopefully this structural discussion assists you in understanding the nature of the inference you are making. Suffice to say, your reduction in the inferred probability of disease is a sensible conclusion from the additional conditioning information you obtained. | Green poop, leafy greens and probability of disease, how can I formalize this reasoning? | This kind of problem can be handled using Bayesian analysis, but it requires a bit of care. The tricky bit here is that there is a distinction between the conditioning event "ate leafy greens" and th | Green poop, leafy greens and probability of disease, how can I formalize this reasoning?
This kind of problem can be handled using Bayesian analysis, but it requires a bit of care. The tricky bit here is that there is a distinction between the conditioning event "ate leafy greens" and the other conditioning event "information showing that eating leafy greens causes green poo". You already know you ate leafy greens in both scenarios, so that conditioning event is not what is changing your probability. Rather, it is the additional information you have obtained from your internet search that is telling you that leafy greens cause green poo, and therefore lead you to reduce your inferred probability of disease.
To simplify this analysis, I will assume that the only relevant conditioning event from the previous day is that you ate leafy greens (i.e., the event "ate leafy greens" will be equivalent to "everything I did yesterday"). This removes explicit conditioning on the remainder of what happened that day. I will use the following events:
$$\begin{align}
\mathcal{D} & & & \text{Disease}, \\[6pt]
\mathcal{G} & & & \text{Green poop}, \\[6pt]
\mathcal{L} & & & \text{Ate leafy greens}, \\[6pt]
\mathcal{I} & & & \text{Information showing that } \mathcal{L} \text{ causes } \mathcal{G}. \\[6pt]
\end{align}$$
The circumstance you are describing is that $\mathbb{P}(\mathcal{D}|\mathcal{G} \cap \mathcal{L})$ is high but $\mathbb{P}(\mathcal{D}|\mathcal{G} \cap \mathcal{L} \cap \mathcal{I}) $ is low (i.e., the addition of the new information lowers the probability that you have a disease). There are many reasonable ways that you could be led to this outcome, but a general structure would look like the DAG below. Disease can cause green poo, but it can also be caused by eating leafy greens. (The joint path for the latter depends on the fact that the causal pathway from leafy greens to green poo is not known unless you obtain the information to that effect.)
In this case, the effect of gaining the information that relates eating leafy greens with green poo is that it "opens the pathway" at the bottom of the DAG, and thereby provides an alternative reason to believe that green poo could occur in the absence of a disease. This leads you to lower the conditional probability of disease accordingly. It would be possible to formalise this analysis further by giving some appropriate probability values to the various events of interest, but I will not pursue that level of detail. Hopefully this structural discussion assists you in understanding the nature of the inference you are making. Suffice to say, your reduction in the inferred probability of disease is a sensible conclusion from the additional conditioning information you obtained. | Green poop, leafy greens and probability of disease, how can I formalize this reasoning?
This kind of problem can be handled using Bayesian analysis, but it requires a bit of care. The tricky bit here is that there is a distinction between the conditioning event "ate leafy greens" and th |
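As a minimal illustration of the answer above, here is a small R sketch of the two Bayes updates. All probability values below are invented purely for illustration (none of them come from the original question); the point is only that admitting a benign cause for green poo collapses the posterior probability of disease.
p_D <- 0.01                    # assumed prior probability of disease
p_G_given_D <- 0.90            # assumed chance of green poop given disease
post_D <- function(p_G_given_notD) {
  p_D * p_G_given_D / (p_D * p_G_given_D + (1 - p_D) * p_G_given_notD)
}
post_D(0.002)   # no known benign cause of green poop: posterior roughly 0.82
post_D(0.5)     # leafy greens (which were eaten) known to cause it: roughly 0.02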
53,556 | Green poop, leafy greens and probability of disease, how can I formalize this reasoning? | $$statistics \neq mathematics$$
We can mathematically express probabilities (as you did twice), but they are not the real probabilities, only probabilities according to some model.
So a probability expression has a "probability" to fail. By how much... that depends on the quality of the model.
If your model is considered good (a quality that is itself not expressed mathematically), in the sense that the bias of your model, i.e. its contribution to the discrepancy between calculations and reality, is negligible in comparison to the random error/variation occurring within the model, then we may consider the inaccuracies of the model negligible.
In your example we could say that your first model was not very accurate, and that is why its result is so different from the more accurate second model. There is no contradiction.
Probabilities obtained from models, like p-values or posterior densities, are not real probabilities, and only a reflection of the real situation. These reflections can be distorted to various extents. This distortion is almost never the subject of the (mathematical) considerations/models. | Green poop, leafy greens and probability of disease, how can I formalize this reasoning? | $$statistics \neq mathematics$$
We can mathematically express probabilities (like you did two times) but they are not the real probabilities and instead only probabilities according to some model.
So | Green poop, leafy greens and probability of disease, how can I formalize this reasoning?
$$statistics \neq mathematics$$
We can mathematically express probabilities (as you did twice), but they are not the real probabilities, only probabilities according to some model.
So a probability expression has a "probability" to fail. By how much... that depends on the quality of the model.
If your model is considered good (a quality that is itself not expressed mathematically), in the sense that the bias of your model, i.e. its contribution to the discrepancy between calculations and reality, is negligible in comparison to the random error/variation occurring within the model, then we may consider the inaccuracies of the model negligible.
In your example we could say that your first model was not very accurate, and that is why its result is so different from the more accurate second model. There is no contradiction.
Probabilities obtained from models, like p-values or posterior densities, are not real probabilities, and only a reflection of the real situation. These reflections can be distorted to various extents. This distortion is almost never the subject of the (mathematical) considerations/models. | Green poop, leafy greens and probability of disease, how can I formalize this reasoning?
$$statistics \neq mathematics$$
We can mathematically express probabilities (like you did two times) but they are not the real probabilities and instead only probabilities according to some model.
So |
53,557 | introductory machine learning concept questions [closed] | The $\ell_1$ regularization cannot shrink parameters to zero, hence it can
be used for the purpose of feature selection
Yes. You can refer to this answer.
Deep Neural Networks
Many other hyperparameters, like embedding dimension, layer dimension, input length, parameter sharing, reused layers in transfer learning, early stopping strategy, learning rate decay and many others. Here is a good article.
For the hyperparameters, you can refer to the APIs in Tensorflow or sklearn. | introductory machine learning concept questions [closed] | The $\ell_1$ regularization cannot shrink parameters to zero, hence it can
be used for the purpose of feature selection
Yes. You can refer to this answer.
Deep Neural Networks
Many other hyperparameters, | introductory machine learning concept questions [closed]
The $\ell_1$ regularization cannot shrink parameters to zero, hence it can
be used for the purpose of feature selection
Yes. You can refer to this answer.
Deep Neural Networks
Many other hyperparameters, like embedding dimension, layer dimension, input length, parameter sharing, reused layers in transfer learning, early stopping strategy, learning rate decay and many others. Here is a good article.
For the hyperparameters, you can refer to the APIs in Tensorflow or sklearn. | introductory machine learning concept questions [closed]
The $\ell_1$ regularization cannot shrink parameters to zero, hence it can
be used for the purpose of feature selection
Yes. You can refer to this answer.
Deep Neural Networks
Many other hyperparameters, |
53,558 | introductory machine learning concept questions [closed] | #2
A covariance matrix cannot have eigenvalues less than zero, because it is positive semi-definite (being real and symmetric alone would not guarantee this). However, there is no such restriction on positive/negative/zero entries in the matrix itself.
As you note, covariance can be less than zero. This happens when variables have correlation less than zero. Therefore, there can be numbers less than zero in the covariance matrix.
Zero is a theoretical possibility in a covariance matrix, if there is zero correlation between two variables. In practice, however, you will not observe this in most data (see Henry’s comment...categorical data could have zero empirical correlation, too). | introductory machine learning concept questions [closed] | #2
A covariance matrix cannot have eigenvalues less than zero, as it is a real, symmetric matrix. However, there is no such restriction on positive/negative/zero in the matrix itself.
As you note, cov | introductory machine learning concept questions [closed]
#2
A covariance matrix cannot have eigenvalues less than zero, because it is positive semi-definite (being real and symmetric alone would not guarantee this). However, there is no such restriction on positive/negative/zero entries in the matrix itself.
As you note, covariance can be less than zero. This happens when variables have correlation less than zero. Therefore, there can be numbers less than zero in the covariance matrix.
Zero is a theoretical possibility in a covariance matrix, if there is zero correlation between two variables. In practice, however, you will not observe this in most data (see Henry’s comment...categorical data could have zero empirical correlation, too). | introductory machine learning concept questions [closed]
#2
A covariance matrix cannot have eigenvalues less than zero, as it is a real, symmetric matrix. However, there is no such restriction on positive/negative/zero in the matrix itself.
As you note, cov |
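A quick empirical check of the statements above, in R (any simulated data matrix will do):
set.seed(1)
X <- matrix(rnorm(200), nrow = 50, ncol = 4)   # 50 observations of 4 variables
S <- cov(X)
S                    # individual off-diagonal entries may well be negative
eigen(S)$values      # but the eigenvalues are all >= 0 (positive semi-definite)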
53,559 | introductory machine learning concept questions [closed] | Re: L1 regularization, I think that's a trick question. The conclusion is true, but the antecedent is false -- L1 regularization can shrink parameters to zero, and that's why it can be used for variable selection (any features associated with zero parameters are effectively cut out of the model).
However, as you know, the implication "if X then Y" is true when X is false -- that is, it is a vacuous implication. I don't know how tricky your instructor is. If you answer "yes", I think you should be prepared to explain why. I think a "no" answer would be more consonant with an informal interpretation of the question, but, again, you should be prepared to explain. | introductory machine learning concept questions [closed] | Re: L1 regularization, I think that's a trick question. The conclusion is true, but the antecedent is false -- L1 regularization can shrink parameters to zero, and that's why it can be used for variab | introductory machine learning concept questions [closed]
Re: L1 regularization, I think that's a trick question. The conclusion is true, but the antecedent is false -- L1 regularization can shrink parameters to zero, and that's why it can be used for variable selection (any features associated with zero parameters are effectively cut out of the model).
However, as you know, the implication "if X then Y" is true when X is false -- that is, it is a vacuous implication. I don't know how tricky your instructor is. If you answer "yes", I think you should be prepared to explain why. I think a "no" answer would be more consonant with an informal interpretation of the question, but, again, you should be prepared to explain. | introductory machine learning concept questions [closed]
Re: L1 regularization, I think that's a trick question. The conclusion is true, but the antecedent is false -- L1 regularization can shrink parameters to zero, and that's why it can be used for variab |
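A small R sketch of the point that the lasso does set some coefficients exactly to zero. It assumes the glmnet package is available; the simulated data and the lambda value are arbitrary.
library(glmnet)
set.seed(1)
x <- matrix(rnorm(100 * 10), 100, 10)         # 10 candidate features
y <- 2 * x[, 1] - 3 * x[, 2] + rnorm(100)     # only the first two matter
fit <- glmnet(x, y, alpha = 1, lambda = 0.5)  # alpha = 1 gives the L1 (lasso) penalty
coef(fit)   # most coefficients are exactly 0, which is what enables feature selection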
53,560 | Interpreting and troubleshooting nls in R with quadratic plateau model | We need better starting values. Fit a non-plateau model, model0, and use the parameters from that to fit all the data points giving model and then use a and b from that and a grid of values for clx (due to its problematic nature) giving model.Ab and model.La. (Note that it will not be able to produce fits from some of the grid's starting values resulting in error messages but nls2 will keep processing further starting values so those errors can be ignored.)
library(nls2)
# ensure data is sorted for plotting
o <- with(healing, order(Type, Days))
h <- healing[o, ]
# last argument specifies whether there is or is not a plateau
quadplat = function(x, a, b, clx, plat = TRUE) {
if (plat) x <- pmin(x, clx)
a + b * x + (-0.5*b/clx) * x * x
}
# fit no plateau model with all data
st <- c(a = 1, b = 1, clx = 1)
model0 <- nls(Area ~ quadplat(Days, a, b, clx, FALSE), h, start = st)
# fit all data model
model <- nls(Area ~ quadplat(Days, a, b, clx), h, start = coef(model0))
co <- coef(model)
We can now fit and plot the subset models using values computed above in the starting values.
if (exists("model.Ab")) rm(model.Ab)
model.Ab <- nls2(Area ~ quadplat(Days, a, b, clx), h, subset = h$Type == "Abrasion",
start = data.frame(a = co[[1]], b = co[[2]], clx = 0:140))
if (exists("model.La")) rm(model.La)
model.La <- nls2(Area ~ quadplat(Days, a, b, clx), h, subset = h$Type == "Laceration",
start = data.frame(a = co[[1]], b = co[[2]], clx = 0:140))
cols <- c(Abrasion = "red", Laceration = "blue")
plot(Area ~ Days, h, col = cols[Type], pch = 20, cex = 1.5)
lines(fitted(model.Ab) ~ Days, subset(h, Type == "Abrasion"),
col = cols["Abrasion"])
lines(fitted(model.La) ~ Days, subset(h, Type == "Laceration"),
col = cols["Laceration"])
(continued after graphics)
Alternate model
If it is OK to consider other models, then this model has only two parameters, is easier to fit and, despite having fewer parameters, gives lower residual sums of squares.
model.Ab2 <- nls(Area ~ a * (1 - exp(- b * Days)), h,
subset = Type == "Abrasion", start = c(a = 100, b = .1))
model.La2 <- nls(Area ~ a * (1 - exp(- b * Days)), h,
subset = Type == "Laceration", start = c(a = 100, b = .1))
# plot
cols <- c(Abrasion = "red", Laceration = "blue")
plot(Area ~ Days, h, col = cols[Type], pch = 20, cex = 1.5)
lines(fitted(model.Ab2) ~ Days, subset(h, Type == "Abrasion"),
col = cols["Abrasion"])
lines(fitted(model.La2) ~ Days, subset(h, Type == "Laceration"),
col = cols["Laceration"])
(continued after graphics)
One parameter model
If we fix a = 100 in the 2 parameter model of the last section we get a 1 parameter model which is not statistically distinguishable from the 2 parameter model. That is seen from the p value shown in the anovas which are greater than 0.05 indicating that we cannot reject the null hypothesis that the 1 and 2 parameter models describe the data equally well for each of the two subsets.
model.Ab3 <- nls(Area ~ 100 * (1 - exp(- b * Days)), h,
subset = Type == "Abrasion", start = c(b = .1))
model.La3 <- nls(Area ~ 100 * (1 - exp(- b * Days)), h,
subset = Type == "Laceration", start = c(b = .1))
anova(model.Ab3, model.Ab2)
anova(model.La3, model.La2)
Also note that the point at which it reaches y = 95, i.e. near plateau, is -log(1 - 95/100)/b (based on inverting the model equation). The numerator is approximately 3 so it reaches 95 at roughly 3/b.
Other
If m <- nls(...) then summary(m) will give standard errors of coefficients and other information. | Interpreting and troubleshooting nls in R with quadratic plateau model | We need better starting values. Fit a non-plateau model, model0, and use the parameters from that to fit all the data points giving model and then use a and b from that and a grid of values for clx ( | Interpreting and troubleshooting nls in R with quadratic plateau model
We need better starting values. Fit a non-plateau model, model0, and use the parameters from that to fit all the data points giving model and then use a and b from that and a grid of values for clx (due to its problematic nature) giving model.Ab and model.La. (Note that it will not be able to produce fits from some of the grid's starting values resulting in error messages but nls2 will keep processing further starting values so those errors can be ignored.)
library(nls2)
# ensure data is sorted for plotting
o <- with(healing, order(Type, Days))
h <- healing[o, ]
# last argument specifies whether there is or is not a plateau
quadplat = function(x, a, b, clx, plat = TRUE) {
if (plat) x <- pmin(x, clx)
a + b * x + (-0.5*b/clx) * x * x
}
# fit no plateau model with all data
st <- c(a = 1, b = 1, clx = 1)
model0 <- nls(Area ~ quadplat(Days, a, b, clx, FALSE), h, start = st)
# fit all data model
model <- nls(Area ~ quadplat(Days, a, b, clx), h, start = coef(model0))
co <- coef(model)
We can now fit and plot the subset models using values computed above in the starting values.
if (exists("model.Ab")) rm(model.Ab)
model.Ab <- nls2(Area ~ quadplat(Days, a, b, clx), h, subset = h$Type == "Abrasion",
start = data.frame(a = co[[1]], b = co[[2]], clx = 0:140))
if (exists("model.La")) rm(model.La)
model.La <- nls2(Area ~ quadplat(Days, a, b, clx), h, subset = h$Type == "Laceration",
start = data.frame(a = co[[1]], b = co[[2]], clx = 0:140))
cols <- c(Abrasion = "red", Laceration = "blue")
plot(Area ~ Days, h, col = cols[Type], pch = 20, cex = 1.5)
lines(fitted(model.Ab) ~ Days, subset(h, Type == "Abrasion"),
col = cols["Abrasion"])
lines(fitted(model.La) ~ Days, subset(h, Type == "Laceration"),
col = cols["Laceration"])
(continued after graphics)
Alternate model
If it is OK to consider other models, then this model has only two parameters, is easier to fit and, despite having fewer parameters, gives lower residual sums of squares.
model.Ab2 <- nls(Area ~ a * (1 - exp(- b * Days)), h,
subset = Type == "Abrasion", start = c(a = 100, b = .1))
model.La2 <- nls(Area ~ a * (1 - exp(- b * Days)), h,
subset = Type == "Laceration", start = c(a = 100, b = .1))
# plot
cols <- c(Abrasion = "red", Laceration = "blue")
plot(Area ~ Days, h, col = cols[Type], pch = 20, cex = 1.5)
lines(fitted(model.Ab2) ~ Days, subset(h, Type == "Abrasion"),
col = cols["Abrasion"])
lines(fitted(model.La2) ~ Days, subset(h, Type == "Laceration"),
col = cols["Laceration"])
(continued after graphics)
One parameter model
If we fix a = 100 in the 2 parameter model of the last section we get a 1 parameter model which is not statistically distinguishable from the 2 parameter model. That is seen from the p value shown in the anovas which are greater than 0.05 indicating that we cannot reject the null hypothesis that the 1 and 2 parameter models describe the data equally well for each of the two subsets.
model.Ab3 <- nls(Area ~ 100 * (1 - exp(- b * Days)), h,
subset = Type == "Abrasion", start = c(b = .1))
model.La3 <- nls(Area ~ 100 * (1 - exp(- b * Days)), h,
subset = Type == "Laceration", start = c(b = .1))
anova(model.Ab3, model.Ab2)
anova(model.La3, model.La2)
Also note that the point at which it reaches y = 95, i.e. near plateau, is -log(1 - 95/100)/b (based on inverting the model equation). The numerator is approximately 3 so it reaches 95 at roughly 3/b.
Other
If m <- nls(...) then summary(m) will give standard errors of coefficients and other information. | Interpreting and troubleshooting nls in R with quadratic plateau model
We need better starting values. Fit a non-plateau model, model0, and use the parameters from that to fit all the data points giving model and then use a and b from that and a grid of values for clx ( |
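A short follow-up sketch to the answer above (it assumes the fitted objects from the code above, e.g. model.Ab2, are still in the workspace):
summary(model.Ab2)          # standard errors of a and b for the abrasion fit
b.Ab <- coef(model.Ab2)[["b"]]
-log(1 - 95/100) / b.Ab     # day at which the fitted curve reaches 95% of its plateau (roughly 3/b)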
53,561 | Interpreting and troubleshooting nls in R with quadratic plateau model | Additionally, if anyone out there is good at interpreting formulas, can you help me by writing up this code into a readable formula?
function(x, a, b, clx) {
ifelse(x < clx, a + b * x + (-0.5*b/clx) * x * x,
a + b * clx + (-0.5*b/clx) * clx * clx)}
$$
f(x, a, b, x_{cl}) =
\begin{cases}
a + bx + (\frac{-0.5b}{x_{cl}}) \times x^2
, & \text{if}\ x < x_{cl} \\
a + bx_{cl} + (\frac{-0.5b}{x_{cl}}) \times {x_{cl}}^2
, & \text{otherwise}
\end{cases}
$$
which simplifies to:
$$
f(x, a, b, x_{cl}) =
\begin{cases}
a + bx \left( 1 - \frac{x}{2x_{cl}} \right)
, & \text{if}\ x < x_{cl} \\
a + \frac{bx_{cl}}{2}
, & \text{otherwise}
\end{cases}
$$
where I have substituted $x_{cl}$ for clx to make it more readable. | Interpreting and troubleshooting nls in R with quadratic plateau model | Additionally, if anyone out there is good at interpreting formulas, can you help me by writing up this code into a readable formula?
function(x, a, b, clx) {
ifelse(x < clx, a + b * x + (-0.5*b/clx | Interpreting and troubleshooting nls in R with quadratic plateau model
Additionally, if anyone out there is good at interpreting formulas, can you help me by writing up this code into a readable formula?
function(x, a, b, clx) {
ifelse(x < clx, a + b * x + (-0.5*b/clx) * x * x,
a + b * clx + (-0.5*b/clx) * clx * clx)}
$$
f(x, a, b, x_{cl}) =
\begin{cases}
a + bx + (\frac{-0.5b}{x_{cl}}) \times x^2
, & \text{if}\ x < x_{cl} \\
a + bx_{cl} + (\frac{-0.5b}{x_{cl}}) \times {x_{cl}}^2
, & \text{otherwise}
\end{cases}
$$
which simplifies to:
$$
f(x, a, b, x_{cl}) =
\begin{cases}
a + bx \left( 1 - \frac{x}{2x_{cl}} \right)
, & \text{if}\ x < x_{cl} \\
a + \frac{bx_{cl}}{2}
, & \text{otherwise}
\end{cases}
$$
where I have substituted $x_{cl}$ for clx to make it more readable. | Interpreting and troubleshooting nls in R with quadratic plateau model
Additionally, if anyone out there is good at interpreting formulas, can you help me by writing up this code into a readable formula?
function(x, a, b, clx) {
ifelse(x < clx, a + b * x + (-0.5*b/clx |
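A quick numerical check (with arbitrary parameter values) that the simplified piecewise formula agrees with the original R function:
quad0 <- function(x, a, b, clx) ifelse(x < clx, a + b*x + (-0.5*b/clx)*x*x,
                                       a + b*clx + (-0.5*b/clx)*clx*clx)
quad1 <- function(x, a, b, clx) ifelse(x < clx, a + b*x*(1 - x/(2*clx)), a + b*clx/2)
x <- 0:150
all.equal(quad0(x, a = 5, b = 2, clx = 60), quad1(x, a = 5, b = 2, clx = 60))  # TRUE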
53,562 | Predicting proportions with Machine Learning | You have what is called compositional-data. There is quite some literature on how to model this. Take a look through the tag, or search for the term.
Typically, one would choose a reference category and work with log ratios, or similar. One paper I personally know about predicting compositional data is Snyder et al. (2017, IJF). They use a state space approach, not an NN, but their transformation may still be useful to you. | Predicting proportions with Machine Learning | You have what is called compositional-data. There is quite some literature on how to model this. Take a look through the tag, or search for the term.
Typically, one would choose a reference category a | Predicting proportions with Machine Learning
You have what is called compositional-data. There is quite some literature on how to model this. Take a look through the tag, or search for the term.
Typically, one would choose a reference category and work with log ratios, or similar. One paper I personally know about predicting compositional data is Snyder et al. (2017, IJF). They use a state space approach, not an NN, but their transformation may still be useful to you. | Predicting proportions with Machine Learning
You have what is called compositional-data. There is quite some literature on how to model this. Take a look through the tag, or search for the term.
Typically, one would choose a reference category a |
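A minimal R sketch of the log-ratio idea, using the additive log-ratio with the last category as the reference (the proportions are invented):
p <- c(0.6, 0.3, 0.1)                 # proportions summing to 1
alr <- log(p[1:2] / p[3])             # unconstrained values you can model/predict
back <- exp(c(alr, 0))
back / sum(back)                      # inverse transform recovers 0.6 0.3 0.1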
53,563 | Predicting proportions with Machine Learning | Answering my past self... One elegant solution is to use the cross-entropy with "soft-targets" as loss. This means that your targets will not be in one-hot-encoding format, but they will still sum to one. The original cross-entropy formula applies.
The cross-entropy loss with soft targets is widely used in the knowledge-distillation field: ref. | Predicting proportions with Machine Learning | Answering my past self... One elegant solution is to use the cross-entropy with "soft-targets" as loss. This means that your targets will not be in one-hot-encodding format, but they will still sum to | Predicting proportions with Machine Learning
Answering my past self... One elegant solution is to use the cross-entropy with "soft-targets" as loss. This means that your targets will not be in one-hot-encoding format, but they will still sum to one. The original cross-entropy formula applies.
The cross-entropy loss with soft targets is widely used in the knowledge-distillation field: ref. | Predicting proportions with Machine Learning
Answering my past self... One elegant solution is to use the cross-entropy with "soft-targets" as loss. This means that your targets will not be in one-hot-encodding format, but they will still sum to |
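A tiny R sketch of the cross-entropy with soft targets (made-up numbers):
target <- c(0.7, 0.2, 0.1)   # soft targets: proportions, not a one-hot vector
pred   <- c(0.6, 0.3, 0.1)   # model output after a softmax
-sum(target * log(pred))     # cross-entropy loss; smallest when pred matches target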
53,564 | What is the variance of $X^2$ (without assuming normality)? | The general form of this variance depends on the first four moments of the distribution. To facilitate our analysis, we suppose that $X$ has mean $\mu$, variance $\sigma^2$, skewness $\gamma$ and kurtosis $\kappa$. The variance of interest exists if $\kappa < \infty$ and does not exist otherwise. Using the relationship between the raw moments and the cumulants, you have the general expression:
$$\begin{equation} \begin{aligned}
\mathbb{V}(X^2)
&= \mathbb{E}(X^4) - \mathbb{E}(X^2)^2 \\[6pt]
&= ( \mu^4 + 6 \mu^2 \sigma^2 + 4 \mu \gamma \sigma^3 + \kappa \sigma^4 ) - ( \mu^2 + \sigma^2 )^2 \\[6pt]
&= ( \mu^4 + 6 \mu^2 \sigma^2 + 4 \mu \gamma \sigma^3 + \kappa \sigma^4 ) - ( \mu^4 + 2 \mu^2 \sigma^2 + \sigma^4 ) \\[6pt]
&= 4 \mu^2 \sigma^2 + 4 \mu \gamma \sigma^3 + (\kappa-1) \sigma^4. \\[6pt]
\end{aligned} \end{equation}$$
The special case for an unskewed mesokurtic distribution (e.g., the normal distribution) occurs when $\gamma = 0$ and $\kappa = 3$, which gives the variance $\mathbb{V}(X^2) = 4 \mu^2 \sigma^2 + 2 \sigma^4$. | What is the variance of $X^2$ (without assuming normality)? | The general form of this variance depends on the first four moments of the distribution. To facilitate our analysis, we suppose that $X$ has mean $\mu$, variance $\sigma^2$, skewness $\gamma$ and kur | What is the variance of $X^2$ (without assuming normality)?
The general form of this variance depends on the first four moments of the distribution. To facilitate our analysis, we suppose that $X$ has mean $\mu$, variance $\sigma^2$, skewness $\gamma$ and kurtosis $\kappa$. The variance of interest exists if $\kappa < \infty$ and does not exist otherwise. Using the relationship between the raw moments and the cumulants, you have the general expression:
$$\begin{equation} \begin{aligned}
\mathbb{V}(X^2)
&= \mathbb{E}(X^4) - \mathbb{E}(X^2)^2 \\[6pt]
&= ( \mu^4 + 6 \mu^2 \sigma^2 + 4 \mu \gamma \sigma^3 + \kappa \sigma^4 ) - ( \mu^2 + \sigma^2 )^2 \\[6pt]
&= ( \mu^4 + 6 \mu^2 \sigma^2 + 4 \mu \gamma \sigma^3 + \kappa \sigma^4 ) - ( \mu^4 + 2 \mu^2 \sigma^2 + \sigma^4 ) \\[6pt]
&= 4 \mu^2 \sigma^2 + 4 \mu \gamma \sigma^3 + (\kappa-1) \sigma^4. \\[6pt]
\end{aligned} \end{equation}$$
The special case for an unskewed mesokurtic distribution (e.g., the normal distribution) occurs when $\gamma = 0$ and $\kappa = 3$, which gives the variance $\mathbb{V}(X^2) = 4 \mu^2 \sigma^2 + 2 \sigma^4$. | What is the variance of $X^2$ (without assuming normality)?
The general form of this variance depends on the first four moments of the distribution. To facilitate our analysis, we suppose that $X$ has mean $\mu$, variance $\sigma^2$, skewness $\gamma$ and kur |
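The formula is easy to check by simulation. For an Exponential(1) variable, $\mu = \sigma^2 = 1$, $\gamma = 2$ and $\kappa = 9$, so the formula gives $4 + 8 + 8 = 20$; a quick R sketch:
set.seed(1)
x <- rexp(1e6, rate = 1)
var(x^2)                        # close to 20
4*1*1 + 4*1*2*1 + (9 - 1)*1     # 20, from the general formula above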
53,565 | Maximum possible number of random variables with the same correlation? | The answer depends on what $\rho$ is. For $-1 \leq \rho < -\frac 12$, the answer is two random variables. More generally, the maximum number of random variables that can have common correlation $\rho$ is $n$ for $\rho$ in the range $\left[-\frac{1}{n-1}, -\frac{1}{n}\right)$. For $\rho \geq 0$, the number of random variables is unbounded. See the answers to this question for some results. | Maximum possible number of random variables with the same correlation? | The answer depends on what $\rho$ is. For $-1 \leq \rho < -\frac 12$, the answer is two random variables. More generally, the maximum number of random variables that can have common correlation $\rho$ | Maximum possible number of random variables with the same correlation?
The answer depends on what $\rho$ is. For $-1 \leq \rho < -\frac 12$, the answer is two random variables. More generally, the maximum number of random variables that can have common correlation $\rho$ is $n$ for $\rho$ in the range $\left[-\frac{1}{n-1}, -\frac{1}{n}\right)$. For $\rho \geq 0$, the number of random variables is unbounded. See the answers to this question for some results. | Maximum possible number of random variables with the same correlation?
The answer depends on what $\rho$ is. For $-1 \leq \rho < -\frac 12$, the answer is two random variables. More generally, the maximum number of random variables that can have common correlation $\rho$ |
53,566 | Maximum possible number of random variables with the same correlation? | To supplement Dilip Sarwate's answer, if you take $I_0 \sim Bernoulli(p_0)$ and any number of independent $I_i \sim Bernoulli(p)$, all independent then
$$cor(I_0 + I_i, I_0 + I_j) = \frac 1 {1 + \frac {p (1-p)} {p_0(1-p_0)}},$$
so you can choose $p_0$ and $p$ to get any $\rho$ in the interval $(0,1)$. | Maximum possible number of random variables with the same correlation? | To supplement Dilip Sarwate's answer, if you take $I_0 \sim Bernoulli(p_0)$ and any number of independent $I_i \sim Bernoulli(p)$, all independent then
$$cor(I_0 + I_i, I_0 + I_j) = \frac 1 {1 + \frac | Maximum possible number of random variables with the same correlation?
To supplement Dilip Sarwate's answer, if you take $I_0 \sim Bernoulli(p_0)$ and any number of independent $I_i \sim Bernoulli(p)$, all independent then
$$cor(I_0 + I_i, I_0 + I_j) = \frac 1 {1 + \frac {p (1-p)} {p_0(1-p_0)}},$$
so you can choose $p_0$ and $p$ to get any $\rho$ in the interval $(0,1)$. | Maximum possible number of random variables with the same correlation?
To supplement Dilip Sarwate's answer, if you take $I_0 \sim Bernoulli(p_0)$ and any number of independent $I_i \sim Bernoulli(p)$, all independent then
$$cor(I_0 + I_i, I_0 + I_j) = \frac 1 {1 + \frac |
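A quick simulation of this construction in R (p0 and p chosen arbitrarily):
set.seed(1)
n <- 1e5; p0 <- 0.5; p <- 0.2
I0 <- rbinom(n, 1, p0)
X1 <- I0 + rbinom(n, 1, p)
X2 <- I0 + rbinom(n, 1, p)
cor(X1, X2)                          # close to the theoretical value
1 / (1 + p*(1-p) / (p0*(1-p0)))      # about 0.61 for these choices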
53,567 | Frameworks for modeling prior knowledge other than Bayesian statistics | There are alternatives: for example, you can use constrained optimization, or regularization. Notice, however, that in most cases those approaches can be thought of as Bayesian inference in disguise. For example, constraining the range of the parameter during optimization is the same as using a flat prior over this range. Using $L_2$ regularization is the same as using Gaussian priors.
Moreover, in Bayesian inference you don't necessarily need normalization either. For both MCMC and optimization, you can work with unnormalized densities. With Approximate Bayesian Computation you can even solve problems where the likelihood is not specified as a probability distribution.
Finally, one of the reasons for the popularity of Bayesian approach, is that you end up with a probability distribution for the estimates (posterior), that quantifies uncertainty about the estimates. This is not directly available in other approaches. | Frameworks for modeling prior knowledge other than Bayesian statistics | There are alternatives, for example, you can use constrained optimization, or regularization. Notice however, that in most cases those approaches can be thought as Bayesian inference in disguise. For | Frameworks for modeling prior knowledge other than Bayesian statistics
There are alternatives: for example, you can use constrained optimization, or regularization. Notice, however, that in most cases those approaches can be thought of as Bayesian inference in disguise. For example, constraining the range of the parameter during optimization is the same as using a flat prior over this range. Using $L_2$ regularization is the same as using Gaussian priors.
Moreover, in Bayesian inference you don't necessarily need normalization either. For both MCMC and optimization, you can work with unnormalized densities. With Approximate Bayesian Computation you can even solve problems where the likelihood is not specified as a probability distribution.
Finally, one of the reasons for the popularity of Bayesian approach, is that you end up with a probability distribution for the estimates (posterior), that quantifies uncertainty about the estimates. This is not directly available in other approaches. | Frameworks for modeling prior knowledge other than Bayesian statistics
There are alternatives, for example, you can use constrained optimization, or regularization. Notice however, that in most cases those approaches can be thought as Bayesian inference in disguise. For |
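A small R sketch of the regularization/prior correspondence mentioned above: the ridge ($L_2$) estimate below is exactly the posterior mean/mode under a zero-mean Gaussian prior on the coefficients with precision lambda (simulated data; the value of lambda is arbitrary):
set.seed(1)
X <- matrix(rnorm(100 * 3), 100, 3)
y <- X %*% c(1, -2, 0.5) + rnorm(100)
lambda <- 10
# ridge estimate: argmin ||y - X b||^2 + lambda ||b||^2
b_ridge <- solve(crossprod(X) + lambda * diag(3), crossprod(X, y))
b_ridge   # the same expression arises as the Gaussian-prior posterior mean (unit noise variance)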
53,568 | Frameworks for modeling prior knowledge other than Bayesian statistics | One way in which prior information can be incorporated into the estimator is through the likelihood (or model, depending on how you look at it). That is to say, when we build a standard parametric model, we are constraining ourselves to say that we are going to allow the model to follow a very specific form that we know up to the values of the parameters themselves. If we are approximately correct about this form, we should have more efficient estimation than a more general model with more parameters. On the other hand, if our "prior knowledge" is grossly inadequate and this constraint is overly restrictive, we should introduce a lot of bias into our model.
As fairly modern example, Convolutional Neural Networks (CNN) are currently the state of the art for image classification, doing considerably better than vanilla fully connected NN's. The only difference between a CNN and a standard NN is that on the top layers, CNNs only allow for local interactions, where as a fully connected NN doesn't care about how close two pixels are to each other. In other words, the CNN models are a proper subset of the vanilla NNs, where many of the top level parameters are set to 0. This is based on the prior knowledge that nearby pixels are very likely to be related, so by constraining the fully connected model, we get more efficient estimation. Empirically, using this prior information about how we think interactions between pixels should work, we have improved our predictions for image classification. | Frameworks for modeling prior knowledge other than Bayesian statistics | One way in which prior information can be incorporated into the estimator is through the likelihood (or model, depending on how you look at it). That is to say, when we build a standard parametric mod | Frameworks for modeling prior knowledge other than Bayesian statistics
One way in which prior information can be incorporated into the estimator is through the likelihood (or model, depending on how you look at it). That is to say, when we build a standard parametric model, we are constraining ourselves to say that we are going to allow the model to follow a very specific form, that we know up to the values of the parameters themselves. If we are approximately correct about this form, we should have more efficient estimation than a more general model with more parameters. On the other hand, if our "prior knowledge" is grossly inadquate and this constraint is overly restrictive, we should introduce a lot of bias into our model.
As fairly modern example, Convolutional Neural Networks (CNN) are currently the state of the art for image classification, doing considerably better than vanilla fully connected NN's. The only difference between a CNN and a standard NN is that on the top layers, CNNs only allow for local interactions, where as a fully connected NN doesn't care about how close two pixels are to each other. In other words, the CNN models are a proper subset of the vanilla NNs, where many of the top level parameters are set to 0. This is based on the prior knowledge that nearby pixels are very likely to be related, so by constraining the fully connected model, we get more efficient estimation. Empirically, using this prior information about how we think interactions between pixels should work, we have improved our predictions for image classification. | Frameworks for modeling prior knowledge other than Bayesian statistics
One way in which prior information can be incorporated into the estimator is through the likelihood (or model, depending on how you look at it). That is to say, when we build a standard parametric mod |
53,569 | We flip a coin 20 times and observe 12 heads. What is the probability that the coin is fair? | You appear to be using a Beta(1,1) prior on $\theta$. Since this is a continuous distribution, the prior (and posterior) probability of the event that the coin is exactly fair, $\theta=1/2$, is zero.
A perhaps more sensible prior (see Lindley 1957 pp. 188-189 for a discussion of similar examples) would be a point mass at $\theta=1/2$ given the event $H_0$ that the coin is fair, and $\theta\sim \mbox{Beta}(\alpha,\beta)$ given an unfair coin (the event $H_1$), with some prior probabilities $q$ and $1-q$ that $H_0$ and $H_1$ are true respectively.
The probabilities of observing $X=x$ heads out of $n$ coin flips under each hypothesis would then be,
\begin{align}
P(X=x|H_1)&=\int_0^1 P(X=x|\theta,H_1)f_{\theta|H_1}(\theta)d\theta
\\&=\frac{n!}{x!(n-x)!B(\alpha,\beta)}\int_0^1 \theta^{x+\alpha-1}(1-\theta)^{n-x+\beta-1}d\theta
\\&=\frac{n!B(x+\alpha,n-x+\beta)}{x!(n-x)!B(\alpha,\beta)},
\end{align}
and
$$
P(X=x|H_0)=\frac{n!}{x!(n-x)!2^n}.
$$
Using Bayes theorem, the posterior probability of $H_0$ would be
\begin{align}
P(H_0|X=x)
&=\frac{P(X=x|H_0)P(H_0)}{P(X=x|H_0)P(H_0)+P(X=x|H_1)P(H_1)}
\\&=\frac{q}{q + 2^n(1-q)B(x+\alpha,n-x+\beta)/B(\alpha,\beta)}
\end{align}
instead of zero.
The Figure below shows typical realisations of this posterior probability for increasing sample sizes $n$ for a Beta(1,1) prior and $q=0.5$. For a truly fair coin ($\theta=1/2$, blue curve), the posterior probability of $H_0$ tends to 1 as expected. If the coin is slightly unfair ($\theta=0.55$, red curve) the hypothesis that the coin is fair appear more likely initially until the evidence against $H_0$ eventually becomes overwhelming. | We flip a coin 20 times and observe 12 heads. What is the probability that the coin is fair? | You appear to be using a Beta(1,1) prior on $\theta$. Since this is a continuous distribution, the prior (and posterior) probability of the event that the coin is exactly fair, $\theta=1/2$, is zero. | We flip a coin 20 times and observe 12 heads. What is the probability that the coin is fair?
You appear to be using a Beta(1,1) prior on $\theta$. Since this is a continuous distribution, the prior (and posterior) probability of the event that the coin is exactly fair, $\theta=1/2$, is zero.
What would perhaps be a more sensible prior (see Lindley 1957 pp. 188-189 for a discussion of similar examples) would be a point mass at $\theta=1/2$ given the event $H_0$ that the coin is fair and $\theta\sim \mbox{Beta}(\alpha,\beta)$ given an unfair coin (the event $H_1$) and some prior probabilities $q$ and $1-q$ that $H_0$ and $H_1$ are true respectively.
The probabilities of observing $X=x$ heads out of $n$ coin flips under each hypothesis would then be,
\begin{align}
P(X=x|H_1)&=\int_0^1 P(X=x|\theta,H_1)f_{\theta|H_1}(\theta)d\theta
\\&=\frac{n!}{x!(n-x)!B(\alpha,\beta)}\int_0^1 \theta^{x+\alpha-1}(1-\theta)^{n-x+\beta-1}d\theta
\\&=\frac{n!B(x+\alpha,n-x+\beta)}{x!(n-x)!B(\alpha,\beta)},
\end{align}
and
$$
P(X=x|H_0)=\frac{n!}{x!(n-x)!2^n}.
$$
Using Bayes theorem, the posterior probability of $H_0$ would be
\begin{align}
P(H_0|X=x)
&=\frac{P(X=x|H_0)P(H_0)}{P(X=x|H_0)P(H_0)+P(X=x|H_1)P(H_1)}
\\&=\frac{q}{q + 2^n(1-q)B(x+\alpha,n-x+\beta)/B(\alpha,\beta)}
\end{align}
instead of zero.
The Figure below shows typical realisations of this posterior probability for increasing sample sizes $n$ for a Beta(1,1) prior and $q=0.5$. For a truly fair coin ($\theta=1/2$, blue curve), the posterior probability of $H_0$ tends to 1 as expected. If the coin is slightly unfair ($\theta=0.55$, red curve) the hypothesis that the coin is fair appear more likely initially until the evidence against $H_0$ eventually becomes overwhelming. | We flip a coin 20 times and observe 12 heads. What is the probability that the coin is fair?
You appear to be using a Beta(1,1) prior on $\theta$. Since this is a continuous distribution, the prior (and posterior) probability of the event that the coin is exactly fair, $\theta=1/2$, is zero. |
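For the numbers in the title ($n = 20$, $x = 12$), a Beta(1,1) prior under $H_1$ and $q = 1/2$, the posterior probability of a fair coin follows directly from the last formula; a short R sketch:
n <- 20; x <- 12; a <- 1; b <- 1; q <- 0.5
q / (q + 2^n * (1 - q) * beta(x + a, n - x + b) / beta(a, b))   # about 0.72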
53,570 | Admissible Empirical Bayes Examples | The question has no clear answer because the empirical Bayes
formulation does not & cannot specify how the hyperparameter is
estimated.
Take the simplest Normal mean estimation problem. When$$X\sim\mathcal N_p(\theta,I_p)\qquad\qquad\theta\sim\mathcal N_p(0,\sigma^2 I_p)$$the Bayes estimator of $\theta$ is$$\delta^\pi(x)=\frac{\sigma^2}{1+\sigma^2}x$$
If $\sigma$ is unknown, a corresponding empirical Bayes estimator is therefore$$\frac{\hat\sigma^2}{1+\hat\sigma^2}x$$where $\hat\sigma^2$ is an estimator of $\sigma^2$ based on the marginal distribution of $x$
$$m(x|\sigma)=\int f(x|\theta) \pi(\theta|\sigma)\,\text{d}\theta$$
But since there is no constraint on the choice of $\hat\sigma^2$, this estimator can be any (positive) function of $x$ and the collection of empirical Bayes estimators thus includes all shrinkage estimators and therefore all admissible generalised Bayes estimators of $\theta$ (see Strawderman and Cohen, 1971).
For instance, the admissible minimax Bayes estimators of Strawderman (1971)
$$\delta_c(x)=\left[ 1 - \frac{\int_0^1 \lambda^{p/2-c+1}e^{-\lambda|x|^2}\text{d}\lambda}{\int_0^1 \lambda^{p/2-c}e^{-\lambda|x|^2}\text{d}\lambda} \right]x\qquad \text{where}\ 3-p/2\le c\le 2$$
are shrinkage estimators that can all be interpreted as empirical Bayes estimators, whatever the constant $c$ is. The estimator $\delta_c$ is furthermore a proper Bayes estimator when $c<1$ (and $p>5$) and possibly a generalised Bayes estimator otherwise. It stems from a hierarchical prior modelling
$$\underbrace{\theta|\lambda\sim\mathcal N_p(0,\lambda^{-1}(1-\lambda) I_p)}_\text{first level}\qquad
\underbrace{\lambda\sim \pi(\lambda)\propto (c+1)\lambda^{-c}}_\text{second level}$$with the hierarchical Bayes estimator above expressed as
$$\delta_c(x)=\left( 1 - \mathbb E[\lambda|x]\right)x$$
A highly interesting if not completely connected paper by Petrone, Rousseau and Scricciolo (2012) studies the connection between empirical Bayes "posteriors" and actual Bayes posteriors, deriving approaches to asymptotically agree (in the number of parameters $n$). They first point out a relevant counterexample of Scott and Berger (2010), for variable selection in regression models.
...consider a Bayesian approach where variable selection is based on a
vector of inclusion $γ∈{0,1}^k$ which selects among k potential
regressors, and the prior on $γ= (γ_1, . . . , γ_k)$ assumes that the
$γ_i$ are independent Bernoulli with parameter $λ$. Scott and Berger
(2010) compare this empirical Bayes approach with a hierarchical Bayes
procedure that assigns a prior on λ. Surprisingly, they (...) show
that the empirical Bayes posterior distribution on the set of models
can be degenerate on the null model ($γ= (0, . . . ,0)$) or on
the full model ($γ= (1, . . . ,1)$).
They also produce general conditions for the empirical Bayes posterior to be consistent, which is a necessary condition for (asymptotic) admissibility. And study as an example the Normal mean problem when
$$X_i\sim\mathcal N(\theta,1)\qquad\theta|\lambda\sim\mathcal N(\mu,\tau^2)$$
with three situations
$\mu=\lambda$ and $\tau$ fixed, in which case the MLE of $\mu$ is $\bar X_n$ and the empirical Bayes posterior is consistent
$\tau=\lambda$ and $\mu$ is fixed, in which case the empirical Bayes posterior is only consistent when $\mu$ differs from the true value of the parameter, $\mu\ne\theta_0$
$\lambda=(\mu,\tau)$, in which case the empirical Bayes posterior is degenerate at $\bar X_n$
As noted in this other answer of mine, there may exist admissible estimators that are not Bayes or generalised Bayes estimators. | Admissible Empirical Bayes Examples | The question has no clear answer because the empirical Bayes
formulation does not & cannot specify how the hyperparameter is
estimated.
Take the simplest Normal mean estimation problem. When$$X\sim\m | Admissible Empirical Bayes Examples
The question has no clear answer because the empirical Bayes
formulation does not & cannot specify how the hyperparameter is
estimated.
Take the simplest Normal mean estimation problem. When$$X\sim\mathcal N_p(\theta,I_p)\qquad\qquad\theta\sim\mathcal N_p(0,\sigma^2 I_p)$$the Bayes estimator of $\theta$ is$$\delta^\pi(x)=\frac{\sigma^2}{1+\sigma^2}x$$
If $\sigma$ is unknown, a corresponding empirical Bayes estimator is therefore$$\frac{\hat\sigma^2}{1+\hat\sigma^2}x$$where $\hat\sigma^2$ is an estimator of $\sigma^2$ based on the marginal distribution of $x$
$$m(x|\sigma)=\int f(x|\theta) \pi(\theta|\sigma)\,\text{d}\theta$$
But since there is no constraint on the choice of $\hat\sigma^2$, this estimator can be any (positive) function of $x$ and the collection of empirical Bayes estimators thus includes all shrinkage estimators and therefore all admissible generalised Bayes estimators of $\theta$ (see Strawderman and Cohen, 1971).
For instance, the admissible minimax Bayes estimators of Strawderman (1971)
$$\delta_c(x)=\left[ 1 - \frac{\int_0^1 \lambda^{p/2-c+1}e^{-\lambda|x|^2}\text{d}\lambda}{\int_0^1 \lambda^{p/2-c}e^{-\lambda|x|^2}\text{d}\lambda} \right]x\qquad \text{where}\ 3-p/2\le c\le 2$$
are shrinkage estimators that can all be interpreted as empirical Bayes estimators, whatever the constant $c$ is. The estimator $\delta_c$ is furthermore a proper Bayes estimator when $c<1$ (and $p>5$) and possibly a generalised Bayes estimator otherwise. It stems from a hierarchical prior modelling
$$\underbrace{\theta|\lambda\sim\mathcal N_p(0,\lambda^{-1}(1-\lambda) I_p)}_\text{first level}\qquad
\underbrace{\lambda\sim \pi(\lambda)\propto (c+1)\lambda^{-c}}_\text{second level}$$with the hierarchical Bayes estimator above expressed as
$$\delta_c(x)=\left( 1 - \mathbb E[\lambda|x]\right)x$$
A highly interesting if not completely connected paper by Petrone, Rousseau and Scricciolo (2012) studies the connection between empirical Bayes "posteriors" and actual Bayes posteriors, deriving approaches to asymptotically agree (in the number of parameters $n$). They first point out a relevant counterexample of Scott and Berger (2010), for variable selection in regression models.
...consider a Bayesian approach where variable selection is based on a
vector of inclusion $γ∈{0,1}^k$ which selects among k potential
regressors, and the prior on $γ= (γ_1, . . . , γ_k)$ assumes that the
$γ_i$ are independent Bernoulli with parameter $λ$. Scott and Berger
(2010) compare this empirical Bayes approach with a hierarchical Bayes
procedure that assigns a prior on λ. Surprisingly, they (...) show
that the empirical Bayes posterior distribution on the set of models
that can be degenerate on the null model ($γ= (0, . . . ,0)$) or on
the full model ($γ= (1, . . . ,1)$).
They also produce general conditions for the empirical Bayes posterior to be consistent, which is a necessary condition for (asymptotic) admissibility. And study as an example the Normal mean problem when
$$X_i\sim\mathcal N(\theta,1)\qquad\theta|\lambda\sim\mathcal N(\mu,\tau^2)$$
with three situations
$\mu=\lambda$ and $\tau$ fixed, in which case the MLE of $\mu$ is $\bar X_n$ and the empirical Bayes posterior is consistent
$\tau=\lambda$ and $\mu$ is fixed, in which case the empirical Bayes posterior is only consistent when $\mu$ differs from the true value of the parameter, $\mu\ne\theta_0$
$\lambda=(\mu,\tau)$, in which case the empirical Bayes posterior is degenerate at $\bar X_n$
As noted in this other answer of mine, there may exist admissible estimators that are not Bayes or generalised Bayes estimators. | Admissible Empirical Bayes Examples
The question has no clear answer because the empirical Bayes
formulation does not & cannot specify how the hyperparameter is
estimated.
Take the simplest Normal mean estimation problem. When$$X\sim\m |
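A small R sketch of the first empirical Bayes estimator above, using one simple (by no means unique) choice of $\hat\sigma^2$ based on the marginal moment $\mathbb{E}[x_i^2] = 1 + \sigma^2$ (data simulated):
set.seed(1)
p <- 50; sigma2 <- 2
theta <- rnorm(p, 0, sqrt(sigma2))
x     <- rnorm(p, theta, 1)
sigma2_hat <- max(mean(x^2) - 1, 0)               # moment estimate from the marginal of x
delta_EB   <- sigma2_hat / (1 + sigma2_hat) * x   # empirical Bayes shrinkage estimate
sum((delta_EB - theta)^2) / sum((x - theta)^2)    # typically < 1: shrinkage beats the MLE here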
53,571 | Multivariate bayesian inference: learning about the mean of a variable by observing another variable | I think it would be good to have the notations cleared first.
$$
\begin{aligned}
\vec{\mu} &\sim \mathcal{N}(\vec{\mu}_0, \Sigma_0) &&\textrm{prior},\\
\vec{x}_i \vert \vec{\mu} &\sim \mathcal{N}(\vec{\mu}, \Sigma) &&\textrm{likelihood},\\
\vec{\mu} \vert \{\vec{x}_i\} &\sim \mathcal{N}(\vec{\mu}_n, \Sigma_n) &&\textrm{posterior},\\
\end{aligned}
$$
where $\vec{x}_i$ is the $i$-th observation with $\vec{x}_i=(y_1, y_2, \dots)$.
One can see that the $\Sigma_0$ and $\Sigma$ correspond to two different distributions. $\Sigma_0$ describes the variation of $\vec{\mu}$ about $\vec{\mu}_0$, while $\Sigma$ describes the variation of $\vec{x}_i$ about $\vec{\mu}$.
For simplicity, let's set $\vec{\mu}_0 = \vec{0}$ and $\Sigma_0 = I$, the resulting posterior distribution's mean and covariance matrix are given by
$$
\begin{aligned}
\vec{\mu}_n &= \left(I + \frac{1}{n}\Sigma\right)^{-1}\left(\frac{1}{n}\sum_{i=1}^{n}\vec{x}_i\right)\\
&\equiv \left(I + \frac{1}{n}\Sigma\right)^{-1} \langle \vec{x} \rangle,\\
\Sigma_n &= \left(I + \frac{1}{n}\Sigma\right)^{-1}\frac{1}{n}\Sigma.
\end{aligned}
$$
where $\langle \vec{x} \rangle$ is referring to the sample mean. We can see that the equation for $\vec{\mu}_n$ follows our intuition; as $n\to\infty$, $\vec{\mu}_n = \vec{\mu}_{\infty} = \langle \vec{x} \rangle$. But what is the matrix $\left(I + \frac{1}{n}\Sigma\right)^{-1}$ actually doing? Let's take a look at a two dimensional example;
(figure: evolution of $\vec{\mu}_n$; plot not reproduced here)
The plot shows the evolution of the $\vec{\mu}_n$ (in orange) as $n$ increases up to 10 (with the initial $\vec{0}$ added as the starting point), the evolution of the sample mean (in blue), and the true value is marked with the purple cross. Therefore, to go from $\vec{\mu}_0$ to $\vec{\mu}_1$, the $\langle\vec{x}\rangle_1$ is taken as the input. Because a relatively small number of samples is used, it has not converged to the true value yet, but this is sufficient for a demonstration. This plot shows an interesting feature of the matrix term. Instead of jumping right towards the current sample mean, the direction is altered to counter the known correlation $\Sigma$ between the components (as shown by the dashed line).
Therefore, there has never been an update of the mean based on the correlation between components; instead, it is a correction, so we don't obtain a biased result. And if the prior correlation on $\Sigma_0$ is the same as the likelihood correlation $\Sigma$, it simply means that no directional correction is needed at all.
This finding echoes the linearity of mean, therefore
$$
\langle \vec{x} \rangle = (\langle y_1 \rangle, \langle y_2 \rangle, \dots),
$$
regardless of any correlation between components of a random variable, the mean of each component is independent of each other. | Multivariate bayesian inference: learning about the mean of a variable by observing another variable | I think it would be good to have the notations cleared first.
$$
\begin{aligned}
\vec{\mu} &\sim \mathcal{N}(\vec{\mu}_0, \Sigma_0) &&\textrm{prior},\\
\vec{x}_i \vert \vec{\mu} &\sim \mathcal{N}(\vec | Multivariate bayesian inference: learning about the mean of a variable by observing another variable
I think it would be good to have the notations cleared first.
$$
\begin{aligned}
\vec{\mu} &\sim \mathcal{N}(\vec{\mu}_0, \Sigma_0) &&\textrm{prior},\\
\vec{x}_i \vert \vec{\mu} &\sim \mathcal{N}(\vec{\mu}, \Sigma) &&\textrm{likelihood},\\
\vec{\mu} \vert \{\vec{x}_i\} &\sim \mathcal{N}(\vec{\mu}_n, \Sigma_n) &&\textrm{posterior},\\
\end{aligned}
$$
where $\vec{x}_i$ is the $i$-th observation with $\vec{x}_i=(y_1, y_2, \dots)$.
One can see that the $\Sigma_0$ and $\Sigma$ correspond to two different distributions. $\Sigma_0$ describes the variation of $\vec{\mu}$ about $\vec{\mu}_0$, while $\Sigma$ describes the variation of $\vec{x}_i$ about $\vec{\mu}$.
For simplicity, let's set $\vec{\mu}_0 = \vec{0}$ and $\Sigma_0 = I$, the resulting posterior distribution's mean and covariance matrix are given by
$$
\begin{aligned}
\vec{\mu}_n &= \left(I + \frac{1}{n}\Sigma\right)^{-1}\left(\frac{1}{n}\sum_{i=1}^{n}\vec{x}_i\right)\\
&\equiv \left(I + \frac{1}{n}\Sigma\right)^{-1} \langle \vec{x} \rangle,\\
\Sigma_n &= \left(I + \frac{1}{n}\Sigma\right)^{-1}\frac{1}{n}\Sigma.
\end{aligned}
$$
where $\langle \vec{x} \rangle$ is referring to the sample mean. We can see that the equation for $\vec{\mu}_n$ follows our intuition; as $n\to\infty$, $\vec{\mu}_n = \vec{\mu}_{\infty} = \langle \vec{x} \rangle$. But what is the matrix $\left(I + \frac{1}{n}\Sigma\right)^{-1}$ actually doing? Let's take a look at a two dimensional example;
(figure: evolution of $\vec{\mu}_n$; plot not reproduced here)
The plot shows the evolution of the $\vec{\mu}_n$ (in orange) as $n$ increases up to 10 (with the initial $\vec{0}$ added as the starting point), the evolution of the sample mean (in blue), and the true value is marked with the purple cross. Therefore, to go from $\vec{\mu}_0$ to $\vec{\mu}_1$, the $\langle\vec{x}\rangle_1$ is taken as the input. Because the relatively small number of samples is used, it has not converged to the true value yet, but this is sufficient for a demonstration. This plot shows an interesting feature of the matrix term. Instead of jumping right towards the current sample mean, the direction is alternated to counter the known correlation $\Sigma$ between the components (as shown by the dashed line).
Therefore, there has never been an update of the mean based on the correlation between components; instead, it is a correction, so we don't obtain a biased result. And if the prior correlation on $\Sigma_0$ is the same as the likelihood correlation $\Sigma$, it simply means that no directional correction is needed at all.
This finding echoes the linearity of mean, therefore
$$
\langle \vec{x} \rangle = (\langle y_1 \rangle, \langle y_2 \rangle, \dots),
$$
regardless of any correlation between components of a random variable, the mean of each component is independent of each other. | Multivariate bayesian inference: learning about the mean of a variable by observing another variable
I think it would be good to have the notations cleared first.
$$
\begin{aligned}
\vec{\mu} &\sim \mathcal{N}(\vec{\mu}_0, \Sigma_0) &&\textrm{prior},\\
\vec{x}_i \vert \vec{\mu} &\sim \mathcal{N}(\vec |
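A two-dimensional R sketch of the update above (prior $\mathcal{N}(\vec{0}, I)$ as in the text; $\Sigma$ and the true mean are arbitrary, and the MASS package is assumed for the multivariate normal draws):
set.seed(1)
Sigma <- matrix(c(1, 0.8, 0.8, 1), 2, 2)     # known likelihood covariance
mu_true <- c(1, 2)
n <- 10
X <- MASS::mvrnorm(n, mu_true, Sigma)
xbar <- colMeans(X)
mu_n <- solve(diag(2) + Sigma / n, xbar)     # posterior mean: a corrected, not a mixed, version of xbar
rbind(sample_mean = xbar, posterior_mean = drop(mu_n))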
53,572 | Multivariate bayesian inference: learning about the mean of a variable by observing another variable | This might be slightly counterintuitive at first, but the fact that two variables are correlated doesn't mean that you can learn about the mean of one from another.
Suppose for example that $x_1$ is normally distributed with mean $0$ and $x_2 = x_1 + 100$. $x_1$ and $x_2$ are clearly maximally correlated, but you can learn nothing about the mean of $x_2$ (which is 100) from $x_1$.
That is of course unless you have some prior knowledge about the correlation between the means (your $\Sigma_0$). Indeed in that case you get the 'mix' that you expect between the two variables. The fact that this doesn't happen in the case $\Sigma_0=\Sigma$ might be a bit surprising, but as you pointed out this is equivalent to the covariance of the posterior with a non-informative prior, so in a way this doesn't add new information about the correlation between the means that is not already contained in the likelihood function.
So in short, to learn about the mean of one variable from another requires you to have some additional prior knowledge on their correlation. | Multivariate bayesian inference: learning about the mean of a variable by observing another variable | This might be slightly counterintuitive at first, but the fact that two variables are correlated doesn't mean that you can learn about the mean of one from another.
Suppose for example that $x_1$ is n | Multivariate bayesian inference: learning about the mean of a variable by observing another variable
This might be slightly counterintuitive at first, but the fact that two variables are correlated doesn't mean that you can learn about the mean of one from another.
Suppose for example that $x_1$ is normally distributed with mean $0$ and $x_2 = x_1 + 100$. $x_1$ and $x_2$ are clearly maximally correlated, but you can learn nothing about the mean of $x_2$ (which is 100) from $x_1$.
That is of course unless you have some prior knowledge about the correlation between the means (your $\Sigma_0$). Indeed in that case you get the 'mix' that you expect between the two variables. The fact that this doesn't happen in the case $\Sigma_0=\Sigma$ might be a bit surprising, but as you pointed out this is equivalent to the covariance of the posterior with a non-informative prior, so in a way this doesn't add new information about the correlation between the means that is not already contained in the likelihood function.
So in short, to learn about the mean of one variable from another requires you to have some additional prior knowledge on their correlation. | Multivariate bayesian inference: learning about the mean of a variable by observing another variable
This might be slightly counterintuitive at first, but the fact that two variables are correlated doesn't mean that you can learn about the mean of one from another.
Suppose for example that $x_1$ is n |
53,573 | Incorporating Prior Information Into Time Series Prediction | Great Question !
"Could I combine my ARIMA forecast with this prior information somehow to form an ensemble forecast?"
I have been involved with a commercial time-series forecasting package called AUTOBOX and have incorporated Delphi-type predictor series: the user provides probabilities for intervals of the predictor's future values, and these are used to Monte-Carlo a family of possible future values for the input series, in place of the single point estimates a user would normally supply as if with perfect knowledge.
The "realizations" developed this way are then combined with the ARIMA simulations, providing a family of ensemble forecast values for the dependent series, which may also be affected by possible anomalies identified in the analysis stage via intervention-detection schemes.
You should be able to program this with this advice as I have done. This problem/opportunity arises quite naturally when the predictor series' distribution can be "pre-guessed", such as alternative hypotheses for the price of oil for the next period. Armed with priors like this, one can select from alternative offerings those with the greatest expected reward. | Incorporating Prior Information Into Time Series Prediction | Great Question !
"Could I combine my ARIMA forecast with this prior information somehow to form an ensemble forecast?"
I have been involved with a commercial time series forecasting package called AUT | Incorporating Prior Information Into Time Series Prediction
Great Question !
"Could I combine my ARIMA forecast with this prior information somehow to form an ensemble forecast?"
I have been involved with a commercial time series forecasting package called AUTOBOX and have incorporated delphi-type predictor series where the user provides probabilities of intervals and this is then used to monte-carlo a family of possible values for future values of input series where the user normally delivers 1 point estimates based upon perfect knowledge.
The "realizations" developed this way are then inter-joined with the arima simulations providing a family of ensemble forecast values for the dependent series that might also be effected by possible anomalies identified in the analysis stage via Intervention Detection schemes.
You should be able to program this with this advice as I have done. This problem/opportunuties arises quite naturally when the predictor series distribution can be "pre-guessed" such as alternative hypotheses for the price of oil for the next period. Armed with priors like this one can select from alternative offerings those with the greatest expected reward. | Incorporating Prior Information Into Time Series Prediction
Great Question !
"Could I combine my ARIMA forecast with this prior information somehow to form an ensemble forecast?"
I have been involved with a commercial time series forecasting package called AUT |
53,574 | Incorporating Prior Information Into Time Series Prediction | I think a good way to do this is via Bayesian Structural Time Series (BSTS). I found out about this approach via these 2 sites (1, 2). I would still be interested in other approaches.
Here is the example done with the bsts package in R. I use a time series component and a regression component. The regression component incorporates the prior information. A spike-and-slab prior is used on the regression component.
Below is a plot of the time-series features. I intentionally made C unlike the other features and the response variable (D), in order to test the feature-selection ability of BSTS.
The plot below shows that the BSTS model correctly observes that feature C isn't useful for predicting D.
Actual (Blue) vs Predicted (Red)
Code is below:
library(ggplot2)
library(bsts)
library(plotly)      # ggplotly() lives in the plotly package
library(data.table)
# generate some data
set.seed(1)
n = 20
train_size = 10
A = seq(1,n) + rnorm(n)
B = seq(1,n) + rnorm(n)
# this variable is not like the others
C = rnorm(n) + 5*sin(seq(1,n))
D = seq(1,n) + rnorm(n)
X = data.table(A, B, C, D)
# transform the data for ggplot
long_data = melt(X)
long_data[, t := seq_len(.N), by = variable]
g1 = ggplot(data=long_data, aes(x=t, y=value, colour=variable)) + geom_line() + labs(title="Evolution of Parameters over Time")
ggplotly(g1)
#break the data into training/testing data
train_ind = seq(1,train_size)
train_X = X[train_ind,]
test_X = X[-train_ind,]
ss <- AddLocalLinearTrend(list(), train_X$D)
model4 <- bsts(D ~ .,
state.specification = ss,
niter = 1000,
data = train_X,
expected.model.size = 3)
plot(model4, "components")
# observe that the model can tell that C isn't strongly related to D, but A and B are.
plot(model4, "coef")
pred4 <- predict(model4, newdata = test_X, horizon = 24)
# plot predictions, vs actual (in red)
plot(pred4, ylim=c(0,50))
lines((max(train_ind)+1):nrow(X), test_X$D, col="red") | Incorporating Prior Information Into Time Series Prediction | I think a good way to do this is via Bayesian Structural Time Series (BSTS). I found out about this approach via these 2 sites (1, 2). I would still be interested in other approaches.
Here is the exa | Incorporating Prior Information Into Time Series Prediction
I think a good way to do this is via Bayesian Structural Time Series (BSTS). I found out about this approach via these 2 sites (1, 2). I would still be interested in other approaches.
Here is the example done with the bsts package in R. I use a time series component and a regression component. The regression component incorporates the prior information. A spike-and-slab prior is used on the regression component.
Below is a plot of the time-series features. I intentionally made C unlike the other features, and the response variable (D), in order to test the feature selection ability of BSTS.
The plot below shows that the BSTS model correctly observes that feature C isn't useful for predicting D.
Actual (Blue) vs Predicted (Red)
Code is below:
library(ggplot2)
library(bsts)
library(plotly)      # ggplotly() lives in the plotly package
library(data.table)
# generate some data
set.seed(1)
n = 20
train_size = 10
A = seq(1,n) + rnorm(n)
B = seq(1,n) + rnorm(n)
# this variable is not like the others
C = rnorm(n) + 5*sin(seq(1,n))
D = seq(1,n) + rnorm(n)
X = data.table(A, B, C, D)
# transform the data for ggplot
long_data = melt(X)
long_data[, t := seq_len(.N), by = variable]
g1 = ggplot(data=long_data, aes(x=t, y=value, colour=variable)) + geom_line() + labs(title="Evolution of Parameters over Time")
ggplotly(g1)
#break the data into training/testing data
train_ind = seq(1,train_size)
train_X = X[train_ind,]
test_X = X[-train_ind,]
ss <- AddLocalLinearTrend(list(), train_X$D)
model4 <- bsts(D ~ .,
state.specification = ss,
niter = 1000,
data = train_X,
expected.model.size = 3)
plot(model4, "components")
# observe that the model can tell that C isn't strongly related to D, but A and B are.
plot(model4, "coef")
pred4 <- predict(model4, newdata = test_X, horizon = 24)
# plot predictions, vs actual (in red)
plot(pred4, ylim=c(0,50))
lines((max(train_ind)+1):nrow(X), test_X$D, col="red") | Incorporating Prior Information Into Time Series Prediction
I think a good way to do this is via Bayesian Structural Time Series (BSTS). I found out about this approach via these 2 sites (1, 2). I would still be interested in other approaches.
Here is the exa |
53,575 | How does R's "poisson.test" function work, mathematically? | When you say "another Poisson rate" ... if that other Poisson rate is derived from data then you are comparing data with data.
I'll assume you mean against some prespecified/theoretical rate (i.e. that you're performing a one-sample test).
You didn't state whether you were doing a one-tailed or two-tailed test, so I'll discuss both.
What it's doing is using the Poisson distribution with the specified rate you're testing against, and then computing the tail area "at least as extreme" (in the direction of the alternative) as the sample you got.
e.g. consider a one-tailed test of $H_0: \mu \leq 8.5$ vs $H_1: \mu > 8.5$, with an observed Poisson count of 14. Then we can compute that the upper tail at and above 14 has 0.0514 of the probability - e.g.:
> 1-ppois(13,8.5)
[1] 0.05141111
(I realize this is not the best way to compute this in R - we should use the lower.tail argument instead - but wanted to make it more transparent to readers less familiar with R; by comparison ppois(13,8.5,lower.tail=FALSE) looks like an off-by-one error)
This calculation agrees with poisson.test:
> poisson.test(14,r=8.5,alt="greater")
Exact Poisson test
data: 14 time base: 1
number of events = 14, time base = 1, p-value = 0.05141
alternative hypothesis: true event rate is greater than 8.5
95 percent confidence interval:
8.463938 Inf
sample estimates:
event rate
14
With a two-tailed test it sums those values with equal or lower probability (i.e. as with typical Fisher-style exact tests, it uses the likelihood under the null to identify what's "more extreme"):
The probability of a 14 with Poisson mean 8.5 is about 0.024 and in the left tail the largest x-value with probability no larger occurs at 3, so the probabilities of 0,1,2 and 3 are added in:
> 1-ppois(13,8.5)+ppois(3,8.5)
[1] 0.08152019
check against the output:
> poisson.test(14,r=8.5)
Exact Poisson test
data: 14 time base: 1
number of events = 14, time base = 1, p-value = 0.08152
alternative hypothesis: true event rate is not equal to 8.5
95 percent confidence interval:
7.65393 23.48962
sample estimates:
event rate
14
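The two-sided rule can also be reproduced directly with a brute-force sketch (illustrative only; the actual poisson.test source is organised differently):
exact_pois_p <- function(x, r) {
  d       <- dpois(x, r)
  support <- 0:ceiling(max(x, r) + 20 * sqrt(r) + 20)   # wide enough to cover the upper tail
  p       <- dpois(support, r)
  sum(p[p <= d * (1 + 1e-7)])                           # add up every value no more likely than x
}
exact_pois_p(14, 8.5)   # ~0.08152, matching poisson.test(14, r = 8.5)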
R code is publicly available -- you can check the code; in this case it bears out what I said above. | How does R's "poisson.test" function work, mathematically? | When you say "another Poisson rate" ... if that other Poisson rate is derived from data then you are comparing data with data.
I'll assume you mean against some prespecified/theoretical rate (i.e. tha | How does R's "poisson.test" function work, mathematically?
When you say "another Poisson rate" ... if that other Poisson rate is derived from data then you are comparing data with data.
I'll assume you mean against some prespecified/theoretical rate (i.e. that you're performing a one-sample test).
You didn't state whether you were doing a one-tailed or two-tailed test. I'll discuss both
What it's doing is using the Poisson distribution with the specified rate you're testing against, and then computing the tail area "at least as extreme" (in the direction of the alternative) as the sample you got.
e.g. consider a one-tailed test; $H_0: \mu \leq 8.5$ vs $H_1: \mu > 8.5$ and the observed Poisson count of 14. Then we can compute that the upper tail at and above 14 has 0.0514 of the probability - e.g.:
> 1-ppois(13,8.5)
[1] 0.05141111
(I realize this is not the best way to compute this in R - we should use the lower.tail argument instead - but wanted to make it more transparent to readers less familiar with R; by comparison ppois(13,8.5,lower.tail=FALSE) looks like an off-by-one error)
This calculation agrees with poisson.test:
> poisson.test(14,r=8.5,alt="greater")
Exact Poisson test
data: 14 time base: 1
number of events = 14, time base = 1, p-value = 0.05141
alternative hypothesis: true event rate is greater than 8.5
95 percent confidence interval:
8.463938 Inf
sample estimates:
event rate
14
With a two-tailed test it sums those values with equal or lower probability (i.e. as with typical Fisher-style exact tests, it uses the likelihood under the null to identify what's "more extreme"):
The probability of a 14 with Poisson mean 8.5 is about 0.024 and in the left tail the largest x-value with probability no larger occurs at 3, so the probabilities of 0,1,2 and 3 are added in:
> 1-ppois(13,8.5)+ppois(3,8.5)
[1] 0.08152019
check against the output:
> poisson.test(14,r=8.5)
Exact Poisson test
data: 14 time base: 1
number of events = 14, time base = 1, p-value = 0.08152
alternative hypothesis: true event rate is not equal to 8.5
95 percent confidence interval:
7.65393 23.48962
sample estimates:
event rate
14
R code is publicly available -- you can check the code; in this case it bears out what I said above. | How does R's "poisson.test" function work, mathematically?
When you say "another Poisson rate" ... if that other Poisson rate is derived from data then you are comparing data with data.
I'll assume you mean against some prespecified/theoretical rate (i.e. tha |
53,576 | How does R's "poisson.test" function work, mathematically? | Glen's answer notes that you can check the code for this function, but I'm not sure if you know how to do this, so I'll augment his answer by showing you how. To check the code, just load the relevant library and type in the function name without any arguments:
library(stats)
poisson.test
function (x, T = 1, r = 1, alternative = c("two.sided", "less",
"greater"), conf.level = 0.95)
{
...some code here...
PVAL <- ...some code...
...more code here...
structure(list(statistic = x, parameter = T, p.value = PVAL,
conf.int = CINT, estimate = ESTIMATE, null.value = r,
alternative = alternative, method = "Exact Poisson test",
data.name = DNAME), class = "htest")
}
}
<bytecode: 0x0000000019efa180>
<environment: namespace:stats>
You will see from the code that the poisson.test function creates a htest object (a list that is classed as a hypothesis test) containing calculations for the test statistic, p-value, and confidence interval. The code is quite long, but a lot of it can be ignored. The parts of interest are the code to calculate the test statistic and p-value, which are about 12-15 lines of code each. You might be able to walk through it and see how each of these objects is calculated, which will tell you the mathematics they are using. This will augment Glen's answer, which confirms the output of the test in a particular case. | How does R's "poisson.test" function work, mathematically? | Glen's answer notes that you can check the code for this function, but I'm not sure if you know how to do this, so I'll augment his answer by showing you how. To check the code, just load the relevan | How does R's "poisson.test" function work, mathematically?
Glen's answer notes that you can check the code for this function, but I'm not sure if you know how to do this, so I'll augment his answer by showing you how. To check the code, just load the relevant library and type in the function name without any arguments:
library(stats)
poisson.test
function (x, T = 1, r = 1, alternative = c("two.sided", "less",
"greater"), conf.level = 0.95)
{
...some code here...
PVAL <- ...some code...
...more code here...
structure(list(statistic = x, parameter = T, p.value = PVAL,
conf.int = CINT, estimate = ESTIMATE, null.value = r,
alternative = alternative, method = "Exact Poisson test",
data.name = DNAME), class = "htest")
}
}
<bytecode: 0x0000000019efa180>
<environment: namespace:stats>
You will see from the code that the poisson.test function creates a htest object (a list that is classed as a hypothesis test) containing calculations for the test statistic, p-value, and confidence interval. The code is quite long, but a lot of it can be ignored. The parts of interest are the code to calculate the test statistic and p-value, which are about 12-15 lines of code each. You might be able to walk through it and see how each of these objects is calculated, which will tell you the mathematics they are using. This will augment Glen's answer, which confirms the output of the test in a particular case. | How does R's "poisson.test" function work, mathematically?
Glen's answer notes that you can check the code for this function, but I'm not sure if you know how to do this, so I'll augment his answer by showing you how. To check the code, just load the relevan |
53,577 | Possible that one model is better than two? | Here's a perspective: the two model approach is more constrained, hence is always going to result in an inferior model. Consider the 2m (two-model) model - it looks like:
$$ f_{2m}(\mathbf{x}) = 1.5 (\mathbf{c_1} \cdot \mathbf{x}^T) + 1.0 (\mathbf{c_2} \cdot \mathbf{x}^T)$$
where $\mathbf{c}_i$ were trained in separate models. We can rewrite this as a 1m (one-model):
$$ f_{1m}(\mathbf{x}) = \mathbf{c} \cdot \mathbf{x}^T $$
such that
$$ \mathbf{c} = 1.5\mathbf{c_1} + 1.0\mathbf{c_2}$$
There's no reason to believe that $\mathbf{c}$ is the minimum of the least squares problem
$$ \min_{\mathbf{b}} \; (\mathbf{y} - \mathbf{b} \cdot \mathbf{X})^2$$
however, the global 1m model is the solution of that minimization problem. In fact, if you keep using linear models, you'll never beat 1m's in-sample $R^2$ - it's an upper bound.
In english: yes, you have given the model more information, but that doesn't mean that the solution is optimal. In a system with noise, and the guarantee of model misspecification, I think you'll always do worse than the global model. | Possible that one model is better than two? | Here's a perspective: the two model approach is more constrained, hence is always going to result in an inferior model. Consider the 2m (two-model) model - it looks like:
$$ f_{2m}(\mathbf{x}) = 1.5 ( | Possible that one model is better than two?
Here's a perspective: the two model approach is more constrained, hence is always going to result in an inferior model. Consider the 2m (two-model) model - it looks like:
$$ f_{2m}(\mathbf{x}) = 1.5 (\mathbf{c_1} \cdot \mathbf{x}^T) + 1.0 (\mathbf{c_2} \cdot \mathbf{x}^T)$$
where $\mathbf{c}_i$ were trained in separate models. We can rewrite this as a 1m (one-model):
$$ f_{1m}(\mathbf{x}) = \mathbf{c} \cdot \mathbf{x}^T $$
such that
$$ \mathbf{c} = 1.5\mathbf{c_1} + 1.0\mathbf{c_2}$$
There's no reason to believe that $\mathbf{c}$ is the minimum of the least squares problem
$$ \min_{\mathbf{b}} \; (\mathbf{y} - \mathbf{b} \cdot \mathbf{X})^2$$
however, the global 1m model is the solution of that minimization problem. In fact, if you keep using linear models, you'll never beat 1m's $R^2$ - it's an upper bound.
In english: yes, you have given the model more information, but that doesn't mean that the solution is optimal. In a system with noise, and the guarantee of model misspecification, I think you'll always do worse than the global model. | Possible that one model is better than two?
Here's a perspective: the two model approach is more constrained, hence is always going to result in an inferior model. Consider the 2m (two-model) model - it looks like:
$$ f_{2m}(\mathbf{x}) = 1.5 ( |
53,578 | Possible that one model is better than two? | Impossible that one predictive model is better than two?
Rather than getting into the weeds on your specific models, let's just step back and view this question in a more general setting. If we consider an arbitrary series of observable values, then it is possible that a model could give a perfect prediction of those values, and it is possible that a model could give terrible predictions. That is, it is possible for one model to be right and the other to be wrong. Now, if we combine these two models by some aggregation method, the only contribution of the second model is to pollute the first model, and introduce error. Thus, it is clearly possible for one predictive model to be better than two.
Now, getting to your actual model, what is happening here is that you have separated your predictions for the points scored and assists for each player, and then you have aggregated them post-hoc. It is unclear exactly what you have done to predict these. You say you have used regression for the predictions, but you have not specified any explanatory variables, and it is also unclear if you even have multiple data points for each player. In any case, by modelling each variable separately, this implicitly treats these two things as if they are statistically independent, when they are probably related. | Possible that one model is better than two? | Impossible that one predictive model is better than two?
Rather than getting into the weeds on your specific models, let's just step back and view this question in a more general setting. If we cons | Possible that one model is better than two?
Impossible that one predictive model is better than two?
Rather than getting into the weeds on your specific models, let's just step back and view this question in a more general setting. If we consider an arbitrary series of observable values, then it is possible that a model could give a perfect prediction of those values, and it is possible that a model could give terrible predictions. That is, it is possible for one model to be right and the other to be wrong. Now, if we combine these two models by some aggregation method, the only contribution of the second model is to pollute the first model, and introduce error. Thus, it is clearly possible for one predictive model to be better than two.
Now, getting to your actual model, what is happening here is that you have separated your predictions for the points scored and assists for each player, and then you have aggregated them post-hoc. It is unclear exactly what you have done to predict these. You say you have used regression for the predictions, but you have not specified any explanatory variables, and it is also unclear if you even have multiple data points for each player. In any case, by modelling each variable separately, this implicitly treats these two things as if they are statistically independent, when they are probably related. | Possible that one model is better than two?
Impossible that one predictive model is better than two?
Rather than getting into the weeds on your specific models, let's just step back and view this question in a more general setting. If we cons |
53,579 | Possible that one model is better than two? | As the previous answers have indicated, simply adding a model that is wrong can decrease performance. However, there are clever ways around this issue.
Generalized stacking algorithms (the super learner is one example) are an alternative strategy for aggregating the results of multiple models. They have the advantage of discarding wrong (or poorly performing) models, so that only models with good information are retained. Essentially, the data are broken into $k$ folds; each candidate model is fit on each training split and used to predict the corresponding held-out fold. These held-out predictions are then regressed against the outcome to generate weights, with better-performing models having a greater weight in the final predictions. Models that don't improve predictions have weights near 0. A large number of different algorithms can be used (it increases run-time though). This open-access paper gives further explanation: Rose 2013
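A hand-rolled sketch of that idea in R (illustrative only; packages such as SuperLearner automate it, typically with non-negative weights that sum to one):
set.seed(1)
n <- 300; x <- runif(n, -2, 2); y <- sin(2 * x) + 0.5 * x + rnorm(n, 0, 0.3)
folds <- sample(rep(1:5, length.out = n))
cv_pred <- matrix(NA, n, 2, dimnames = list(NULL, c("linear", "poly3")))
for (k in 1:5) {
  tr <- folds != k
  cv_pred[!tr, "linear"] <- predict(lm(y ~ x, subset = tr), newdata = data.frame(x = x[!tr]))
  cv_pred[!tr, "poly3"]  <- predict(lm(y ~ poly(x, 3), subset = tr), newdata = data.frame(x = x[!tr]))
}
coef(lm(y ~ cv_pred - 1))   # meta-regression on the held-out predictions gives the model weights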
For a Python implementation, I use SuPyLearner (not on PyPI but you can download from GitHub) | Possible that one model is better than two? | As the previous answers have indicated, simply adding a model that is wrong can decrease performance. However, there are clever ways around this issue.
Generalized stacking algorithms (super learner i | Possible that one model is better than two?
As the previous answers have indicated, simply adding a model that is wrong can decrease performance. However, there are clever ways around this issue.
Generalized stacking algorithms (super learner is one example) is an alternative strategy to aggregating the results of multiple models. It has the advantage of discarding wrong (or poorly performing) models, so that only models with good information are retained. Essentially, it breaks the data into $k$ pieces and fits each model to each slice and predicts in a hold-out set. The results of each model are then regressed to generate weights, with better performing models having a greater weight in the final predictions. Models that don't improve predictions have weights near 0. A large number of different algorithms can be used (it increases run-time though). This open-access paper gives further explanation: Rose 2013
For a Python implementation, I use SuPyLearner (not on PyPI but you can download from GitHub) | Possible that one model is better than two?
As the previous answers have indicated, simply adding a model that is wrong can decrease performance. However, there are clever ways around this issue.
Generalized stacking algorithms (super learner i |
53,580 | How can I calibrate my point-by-point variances for Gaussian process regression? | Standard Gaussian process (GP) regression assumes constant noise variance, whereas it seems you want to allow it to vary. So, this is a heteroscedastic GP regression problem. Similar problems have been addressed in the literature (see references below). For example, Goldberg et al. (1998) treat the noise variance as an unknown function of the input, modeled with a second GP.
What distinguishes the problem here from those papers is that the noise variance here is a function of given confidence scores, rather than a direct function of the inputs. A principled way of attacking the problem is to simultaneously learn a function mapping confidence scores to noise variance, together with a GP regression model using those noise variances. Below, I'll describe one way to do this, using an empirical Bayes approach.
Model
Let $X=\{x_1, \dots, x_n\}$ denote the training inputs, with corresponding outputs in the vector $y \in \mathbb{R}^n$ and confidence scores in the vector $c = [c_1, \dots, c_n]^T$. Let's use the model:
$$y_i = f(x_i) + \epsilon_i$$
$$f \sim \mathcal{GP}(m, k)$$
$$\epsilon_i \sim \mathcal{N}(0, \sigma^2_i)$$
Outputs are related to inputs by an unknown, nonlinear function $f$. A GP prior is placed on $f$ with mean function $m$ and covariance function $k$ (with parameters $\theta$). I'll assume that the mean function is zero, as is common practice. But, an arbitrary mean function could be incorporated following the same steps as in standard GP regression. Each observed output $y_i$ is produced by adding Gaussian noise $\epsilon_i$ to the function output $f(x_i)$. Note that the noise is not identically distributed over data points; each has its own noise variance $\sigma^2_i$, which is assumed to be a function of the confidence score $c_i$:
$$\sigma^2_i = \exp g(c_i; \eta)$$
I'll call $g$ the 'confidence function', which is parameterized by some vector $\eta$. Given a confidence score, it outputs the log noise variance; this constrains the noise variance to be positive. If a good parametric form for $g$ is known, this can be used. Otherwise, let $g$ be a member of some flexible class of function approximators. For example, it could be a linear combination of basis functions, neural net, or spline (as in the example below). One could also constrain $g$ to be monotonically decreasing, so higher confidence scores always correspond to lower noise variance. Enforcing this assumption might make learning more efficient, but may or may not be necessary in practice.
Note that everything here is similar to standard GP regression. I've simply replaced the typical, constant noise variance with noise variance that's a function of the confidence score.
Learning
Fitting the model consists of finding values for the covariance function parameters $\theta$ and the confidence function parameters $\eta$. This can be done using empirical Bayes, which is a common strategy for GP regression. That is, maximize the (log) marginal likelihood of the observed outputs (a.k.a. the evidence):
$$\max_{\theta, \eta} \ \log p(y \mid X, \theta, \eta)$$
The marginal likelihood $p(y \mid X, \theta, \eta)$ is obtained by integrating over the noiseless function outputs, which are treated as latent variables since we can't directly observe them. I won't derive it here for space reasons. The steps are similar to the derivation for ordinary GP regression (see Rasmussen & Williams chapter 2), but with the noise variance modified as above. The marginal likelihood turns out to be Gaussian with mean zero and covariance matrix $C_y$:
$$\log p(y \mid X, \theta, \eta) =
-\frac{n}{2} \log(2 \pi)
- \frac{1}{2} \log \det(C_y)
- \frac{1}{2} y^T C_y^{-1} y$$
$$C_y = K(X,X) + \text{diag}(\sigma_1^2, \dots, \sigma_n^2)$$
I use the notation $K(\cdot, \cdot)$ to denote a matrix obtained by evaluating the covariance function for all pairs of elements in two sets of points. So entry $(i,j)$ of $K(A, B)$ is equal to $k(a_i, b_j)$. Also, recall that the noise variances are a function of the confidence scores, so:
$$\text{diag}(\sigma_1^2, \dots, \sigma_n^2)
= \text{diag} \Big( \exp g(c_1; \eta), \dots, \exp g(c_n; \eta) \Big)$$
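As a rough illustration of what this objective looks like in code, here is a minimal R sketch (with a squared-exponential kernel and, purely as a placeholder, a log-linear confidence function rather than the spline used in the example below):
sq_exp <- function(a, b, ell = 1, s2 = 1)
  s2 * exp(-outer(a, b, function(u, v) (u - v)^2) / (2 * ell^2))
log_marg_lik <- function(x, y, cs, theta, eta) {
  sig2 <- exp(eta[1] + eta[2] * cs)                        # sigma_i^2 = exp g(c_i; eta)
  Cy   <- sq_exp(x, x, theta[1], theta[2]) + diag(sig2)    # K(X,X) + diag(sigma_1^2, ..., sigma_n^2)
  R    <- chol(Cy)
  -0.5 * sum(backsolve(R, y, transpose = TRUE)^2) - sum(log(diag(R))) - length(y) / 2 * log(2 * pi)
}
# maximizing this over c(theta, eta), e.g. with optim(), fits the kernel and the confidence function jointly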
Predictive distribution
The posterior predictive distribution is similar to standard GP regression (again, see Rasmussen & Williams for the steps involved in the derivation). Suppose we've fit the model to the training data. Given $\tilde{n}$ new input points in $\tilde{X} = \{\tilde{x}_1, \dots, \tilde{x}_{\tilde{n}}\}$, we'd like to predict the corresponding noiseless function outputs $\tilde{f} = [f(\tilde{x}_1), \dots, f(\tilde{x}_{\tilde{n}})]^T$. These have a joint Gaussian posterior distribution:
$$p(\tilde{f} \mid \tilde{X}, X, y, \theta, \eta) \ = \
\mathcal{N}(\tilde{f} \mid \mu_{\tilde{f}}, C_{\tilde{f}})$$
with mean and covariance matrix:
$$\mu_{\tilde{f}} = K(\tilde{X},X) C_y^{-1} y$$
$$C_{\tilde{f}} =
K(\tilde{X},\tilde{X}) - K(\tilde{X}, X) C_y^{-1} K(X, \tilde{X})$$
The new noisy outputs $\tilde{y} = [\tilde{y}_1, \dots, \tilde{y}_{\tilde{n}}]^T$ are assumed to be produced by adding Gaussian noise to the corresponding function outputs. So, they also have a joint Gaussian posterior distribution, with mean and covariance matrix:
$$\mu_\tilde{y} = \mu_{\tilde{f}}$$
$$C_{\tilde{y}} =
C_{\tilde{f}} + \text{diag}(\tilde{\sigma}^2_1, \dots, \tilde{\sigma}^2_{\tilde{n}})$$
Note that computing the posterior predictive distribution for $\tilde{y}$ requires confidence scores for the new points in order to compute the new noise variances $\tilde{\sigma}^2_i$. But, confidence scores aren't required for the predictive distribution of $\tilde{f}$.
Example
Data generation
I generated 500 data points with the true function $f(x) = \sin(2 \pi x)$ and noise variance increasing quadratically with $x$. I generated a confidence score for each data point (ranging from 0 to 100) such that noise variance was a decreasing cubic function of the confidence score. In this example, I chose the noise variance and confidence scores to vary with $x$ for ease of visualization. But, note that the approach described above only requires that the noise variance be a function of the confidence score; no dependence on $x$ is required. Here's the data:
Standard GP regression
Standard GP regression (using the squared exponential covariance function) can capture the conditional mean well. But, the predictive distribution isn't a good fit to the data because the assumption of constant noise variance doesn't hold.
New model
I then modeled the data as described above, using the squared exponential covariance function. For the confidence function, I used a piecewise linear function (spline) with 10 fixed knot points spread evenly over the range of the confidence scores. The parameter vector $\eta$ of the confidence function contains the log noise variances to output at the knot points. Adjusting the parameters simply moves the confidence function up and down at these points. Given any confidence score, the log noise variance is obtained by linearly interpolating between the knot points. When optimizing the marginal likelihood, I initialized $\eta$ such that the noise variance would be constant, and equal to 1/10th of the overall sample variance of the training outputs.
The model captures both the conditional mean and variance of the noisy outputs, and the estimated confidence function approximates the true confidence function reasonably well:
References
Rasmussen & Williams (2006). Gaussian processes for machine learning, chapter 2.
Goldberg, Williams & Bishop (1998). Regression with input-dependent noise: A Gaussian process treatment.
Le, Smola, Canu (2005). Heteroscedastic Gaussian process regression. | How can I calibrate my point-by-point variances for Gaussian process regression? | Standard Gaussian process (GP) regression assumes constant noise variance, whereas it seems you want to allow it to vary. So, this is a heteroscedastic GP regression problem. Similar problems have bee | How can I calibrate my point-by-point variances for Gaussian process regression?
Standard Gaussian process (GP) regression assumes constant noise variance, whereas it seems you want to allow it to vary. So, this is a heteroscedastic GP regression problem. Similar problems have been addressed in the literature (see references below). For example, Goldberg et al. (1998) treat the noise variance as an unknown function of the input, modeled with a second GP.
What distinguishes the problem here from those papers is that the noise variance here is a function of given confidence scores, rather than a direct function of the inputs. A principled way of attacking the problem is to simultaneously learn a function mapping confidence scores to noise variance, together with a GP regression model using those noise variances. Below, I'll describe one way to do this, using an empirical Bayes approach.
Model
Let $X=\{x_1, \dots, x_n\}$ denote the training inputs, with corresponding outputs in the vector $y \in \mathbb{R}^n$ and confidence scores in the vector $c = [c_1, \dots, c_n]^T$. Let's use the model:
$$y_i = f(x_i) + \epsilon_i$$
$$f \sim \mathcal{GP}(m, k)$$
$$\epsilon_i \sim \mathcal{N}(0, \sigma^2_i)$$
Outputs are related to inputs by an unknown, nonlinear function $f$. A GP prior is placed on $f$ with mean function $m$ and covariance function $k$ (with parameters $\theta$). I'll assume that the mean function is zero, as is common practice. But, an arbitrary mean function could be incorporated following the same steps as in standard GP regression. Each observed output $y_i$ is produced by adding Gaussian noise $\epsilon_i$ to the function output $f(x_i)$. Note that the noise is not identically distributed over data points; each has its own noise variance $\sigma^2_i$, which is assumed to be a function of the confidence score $c_i$:
$$\sigma^2_i = \exp g(c_i; \eta)$$
I'll call $g$ the 'confidence function', which is parameterized by some vector $\eta$. Given a confidence score, it outputs the log noise variance; this constrains the noise variance to be positive. If a good parametric form for $g$ is known, this can be used. Otherwise, let $g$ be a member of some flexible class of function approximators. For example, it could be a linear combination of basis functions, neural net, or spline (as in the example below). One could also constrain $g$ to be monotonically decreasing, so higher confidence scores always correspond to lower noise variance. Enforcing this assumption might make learning more efficient, but may or may not be necessary in practice.
Note that everything here is similar to standard GP regression. I've simply replaced the typical, constant noise variance with noise variance that's a function of the confidence score.
Learning
Fitting the model consists of finding values for the covariance function parameters $\theta$ and the confidence function parameters $\eta$. This can be done using empirical Bayes, which is a common strategy for GP regression. That is, maximize the (log) marginal likelihood of the observed outputs (a.k.a. the evidence):
$$\max_{\theta, \eta} \ \log p(y \mid X, \theta, \eta)$$
The marginal likelihood $p(y \mid X, \theta, \eta)$ is obtained by integrating over the noiseless function outputs, which are treated as latent variables since we can't directly observe them. I won't derive it here for space reasons. The steps are similar to the derivation for ordinary GP regression (see Rasmussen & Williams chapter 2), but with the noise variance modified as above. The marginal likelihood turns out to be Gaussian with mean zero and covariance matrix $C_y$:
$$\log p(y \mid X, \theta, \eta) =
-\frac{n}{2} \log(2 \pi)
- \frac{1}{2} \log \det(\Sigma_y)
- \frac{1}{2} y^T \Sigma_y^{-1} y$$
$$C_y = K(X,X) + \text{diag}(\sigma_1^2, \dots, \sigma_n^2)$$
I use the notation $K(\cdot, \cdot)$ to denote a matrix obtained by evaluating the covariance function for all pairs of elements in two sets of points. So entry $(i,j)$ of $K(A, B)$ is equal to $k(a_i, b_j)$. Also, recall that the noise variances are a function of the confidence scores, so:
$$\text{diag}(\sigma_1^2, \dots, \sigma_n^2)
= \text{diag} \Big( \exp g(c_1; \eta), \dots, \exp g(c_n; \eta) \Big)$$
Predictive distribution
The posterior predictive distribution is similar to standard GP regression (again, see Rasmussen & Williams for the steps involved in the derivation). Suppose we've fit the model to the training data. Given $\tilde{n}$ new input points in $\tilde{X} = \{\tilde{x}_1, \dots, \tilde{x}_{\tilde{n}}\}$, we'd like to predict the corresponding noiseless function outputs $\tilde{f} = [f(\tilde{x_1}), \dots, f(\tilde{x_n})]^T$. These have a joint Gaussian posterior distribution:
$$p(\tilde{f} \mid \tilde{X}, X, y, \theta, \eta) \ = \
\mathcal{N}(\tilde{f} \mid \mu_{\tilde{f}}, C_{\tilde{f}})$$
with mean and covariance matrix:
$$\mu_{\tilde{f}} = K(\tilde{X},X) C_y^{-1} y$$
$$C_{\tilde{f}} =
K(\tilde{X},\tilde{X}) - K(\tilde{X}, X) C_y^{-1} K(X, \tilde{X})$$
The new noisy outputs $\tilde{y} = [\tilde{y}_1, \dots, \tilde{y}_{\tilde{n}}]^T$ are assumed to be produced by adding Gaussian noise to the corresponding function outputs. So, they also have a joint Gaussian posterior distribution, with mean and covariance matrix:
$$\mu_\tilde{y} = \mu_{\tilde{f}}$$
$$C_{\tilde{y}} =
C_{\tilde{f}} + \text{diag}(\tilde{\sigma}^2_1, \dots, \tilde{\sigma}^2_{\tilde{n}})$$
Note that computing the posterior predictive distribution for $\tilde{y}$ requires confidence scores for the new points in order to compute the new noise variances $\tilde{\sigma}^2_i$. But, confidence scores aren't required for the predictive distribution of $\tilde{f}$.
Example
Data generation
I generated 500 data points with the true function $f(x) = \sin(2 \pi x)$ and noise variance increasing quadratically with $x$. I generated a confidence score for each data point (ranging from 0 to 100) such that noise variance was a decreasing cubic function of the confidence score. In this example, I chose the noise variance and confidence scores to vary with $x$ for ease of visualization. But, note that the approach described above only requires that the noise variance be a function of the confidence score; no dependence on $x$ is required. Here's the data:
Standard GP regression
Standard GP regression (using the squared exponential covariance function) can capture the conditional mean well. But, the predictive distribution isn't a good fit to the data because the assumption of constant noise variance doesn't hold.
New model
I then modeled the data as described above, using the squared exponential covariance function. For the confidence function, I used a piecewise linear function (spline) with 10 fixed knot points spread evenly over the range of the confidence scores. The parameter vector $\eta$ of the confidence function contains the log noise variances to output at the knot points. Adjusting the parameters simply moves the confidence function up and down at these points. Given any confidence score, the log noise variance is obtained by linearly interpolating between the knot points. When optimizing the marginal likelihood, I initialized $\eta$ such that the noise variance would be constant, and equal to 1/10th of the overall sample variance of the training outputs.
The model captures both the conditional mean and variance of the noisy outputs, and the estimated confidence function approximates the true confidence function reasonably well:
References
Rasmussen & Williams (2006). Gaussian processes for machine learning, chapter 2.
Goldberg & Williams (1998). Regression with input-dependent noise: A Gaussian process treatment.
Le, Smola, Canu (2005). Heteroscedastic Gaussian process regression. | How can I calibrate my point-by-point variances for Gaussian process regression?
Standard Gaussian process (GP) regression assumes constant noise variance, whereas it seems you want to allow it to vary. So, this is a heteroscedastic GP regression problem. Similar problems have bee |
53,581 | How can I calibrate my point-by-point variances for Gaussian process regression? | In the end, although I appreciated the answer from user20160, I found it impractical to implement. Instead, I went ahead with the idea I mentioned in my question, and problems didn't materialize. Specifically:
I began with the GP model I already had, and a roughly plausible mapping of confidence scores to variances.
I chose 5000 random subsets of the data, each representing consecutive samples from one instrument, over a representative period of time.
For each data-slice, 75% of the points were chosen (using sklearn train_test_split) as a "training set" and used to fit the GP model (with frozen kernel), and the remaining 25% were a left-out "validation" set.
For each point in the left-out set, the GP prediction was run for the same time coordinate, and the difference between the measurement value and the predicted mean was recorded, along with the point's confidence score.
The points were bucketed by confidence score, and the RMSE for each bucket calculated and plotted. These lined up very nicely, so I used weighted least squares to fit a line to them and plugged that in as my new confidence function.
Then I iterated, re-optimizing the kernel using the new confidence function, and re-fitting the confidence function using the new kernel. This didn't diverge, and in fact converged quickly, with no significant changes after the first two iterations.
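For concreteness, the bucketing-and-line-fitting step above looks roughly like this in R (an illustrative sketch, not the code actually used; it assumes the hold-out residuals res and confidence scores cs have already been collected):
calibrate_confidence <- function(res, cs, n_bins = 10) {
  bins <- cut(cs, breaks = n_bins)
  rmse <- tapply(res, bins, function(r) sqrt(mean(r^2)))   # RMSE per confidence bucket
  n    <- tapply(res, bins, length)
  mid  <- tapply(cs, bins, mean)
  fit  <- lm(rmse ~ mid, weights = n)                      # weighted least-squares line through the buckets
  function(score) pmax(predict(fit, data.frame(mid = score)), 0)^2   # confidence score -> noise variance
}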
It seems a little bit muddled-up to me what I'm actually measuring — when I take the difference between the GP predicted mean and the measurement value, neither one is the true value, and the error I get back is the sum of the measurement error (which I want) and the prediction error (which depends on the model accuracy, and the accuracy of the other data points in the set). Nonetheless, the results came out okay, which I guess means that my scenario is pretty well-behaved.
It occurs to me that I might want to ignore the y-intercept of the fitted confidence function, simply take its slope, and normalize it so that it gives a variance of 0 at the highest possible confidence score, and increases to the left, allowing for a white-noise term in my kernel to account for any excess variance, and allowing the optimizer to adjust this term. However, I didn't do this. | How can I calibrate my point-by-point variances for Gaussian process regression? | In the end, although I appreciated the answer from user20160, I found it impractical to implement. Instead, I went ahead with the idea I mentioned in my question, and problems didn't materialize. Spec | How can I calibrate my point-by-point variances for Gaussian process regression?
In the end, although I appreciated the answer from user20160, I found it impractical to implement. Instead, I went ahead with the idea I mentioned in my question, and problems didn't materialize. Specifically:
I began with the GP model I already had, and a roughly plausible mapping of confidence scores to variances.
I chose 5000 random subsets of the data, each representing consecutive samples from one instrument, over a representative period of time.
For each data-slice, 75% of the points were chosen (using sklearn train_test_split) as a "training set" and used to fit the GP model (with frozen kernel), and the remaining 25% were a left-out "validation" set.
For each point in the left-out set, the GP prediction was run for the same time coordinate, and the difference between the measurement value and the predicted mean was recorded, along with the point's confidence score.
The points were bucketed by confidence score, and the RMSE for each bucket calculated and plotted. These lined up very nicely, so I used weighted least squares to fit a line to them and plugged that in as my new confidence function.
Then I iterated, re-optimizing the kernel using the new confidence function, and re-fitting the confidence function using the new kernel. This didn't diverge, and in fact converged quickly, with no significant changes after the first two iterations.
It seems a little bit muddled-up to me what I'm actually measuring — when I take the difference between the GP predicted mean and the measurement value, neither one is the true value, and the error I get back is the sum of the measurement error (which I want) and the prediction error (which depends on the model accuracy, and the accuracy of the other data points in the set). Nonetheless, the results came out okay, which I guess means that my scenario is pretty well-behaved.
It occurs to me that I might want to ignore the y-intercept of the fitted confidence function, simply take its slope, and normalize it so that it gives a variance of 0 at the highest possible confidence score, and increases to the left, allowing for a white-noise term in my kernel to account for any excess variance, and allowing the optimizer to adjust this term. However, I didn't do this. | How can I calibrate my point-by-point variances for Gaussian process regression?
In the end, although I appreciated the answer from user20160, I found it impractical to implement. Instead, I went ahead with the idea I mentioned in my question, and problems didn't materialize. Spec |
53,582 | How can I calibrate my point-by-point variances for Gaussian process regression? | I assume by sigmas you mean the variances of the components of the kernel functions. If you want to choose them in the light of your CS, then that sounds Bayesian to me. How are you setting the length scales?
There is a literature on priors for Gaussian process parameters, eg Trangucci, Betancourt, Vehtari (2016). | How can I calibrate my point-by-point variances for Gaussian process regression? | I assume by sigmas you mean the variances of the components of the kernel functions. If you want to choose them in the light of your CS, then that sounds Bayesian to me. How are you setting the length | How can I calibrate my point-by-point variances for Gaussian process regression?
I assume by sigmas you mean the variances of the components of the kernel functions. If you want to choose them in the light of your CS, then that sounds Bayesian to me. How are you setting the length scales?
There is a literature on priors for Gaussian process parameters, eg Trangucci, Betancourt, Vehtari (2016). | How can I calibrate my point-by-point variances for Gaussian process regression?
I assume by sigmas you mean the variances of the components of the kernel functions. If you want to choose them in the light of your CS, then that sounds Bayesian to me. How are you setting the length |
53,583 | Mixed Effects, Doctors & Operations: predicting on new data containing previously unobserved levels, and updating our confidence accordingly | What you describe is the concept of dynamic predictions from mixed models.
Initially, when you have no information for a doctor you only use the fixed effects in the prediction, i.e., you put his/her ability level equal to the average ($\alpha_j = 0$).
But, as extra information is recorded you can update your predictions by calculating the random effect of the doctor. You get this from the posterior/conditional distribution of the random effects $\alpha_j$ given the observed data $y_j$.
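For a Gaussian (linear mixed model) outcome this update has a simple closed form, sketched below in R with made-up variance components; for non-Gaussian outcomes the posterior of $\alpha_j$ must be computed numerically, which is what GLMMadaptive does.
shrunken_alpha <- function(y_new, mu, sigma2_alpha, sigma2_eps) {
  n <- length(y_new)
  if (n == 0) return(0)     # no data on this doctor yet: predict with the fixed effects only
  n * sigma2_alpha / (n * sigma2_alpha + sigma2_eps) * (mean(y_new) - mu)
}
shrunken_alpha(numeric(0), mu = 10, sigma2_alpha = 4, sigma2_eps = 9)  # 0
shrunken_alpha(c(14, 15),  mu = 10, sigma2_alpha = 4, sigma2_eps = 9)  # pulled part-way towards the doctor's own mean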
You can find more information on these predictions in the Dynamic Predictions vignette of the GLMMadaptive package. | Mixed Effects, Doctors & Operations: predicting on new data containing previously unobserved levels, | What you describe is the concept of dynamic predictions from mixed models.
Initially, when you have no information for a doctor you only use the fixed effects in the prediction, i.e., you put his/her | Mixed Effects, Doctors & Operations: predicting on new data containing previously unobserved levels, and updating our confidence accordingly
What you describe is the concept of dynamic predictions from mixed models.
Initially, when you have no information for a doctor you only use the fixed effects in the prediction, i.e., you put his/her ability level equal to the average ($\alpha_j = 0$).
But, as extra information is recorded you can update your predictions by calculating the random effect of the doctor. You get this from the posterior/conditional distribution of the random effects $\alpha_j$ given the observed data $y_j$.
You can find more information on these predictions in the Dynamic Predictions vignette of the GLMMadaptive package. | Mixed Effects, Doctors & Operations: predicting on new data containing previously unobserved levels,
What you describe is the concept of dynamic predictions from mixed models.
Initially, when you have no information for a doctor you only use the fixed effects in the prediction, i.e., you put his/her |
53,584 | Mixed Effects, Doctors & Operations: predicting on new data containing previously unobserved levels, and updating our confidence accordingly | I think a Bayesian approach might be beneficial. When predicting for an unobserved category in a Bayesian mixed model, you generate posterior predictions by sampling the $\alpha_j$ from the fitted $N(0, \sigma^2_\alpha)$ (aside from sampling the fixed effects from the fitted posterior). This way, you will see high uncertainty for the new doctor - which is IMHO a good thing. The amount of uncertainty will be related to the fitted $\sigma^2_\alpha$, so when most doctors are similar, you will get lower uncertainty. After you have more data, you would refit the model and the uncertainty for the (now observed) doctor will gradually shrink. | Mixed Effects, Doctors & Operations: predicting on new data containing previously unobserved levels, | I think a Bayesian approach might be beneficial. When predicting for an unobserved category in a Bayesian mixed model, you generate posterior predictions by sampling the $\alpha_j$ from the fitted $N( | Mixed Effects, Doctors & Operations: predicting on new data containing previously unobserved levels, and updating our confidence accordingly
I think a Bayesian approach might be beneficial. When predicting for an unobserved category in a Bayesian mixed model, you generate posterior predictions by sampling the $\alpha_j$ from the fitted $N(0, \sigma^2_\alpha)$ (aside from sampling the fixed effects from the fitted posterior). This way, you will see high uncertainty for the new doctor - which is IMHO a good thing. The amount of uncertainty will be related to the fitted $\sigma^2_\alpha$, so when most doctors are similar, you will get lower uncertainty. After you have more data, you would refit the model and the uncertainty for the (now observed) doctor will gradually shrink. | Mixed Effects, Doctors & Operations: predicting on new data containing previously unobserved levels,
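A toy illustration of that mechanism (made-up posterior draws, and a logistic random-intercept model assumed purely for illustration):
draws   <- data.frame(beta0 = rnorm(4000, 1.2, 0.1), sigma_alpha = abs(rnorm(4000, 0.8, 0.1)))
eta_new <- draws$beta0 + rnorm(nrow(draws), 0, draws$sigma_alpha)   # a fresh alpha_j per posterior draw
quantile(plogis(eta_new), c(0.05, 0.5, 0.95))   # a wide interval, reflecting the unknown new doctor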
I think a Bayesian approach might be beneficial. When predicting for an unobserved category in a Bayesian mixed model, you generate posterior predictions by sampling the $\alpha_j$ from the fitted $N( |
53,585 | Can someone provide a brief explanation as to why reproducing kernel Hilbert space is so popular in machine learning? | The typical way to give some intuition for reproducing kernel spaces (and, in particular, the kernel trick), is the application area of support vector machines. The aim is to linearly separate two classes of points in $\mathbb R^n$, which works fine if they actually are linearly separable.
If they are not, the kernel trick provides (in certain situations) a possibility to transform the data points into another space, the so-called reproducing kernel Hilbert space or feature space, where the transformed points become linearly separable.
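A tiny illustration of that statement (synthetic data; the explicit feature map shown corresponds to the polynomial kernel $(x \cdot z)^2$):
set.seed(1)
theta <- runif(200, 0, 2 * pi); r <- rep(c(0.5, 2), each = 100)   # two concentric rings
X <- cbind(r * cos(theta), r * sin(theta)) + matrix(rnorm(400, 0, 0.05), ncol = 2)
y <- rep(c(-1, 1), each = 100)                                    # not linearly separable in R^2
Phi <- cbind(X[, 1]^2, sqrt(2) * X[, 1] * X[, 2], X[, 2]^2)       # explicit feature map phi(x)
range(Phi[y == -1, 1] + Phi[y == -1, 3])                          # inner ring: squared radius near 0.25
range(Phi[y ==  1, 1] + Phi[y ==  1, 3])                          # outer ring: squared radius near 4, so a plane separates them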
A good description can be found here.
Of course, this is just one of hundreds of applications of the kernel trick (or RKHS in general), but it is one which hopefully clarifies its power and justifies its usefulness. | Can someone provide a brief explanation as to why reproducing kernel Hilbert space is so popular in | The typical way to give some intuition for reproducing kernel spaces (and, in particular, the kernel trick), is the application area of support vector machines. The aim is to linearly separate two cla | Can someone provide a brief explanation as to why reproducing kernel Hilbert space is so popular in machine learning?
The typical way to give some intuition for reproducing kernel spaces (and, in particular, the kernel trick), is the application area of support vector machines. The aim is to linearly separate two classes of points in $\mathbb R^n$, which works fine if they actually are linearly separable.
If they are not, the kernel trick provides (in certain situations) a possibility to transform the data points into another space, the so-called reproducing kernel Hilbert space or feature space, where the transformed points become linearly separable.
A good description can be found here.
Of course, this is just one of hundreds of applications of the kernel trick (or RKHS in general), but it is one which hopefully clarifies its power and justifies its usefulness. | Can someone provide a brief explanation as to why reproducing kernel Hilbert space is so popular in
The typical way to give some intuition for reproducing kernel spaces (and, in particular, the kernel trick), is the application area of support vector machines. The aim is to linearly separate two cla |
53,586 | What does 'km' transform in cox.zph function mean? | km stands for Kaplan-Meier estimator.
$$\hat{S}(t) = \prod_{i: t_i \le t}\left(1-\frac{d_i}{n_i} \right)$$
with $t_{i}$ a time at which at least one event happened, $d_i$ the number of events (i.e., deaths) that happened at time $t_{i}$, and $n_{i}$ the number of individuals known to have survived (not yet had an event or been censored) up to time $t_{i}$.
Here is a quote from the paper Cox Proportional-Hazards Regression for Survival Data:
Tests and graphical diagnostics for proportional hazards may be based on the scaled Schoenfeld residuals; these can be obtained directly as residuals(model, "scaledsch"), where model is a coxph model object. The matrix returned by residuals has one column for each covariate in the model. More conveniently, the cox.zph function calculates tests of the proportional-hazards assumption for each covariate, by correlating the corresponding set of scaled Schoenfeld residuals with a suitable transformation of time [the default is based on the Kaplan-Meier estimate of the survival function, $K(t)$].
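In practice the transform is just an argument to cox.zph(); for example (standard survival-package calls, with the veteran data used purely as an arbitrary illustration):
library(survival)
fit <- coxph(Surv(time, status) ~ trt + karno, data = veteran)
cox.zph(fit, transform = "km")          # the default: based on the Kaplan-Meier estimate of survival
cox.zph(fit, transform = "identity")    # for comparison; can be dominated by extreme event times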
To understand why km was chosen as the default, Dr. Kevin E. Thorpe cited Dr. Therneau's reply in the R-news:
There are 2 reasons for making the KM the default:
Safety: The test for PH is essentially a least-squares fit of
line to a plot of f(time) vs residual. If the plot contains an
extreme oulier in x, then the test is basically worthless. This
sometimes happens with transform= identity or transform =log.
It doesn't with transform='KM'.
As a default value for naive users, I chose the safe course.
A secondary reason is efficiency. In DY Lin, JASA 1991
Dan-Yu argues that this is a "good" test statistic under various
assumptions about censoring. (His measure has the same score
statistics as the KM option).
But #1 is the big one.
Terry T. | What does 'km' transform in cox.zph function mean? | km stands for Kaplan-Meier estimator.
$$\hat{S}(t) = \prod_{i: t_i \le t}\left(1-\frac{d_i}{n_i} \right)$$
with $t_{i}$ a time when at least one event happened, $d_i$ the number of events (i.e., death | What does 'km' transform in cox.zph function mean?
km stands for Kaplan-Meier estimator.
$$\hat{S}(t) = \prod_{i: t_i \le t}\left(1-\frac{d_i}{n_i} \right)$$
with $t_{i}$ a time when at least one event happened, $d_i$ the number of events (i.e., deaths) that happened at time
$t_{i}$ and ${\displaystyle n_{i}}$ the individuals known to have survived (have not yet had an event or been censored) up to time
$t_{i}$.
Here is a quote from the paper Cox Proportional-Hazards Regression for Survival Data:
Tests and graphical diagnostics for proportional hazards may be based on the scaled Schoenfeld residuals; these can be obtained directly as residuals(model, "scaledsch"), where model is a coxph model object. The matrix returned by residuals has one column for each covariate in the model. More conveniently, the cox.zph function calculates tests of the proportional-hazards assumption for each covariate, by correlating the corresponding set of scaled Schoenfeld residuals with a suitable transformation of time [the default is based on the Kaplan-Meier estimate of the survival function, $K(t)$].
To explain why km was chosen as the default, Dr. Kevin E. Thorpe cited Dr. Therneau's reply on R-news:
There are 2 reasons for making the KM the default:
Safety: The test for PH is essentially a least-squares fit of
a line to a plot of f(time) vs residual. If the plot contains an
extreme outlier in x, then the test is basically worthless. This
sometimes happens with transform=identity or transform=log.
It doesn't with transform='KM'.
As a default value for naive users, I chose the safe course.
A secondary reason is efficiency. In DY Lin, JASA 1991
Dan-Yu argues that this is a "good" test statistic under various
assumptions about censoring. (His measure has the same score
statistics as the KM option).
But #1 is the big one.
Terry T. | What does 'km' transform in cox.zph function mean?
km stands for Kaplan-Meier estimator.
$$\hat{S}(t) = \prod_{i: t_i \le t}\left(1-\frac{d_i}{n_i} \right)$$
with $t_{i}$ a time when at least one event happened, $d_i$ the number of events (i.e., death |
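As a small usage sketch of what the answer above describes (my own addition; it assumes the survival package and its built-in lung data, which are not part of the original answer):
library(survival)
fit <- coxph(Surv(time, status) ~ age + sex, data = lung)
cox.zph(fit, transform = "km")    # the Kaplan-Meier time transform discussed above
cox.zph(fit, transform = "rank")  # for comparison: the rank transform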
53,587 | What does 'km' transform in cox.zph function mean? | Like the original poster, I also wondered what, exactly, is the transformation "based on the Kaplan-Meier estimate" doing? Tracking this down proved to be more difficult than you would expect, as pretty much every source I found used some variant of the "based on Kaplan-Meier" language without further explanation. I eventually resorted to figuring it out from the source code.
The transformation is computed in lines 77-91 of cox.zph.R. The relevant code is:
if (is.character(transform)) {
tname <- transform
ttimes <- switch(transform,
'identity'= times,
'rank' = rank(times),
'log' = log(times),
'km' = {
temp <- survfitKM(factor(rep(1L, nrow(y))),
y, se.fit=FALSE)
# A nuisance to do left continuous KM
indx <- findInterval(times, temp$time, left.open=TRUE)
1.0 - c(1, temp$surv)[indx+1]
},
stop("Unrecognized transform"))
}
What is happening in the 'km' branch is that the code is performing a Kaplan-Meier estimate $\hat{S}(t)$ of the survival function. The last two lines in the branch are implementing the transform $t \rightarrow 1-\hat{S}(t^-)$, using the left-continuous K-M estimate. In other words, since the survival function is monotonic, we can replace the time coordinate with the corresponding value of the (K-M estimate of the) survival function. This has the advantage of confining the x-values to the interval between 0 and 1 (vs. 0 and infinity for the raw time coordinate), so that observations at very large $t$ values don't have undue influence on the fit (as described in the R-news post quoted in the other answer).
Finally, in Applied Survival Analysis Using R, p. 99, the author does a comparison of the different transforms and notes:
The rank transformation yields a similar p-value to what we found with the "km" transformation.
Looking at the code above, the penultimate line of the "km" branch is looking up the intervals of the results of the K-M estimate corresponding to each time in the input. Since the intervals in the K-M estimate will be defined by the times in the input, in the absence of ties this lookup is just the rank of the times. So, we can view the KM transform as remapping equally spaced values from 1 to N into equally spaced values from 1 to $S(t_\max)$. Either way the linear fit to the residuals is going to be the same, which explains why the results of the two tests are nearly identical. | What does 'km' transform in cox.zph function mean? | Like the original poster, I also wondered what, exactly, is the transformation "based on the Kaplan-Meier estimate" doing? Tracking this down proved to be more difficult than you would expect, as pre | What does 'km' transform in cox.zph function mean?
Like the original poster, I also wondered what, exactly, is the transformation "based on the Kaplan-Meier estimate" doing? Tracking this down proved to be more difficult than you would expect, as pretty much every source I found used some variant of the "based on Kaplan-Meier" language without further explanation. I eventually resorted to figuring it out from the source code.
The transformation is computed in lines 77-91 of cox.zph.R. The relevant code is:
if (is.character(transform)) {
tname <- transform
ttimes <- switch(transform,
'identity'= times,
'rank' = rank(times),
'log' = log(times),
'km' = {
temp <- survfitKM(factor(rep(1L, nrow(y))),
y, se.fit=FALSE)
# A nuisance to do left continuous KM
indx <- findInterval(times, temp$time, left.open=TRUE)
1.0 - c(1, temp$surv)[indx+1]
},
stop("Unrecognized transform"))
}
What is happening in the 'km' branch is that the code is performing a Kaplan-Meier estimate $\hat{S}(t)$ of the survival function. The last two lines in the branch are implementing the transform $t \rightarrow 1-\hat{S}(t^-)$, using the left-continuous K-M estimate. In other words, since the survival function is monotonic, we can replace the time coordinate with the corresponding value of the (K-M estimate of the) survival function. This has the advantage of confining the x-values to the interval between 0 and 1 (vs. 0 and infinity for the raw time coordinate), so that observations at very large $t$ values don't have undue influence on the fit (as described in the R-news post quoted in the other answer).
Finally, in Applied Survival Analysis Using R, p. 99, the author does a comparison of the different transforms and notes:
The rank transformation yields a similar p-value to what we found with the "km" transformation.
Looking at the code above, the penultimate line of the "km" branch is looking up the intervals of the results of the K-M estimate corresponding to each time in the input. Since the intervals in the K-M estimate will be defined by the times in the input, in the absence of ties this lookup is just the rank of the times. So, we can view the KM transform as remapping equally spaced values from 1 to N into equally spaced values from 1 to $S(t_\max)$. Either way the linear fit to the residuals is going to be the same, which explains why the results of the two tests are nearly identical. | What does 'km' transform in cox.zph function mean?
Like the original poster, I also wondered what, exactly, is the transformation "based on the Kaplan-Meier estimate" doing? Tracking this down proved to be more difficult than you would expect, as pre |
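To make the remapping described above concrete, here is a hedged sketch (my own, assuming the survival package's built-in lung data) that mimics the quoted 'km' branch by hand and compares it with the rank transform:
library(survival)
km <- survfit(Surv(time, status) ~ 1, data = lung)   # Kaplan-Meier estimate
t <- lung$time
indx <- findInterval(t, km$time, left.open = TRUE)   # left-continuous lookup, as in the source
km.transform <- 1 - c(1, km$surv)[indx + 1]          # 1 - S(t-), the 'km' transform
plot(rank(t), km.transform)  # nearly a straight line: both transforms order the times identically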
53,588 | Spurious Regressions (Random Walk) | Consider what random walks are: each new value is just a small perturbation of the old value.
When an explanatory variable $x_t$ and a synchronous response $y_t$ are both random walks, the pair of points $(x_t,y_t)$ is a random walk in the plane with similar properties: each new point is a small random step (in a random direction) from the previous one. This 2D random walk maps the proverbial drunkard staggering in the dark near the lamppost: he will not cover all the ground around the lamppost for quite a while. More often than not, he will lurch off in some random direction and not get around to the other side of the lamppost until first careening off arbitrary distances into the night. As a result, if you plot just a small period of this walk, the points will tend to line up. This creates relations that appear to be "significant."
Ordinary Least Squares (and most other procedures) make the wrong determination of significance because (1) they assume the conditional responses are independent of each other--but obviously they are not--and (2) they do not account for the random (and serially correlated) variation in the explanatory variable. It's the first characteristic that really counts: it will also fool other regression methods designed to account for random variation in the $x_t.$
To illustrate this claim, here are maps of 20 separate such walks, each with 30 (Gaussian) steps. To show you the sequence, the starting point is marked with a black dot and subsequent points are drawn in lighter and lighter shades. On each one I have superimposed the OLS fit (a white line segment) and around it is a two-sided 95% confidence interval for that fit. In over half the cases (including all those in the top two rows), that confidence interval does not envelop a horizontal line, indicating the slope is "significantly" non-zero.
This behavior persists for arbitrarily long times (intuitively, because of the fractal nature of these walks: they are qualitatively similar at all scales). Here's the same kind of simulation run for 3000 steps instead of 30. The R code to generate the figures appears afterwards for your enjoyment. Most of it is dedicated to making the figures: the simulation itself is done in one line.
n.sim <- 20 # Number of iterations
n <- 30 # Length of each iteration
#
# The simulation.
# It produces an n by n.sim by 2 array; the last dimension separates 'x' from 'y'.
#
xy <- apply(array(rnorm(n.sim*2*n), c(n, n.sim, 2)), 2:3, cumsum)
#
# Post-processing to prepare for plotting.
#
library(data.table)
X <- data.table(x=c(xy[,,1]),
y=c(xy[,,2]),
Iteration=factor(rep(1:n.sim, each=n)),
Time=rep(1:n, n.sim))
P <- X[, (p=summary(lm(y ~ x))$coefficients[2,4]), by=Iteration]
setnames(P, "V1", "p-value")
Beta <- X[, (Beta=coef(lm(y ~ x))[2]), by=Iteration]
Beta[, Abs.beta := signif(abs(V1), 1)]
X <- P[Beta[X, on="Iteration"], on="Iteration"]
setorder(X, `p-value`, -Abs.beta, Iteration)
#
# Plotting.
#
library(ggplot2)
ggplot(X, aes(x, y, color=Time)) +
geom_point(show.legend=FALSE) +
geom_path(show.legend=FALSE, size=1.1) +
geom_smooth(method=lm, color="White") +
facet_wrap(~ `p-value` + Iteration, scales="free") +
theme(
strip.background = element_blank(), strip.text.x = element_blank(), axis.text=element_blank()
) | Spurious Regressions (Random Walk) | Consider what random walks are: each new value is just a small perturbation of the old value.
When an explanatory variable $x_t$ and a synchronous response $y_t$ are both random walks, the pair of poi | Spurious Regressions (Random Walk)
Consider what random walks are: each new value is just a small perturbation of the old value.
When an explanatory variable $x_t$ and a synchronous response $y_t$ are both random walks, the pair of points $(x_t,y_t)$ is a random walk in the plane with similar properties: each new point is a small random step (in a random direction) from the previous one. This 2D random walk maps the proverbial drunkard staggering in the dark near the lamppost: he will not cover all the ground around the lamppost for quite a while. More often than not, he will lurch off in some random direction and not get around to the other side of the lamppost until first careening off arbitrary distances into the night. As a result, if you plot just a small period of this walk, the points will tend to line up. This creates relations that appear to be "significant."
Ordinary Least Squares (and most other procedures) make the wrong determination of significance because (1) they assume the conditional responses are independent of each other--but obviously they are not--and (2) they do not account for the random (and serially correlated) variation in the explanatory variable. It's the first characteristic that really counts: it will also fool other regression methods designed to account for random variation in the $x_t.$
To illustrate this claim, here are maps of 20 separate such walks, each with 30 (Gaussian) steps. To show you the sequence, the starting point is marked with a black dot and subsequent points are drawn in lighter and lighter shades. On each one I have superimposed the OLS fit (a white line segment) and around it is a two-sided 95% confidence interval for that fit. In over half the cases (including all those in the top two rows), that confidence interval does not envelop a horizontal line, indicating the slope is "significantly" non-zero.
This behavior persists for arbitrarily long times (intuitively, because of the fractal nature of these walks: they are qualitatively similar at all scales). Here's the same kind of simulation run for 3000 steps instead of 30. The R code to generate the figures appears afterwards for your enjoyment. Most of it is dedicated to making the figures: the simulation itself is done in one line.
n.sim <- 20 # Number of iterations
n <- 30 # Length of each iteration
#
# The simulation.
# It produces an n by n.sim by 2 array; the last dimension separates 'x' from 'y'.
#
xy <- apply(array(rnorm(n.sim*2*n), c(n, n.sim, 2)), 2:3, cumsum)
#
# Post-processing to prepare for plotting.
#
library(data.table)
X <- data.table(x=c(xy[,,1]),
y=c(xy[,,2]),
Iteration=factor(rep(1:n.sim, each=n)),
Time=rep(1:n, n.sim))
P <- X[, (p=summary(lm(y ~ x))$coefficients[2,4]), by=Iteration]
setnames(P, "V1", "p-value")
Beta <- X[, (Beta=coef(lm(y ~ x))[2]), by=Iteration]
Beta[, Abs.beta := signif(abs(V1), 1)]
X <- P[Beta[X, on="Iteration"], on="Iteration"]
setorder(X, `p-value`, -Abs.beta, Iteration)
#
# Plotting.
#
library(ggplot2)
ggplot(X, aes(x, y, color=Time)) +
geom_point(show.legend=FALSE) +
geom_path(show.legend=FALSE, size=1.1) +
geom_smooth(method=lm, color="White") +
facet_wrap(~ `p-value` + Iteration, scales="free") +
theme(
strip.background = element_blank(), strip.text.x = element_blank(), axis.text=element_blank()
) | Spurious Regressions (Random Walk)
Consider what random walks are: each new value is just a small perturbation of the old value.
When an explanatory variable $x_t$ and a synchronous response $y_t$ are both random walks, the pair of poi |
53,589 | Spurious Regressions (Random Walk) | It's not always a problem when both the dependent and independent variables are random walks. If the two variables are cointegrated, then you can still run OLS. It won't be the best choice, but it will retain its good properties.
So why is it a problem sometimes? In a random walk the variance grows proportionally with time. Moreover, for any arbitrarily large value you choose, there is a non-zero probability that the random walk will cross it at some point. So, if you take two sufficiently long samples of independent random walk processes, it is very likely that "fake" trends will be detected in both of them even if their drifts are zero, simply because by the end of the series each is likely to be quite far from where it started - this looks like a trend (drift). Once you have two trends, there will be correlation, a spurious (fake) one. | Spurious Regressions (Random Walk) | It's not always a problem when both dependent and independent variables are random walk. If two variables are cointegrated then you still can run OLS. It won't be the best choice, but it will retain i | Spurious Regressions (Random Walk)
It's not always a problem when both the dependent and independent variables are random walks. If the two variables are cointegrated, then you can still run OLS. It won't be the best choice, but it will retain its good properties.
So why is it a problem sometimes? In a random walk the variance grows proportionally with time. Moreover, for any arbitrarily large value you choose, there is a non-zero probability that the random walk will cross it at some point. So, if you take two sufficiently long samples of independent random walk processes, it is very likely that "fake" trends will be detected in both of them even if their drifts are zero, simply because by the end of the series each is likely to be quite far from where it started - this looks like a trend (drift). Once you have two trends, there will be correlation, a spurious (fake) one. | Spurious Regressions (Random Walk)
It's not always a problem when both dependent and independent variables are random walk. If two variables are cointegrated then you still can run OLS. It won't be the best choice, but it will retain i |
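A minimal simulation sketch of the point above (my own illustration, not from the answer): regressing one random walk on an independent one rejects the zero-slope null far more often than the nominal 5%.
set.seed(1)
pvals <- replicate(1000, {
  x <- cumsum(rnorm(100))   # random walk
  y <- cumsum(rnorm(100))   # independent random walk
  summary(lm(y ~ x))$coefficients[2, 4]   # p-value of the slope
})
mean(pvals < 0.05)   # far above 0.05: spurious "significance"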
53,590 | Difference between Cross validation,GridSearchCV and does cross validation refer the train test split? | Cross Validation(CV) or K-Fold Cross Validation (K-Fold CV) is very similar to what you already know as train-test split. When people refer to cross validation they generally mean k-fold cross validation. In k-fold cross validation what you do is just that you have multiple(k) train-test sets instead of 1. This basically means that in a k-fold CV you will be training your model k-times and also testing it k-times. The purpose of doing this is that in a single train-test split, the test part of your data that you chose might be really easy to predict and your model will perform extremely well on it but not exactly so for your actual test sets which ultimately will not be a good model. Hence, you need to use a k-fold CV method. For example, in a 4 fold cross-validation, you will divide your training data into 4 equal parts. In the first step, you keep one part out of the 4 as the set you will test upon and train on the remaining 3. This one part you left out is called the validation set and the remaining 3 becomes your training set. You keep repeating this 4 times but you will be using a different part out of the 4 each time to test your model upon. K-fold cross validation can essentially help you combat overfitting too. There are different ways to do k-fold cross validation like stratified-k fold cv, time based k-fold cv, grouped k-fold cv etc which will depend on the nature of your data and the purpose of your predictions. You can google more about these methods. A method that people generally use is that, for each of the k-folds, they also make predictions for the actual test set and later on take the mean of all the k predictions to generate the final predictions.
Depiction of K-Fold Cross Validation (Image Source: Wikipedia)
GridSearchCV is a method used to tune the hyperparameters of your model (For Example, max_depth and max_features in RandomForest). In this method, you specify a grid of possible parameter values (For Example, max_depth = [5,6,7] and max_features = [10,11,12] etc.). GridSearch will now search for the best combination of these parameter values that you specified, using the k-fold cv approach that I mentioned above i.e. it will train the model using different combinations of the above-mentioned parameters and give you the best combination based on the best k-fold cv score obtained (For Example, Trial1: max_depth = 5 and max_features = 10 and K-fold CV Accuracy Score Obtained is 80%, Trial2: max_depth=5 and max_features=11 and K-fold CV Accuracy Score Obtained is 85% and so on...) GridSearch is known to be a very slow method of tuning your hyperparameters and you are much better off sticking with RandomizedSearchCV or the more advanced Bayesian Hyperparameter Optimization methods (you have libraries like skopt and hyperopt in python for this). You can google more about these methods too. | Difference between Cross validation,GridSearchCV and does cross validation refer the train test spli | Cross Validation(CV) or K-Fold Cross Validation (K-Fold CV) is very similar to what you already know as train-test split. When people refer to cross validation they generally mean k-fold cross validat | Difference between Cross validation,GridSearchCV and does cross validation refer the train test split?
Cross Validation(CV) or K-Fold Cross Validation (K-Fold CV) is very similar to what you already know as train-test split. When people refer to cross validation they generally mean k-fold cross validation. In k-fold cross validation what you do is just that you have multiple(k) train-test sets instead of 1. This basically means that in a k-fold CV you will be training your model k-times and also testing it k-times. The purpose of doing this is that in a single train-test split, the test part of your data that you chose might be really easy to predict and your model will perform extremely well on it but not exactly so for your actual test sets which ultimately will not be a good model. Hence, you need to use a k-fold CV method. For example, in a 4 fold cross-validation, you will divide your training data into 4 equal parts. In the first step, you keep one part out of the 4 as the set you will test upon and train on the remaining 3. This one part you left out is called the validation set and the remaining 3 becomes your training set. You keep repeating this 4 times but you will be using a different part out of the 4 each time to test your model upon. K-fold cross validation can essentially help you combat overfitting too. There are different ways to do k-fold cross validation like stratified-k fold cv, time based k-fold cv, grouped k-fold cv etc which will depend on the nature of your data and the purpose of your predictions. You can google more about these methods. A method that people generally use is that, for each of the k-folds, they also make predictions for the actual test set and later on take the mean of all the k predictions to generate the final predictions.
Depiction of K-Fold Cross Validation (Image Source: Wikipedia)
GridSearchCV is a method used to tune the hyperparameters of your model (For Example, max_depth and max_features in RandomForest). In this method, you specify a grid of possible parameter values (For Example, max_depth = [5,6,7] and max_features = [10,11,12] etc.). GridSearch will now search for the best combination of these parameter values that you specified, using the k-fold cv approach that I mentioned above i.e. it will train the model using different combinations of the above-mentioned parameters and give you the best combination based on the best k-fold cv score obtained (For Example, Trial1: max_depth = 5 and max_features = 10 and K-fold CV Accuracy Score Obtained is 80%, Trial2: max_depth=5 and max_features=11 and K-fold CV Accuracy Score Obtained is 85% and so on...) GridSearch is known to be a very slow method of tuning your hyperparameters and you are much better off sticking with RandomizedSearchCV or the more advanced Bayesian Hyperparameter Optimization methods (you have libraries like skopt and hyperopt in python for this). You can google more about these methods too. | Difference between Cross validation,GridSearchCV and does cross validation refer the train test spli
Cross Validation(CV) or K-Fold Cross Validation (K-Fold CV) is very similar to what you already know as train-test split. When people refer to cross validation they generally mean k-fold cross validat |
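The answer above is phrased in scikit-learn terms; the same idea can be sketched in R with the caret package (a hedged illustration of k-fold CV wrapped around a small tuning grid, using the built-in iris data and the randomForest backend, none of which come from the original answer):
library(caret)
set.seed(1)
ctrl <- trainControl(method = "cv", number = 5)   # 5-fold cross-validation
grid <- expand.grid(mtry = c(1, 2, 3))            # the grid being searched
fit <- train(Species ~ ., data = iris, method = "rf",
             trControl = ctrl, tuneGrid = grid)
fit$results    # CV accuracy for each mtry
fit$bestTune   # the combination with the best CV score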
53,591 | Matching with Multiple Treatments | I recommend taking a look at Lopez & Gutman (2017), who clearly describe the issues at hand and the methods used to solve them.
Based on your description, it sounds like you want the average treatment effect in the control group (ATC) for several treatments. For each treatment level, this answers the question, "For those who received the control, what would their improvement have been had they received treatment A?" We can, in a straightforward way, ask this about all of our treatment groups.
Note this differs from the usual estimand in matching, which is the average treatment effect in the treated (ATT), which answers the question "For those who received treatment, what would their decline have been had they received the control?" This question establishes that for those who received treatment, treatment was effective. The question the ATC answers is about what would happen if we were to give the treatment to those who normally wouldn't take it.
A third question you could ask is "For everyone, what would be the effect of treatment A vs. control?" This is an average treatment effect in the population (ATE) question, and is usually the question we want to answer in a randomized trial. It's very important to know which question you want to answer because each requires a different method. I'll carry on assuming you want the ATC for each treatment.
To get the ATC using matching, you can just perform standard matching between the control and each treatment group. This requires that you keep the control group intact (i.e., no adjustment for common support or caliper). One treatment group at a time, you find the treated individuals that are similar to the control group. After doing this for each treatment group, you can use regression in the aggregate matched sample to estimate the effects of each treatment vs. control on the outcome. To make this straightforward, simply make the control group the reference category of the treatment factor in the regression.
Here's how you might do this in MatchIt:
library(MatchIt)
treatments <- levels(data$treat) #Levels of treatment variable
control <- "control" #Name of control level
data$match.weights <- 1 #Initialize matching weights
for (i in treatments[treatments != control]) {
d <- data[data$treat %in% c(i, control),] #Subset just the control and 1 treatment
d$treat_i <- as.numeric(d$treat != i) #Create new binary treatment variable
m <- matchit(treat_i ~ cov1 + cov2 + cov3, data = d)
data[names(m$weights), "match.weights"] <- m$weights[names(m$weights)] #Assign matching weights
}
#Check balance using cobalt
library(cobalt)
bal.tab(treat ~ cov1 + cov2 + cov3, data = data,
weights = "match.weights", method = "matching",
focal = control, which.treat = .all)
#Estimate treatment effects
summary(glm(outcome ~ relevel(treat, control),
data = data[data$match.weights > 0,],
weights = match.weights))
It's a lot easier to do this using weighting instead of matching. The same assumptions and interpretations of the estimands apply. Using WeightIt, you can simply run
library(WeightIt)
w.out <- weightit(treat ~ cov1 + cov2 + cov3, data = data, focal = "control", estimand = "ATT")
#Check balance
bal.tab(w.out, which.treat = .all)
#Estimate treatment effects (using jtools to get robust SEs)
#(Can also use survey package)
library(jtools)
summ(glm(outcome ~ relevel(treat, "control"), data = data,
weights = w.out$weights), robust = "HC1")
To get the ATE, you need to use weighting. In the code above, simply replace estimand = "ATT" with estimand = "ATE" and remove focal = "control". Take a look at the WeightIt documentation for more options. In particular, you can set method = "gbm", which will give you the same results as using twang. Note that I'm the author of both cobalt and WeightIt.
Lopez, M. J., & Gutman, R. (2017). Estimation of Causal Effects with Multiple Treatments: A Review and New Ideas. Statistical Science, 32(3), 432–454. https://doi.org/10.1214/17-STS612 | Matching with Multiple Treatments | I recommend taking a look at Lopez & Gutman (2017), who clearly describe the issues at hand and the methods used to solve them.
Based on your description, it sounds like you want the average treatmen | Matching with Multiple Treatments
I recommend taking a look at Lopez & Gutman (2017), who clearly describe the issues at hand and the methods used to solve them.
Based on your description, it sounds like you want the average treatment effect in the control group (ATC) for several treatments. For each treatment level, this answers the question, "For those who received the control, what would their improvement have been had they received treatment A?" We can, in a straightforward way, ask this about all of our treatment groups.
Note this differs from the usual estimand in matching, which is the average treatment effect in the treated (ATT), which answers the question "For those who received treatment, what would their decline have been had they received the control?" This question establishes that for those who received treatment, treatment was effective. The question the ATC answers is about what would happen if we were to give the treatment to those who normally wouldn't take it.
A third question you could ask is "For everyone, what would be the effect of treatment A vs. control?" This is an average treatment effect in the population (ATE) question, and is usually the question we want to answer in a randomized trial. It's very important to know which question you want to answer because each requires a different method. I'll carry on assuming you want the ATC for each treatment.
To get the ATC using matching, you can just perform standard matching between the control and each treatment group. This requires that you keep the control group intact (i.e., no adjustment for common support or caliper). One treatment group at a time, you find the treated individuals that are similar to the control group. After doing this for each treatment group, you can use regression in the aggregate matched sample to estimate the effects of each treatment vs. control on the outcome. To make this straightforward, simply make the control group the reference category of the treatment factor in the regression.
Here's how you might do this in MatchIt:
library(MatchIt)
treatments <- levels(data$treat) #Levels of treatment variable
control <- "control" #Name of control level
data$match.weights <- 1 #Initialize matching weights
for (i in treatments[treatments != control]) {
d <- data[data$treat %in% c(i, control),] #Subset just the control and 1 treatment
d$treat_i <- as.numeric(d$treat != i) #Create new binary treatment variable
m <- matchit(treat_i ~ cov1 + cov2 + cov3, data = d)
data[names(m$weights), "match.weights"] <- m$weights[names(m$weights)] #Assign matching weights
}
#Check balance using cobalt
library(cobalt)
bal.tab(treat ~ cov1 + cov2 + cov3, data = data,
weights = "match.weights", method = "matching",
focal = control, which.treat = .all)
#Estimate treatment effects
summary(glm(outcome ~ relevel(treat, control),
data = data[data$match.weights > 0,],
weights = match.weights))
It's a lot easier to do this using weighting instead of matching. The same assumptions and interpretations of the estimands apply. Using WeightIt, you can simply run
library(WeightIt)
w.out <- weightit(treat ~ cov1 + cov2 + cov3, data = data, focal = "control", estimand = "ATT")
#Check balance
bal.tab(w.out, which.treat = .all)
#Estimate treatment effects (using jtools to get robust SEs)
#(Can also use survey package)
library(jtools)
summ(glm(outcome ~ relevel(treat, "control"), data = data,
weights = w.out$weights), robust = "HC1")
To get the ATE, you need to use weighting. In the code above, simply replace estimand = "ATT" with estimand = "ATE" and remove focal = "control". Take a look at the WeightIt documentation for more options. In particular, you can set method = "gbm", which will give you the same results as using twang. Note that I'm the author of both cobalt and WeightIt.
Lopez, M. J., & Gutman, R. (2017). Estimation of Causal Effects with Multiple Treatments: A Review and New Ideas. Statistical Science, 32(3), 432–454. https://doi.org/10.1214/17-STS612 | Matching with Multiple Treatments
I recommend taking a look at Lopez & Gutman (2017), who clearly describe the issues at hand and the methods used to solve them.
Based on your description, it sounds like you want the average treatmen |
53,592 | Iterated expectations and variances examples | Your calculation is correct, and is a good way I think. One other approach might be just using the PDF of $X$, using uniform PDF, $\Pi(x)$:
$$f_X(x)=\frac{1}{2}\Pi(x)+\frac{1}{2}\Pi(x-3)$$
The expected value is fairly easy to obtain via either method; for the variance we just need $E[X^2]$:
$$E[X^2]=\frac{1}{2}\int_0^{1}x^2dx+\frac{1}{2}\int_3^4x^2dx=\frac{4^3-3^3+1^3}{6}=\frac{19}{3}$$
which yields $\operatorname{var}(X)=19/3-4=7/3$, as yours.
Note: Add 1/12 (which is $E[V(X|Y)]$) to your final answer, since your answer only gives $V(E[X|Y])$. | Iterated expectations and variances examples | Your calculation is correct, and is a good way I think. One other approach might be just using the PDF of $X$, using uniform PDF, $\Pi(x)$:
$$f_X(x)=\frac{1}{2}\Pi(x)+\frac{1}{2}\Pi(x-3)$$
Expected va | Iterated expectations and variances examples
Your calculation is correct, and is a good way I think. One other approach might be just using the PDF of $X$, using uniform PDF, $\Pi(x)$:
$$f_X(x)=\frac{1}{2}\Pi(x)+\frac{1}{2}\Pi(x-3)$$
The expected value is fairly easy to obtain via either method; for the variance we just need $E[X^2]$:
$$E[X^2]=\frac{1}{2}\int_0^{1}x^2dx+\frac{1}{2}\int_3^4x^2dx=\frac{4^3-3^3+1^3}{6}=\frac{19}{3}$$
which yields $\operatorname{var}(X)=19/3-4=7/3$, as yours.
Note: Add 1/12 (which is $E[V(X|Y)]$) to your final answer, since your answer only gives $V(E[X|Y])$. | Iterated expectations and variances examples
Your calculation is correct, and is a good way I think. One other approach might be just using the PDF of $X$, using uniform PDF, $\Pi(x)$:
$$f_X(x)=\frac{1}{2}\Pi(x)+\frac{1}{2}\Pi(x-3)$$
Expected va |
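For completeness, the iterated-variance identity behind the note above gives the same result directly:
$$\begin{aligned}
\operatorname{Var}(X) &= E[\operatorname{Var}(X\mid Y)] + \operatorname{Var}(E[X\mid Y]) \\
&= \frac{1}{12} + \left[\frac{1}{2}\left(\frac{1}{2}\right)^{2} + \frac{1}{2}\left(\frac{7}{2}\right)^{2} - 2^{2}\right] \\
&= \frac{1}{12} + \frac{9}{4} = \frac{7}{3}.
\end{aligned}$$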
53,593 | Iterated expectations and variances examples | This problem can be simplified substantially by decomposing the random variable of interest as a sum of two independent parts:
$$X = U+3V
\quad \quad \quad \quad U \sim \text{U}(0,1)
\quad \quad \quad \quad V \sim \text{Bern}(\tfrac{1}{2}).$$
Using this decomposition we have mean:
$$\begin{equation} \begin{aligned}
\mathbb{E}(X) = \mathbb{E}(U+3V)
&= \mathbb{E}(U) + 3 \mathbb{E}(V) \\[6pt]
&= \frac{1}{2} + 3 \cdot \frac{1}{2} = 2, \\[6pt]
\end{aligned} \end{equation}$$
and variance:
$$\begin{equation} \begin{aligned}
\mathbb{V}(X) = \mathbb{V}(U+3V)
&= \mathbb{V}(U) + 3^2 \mathbb{V}(V) \\[6pt]
&= \frac{1}{12} + 9 \cdot \frac{1}{4} \\[6pt]
&= \frac{1}{12} + \frac{27}{12} \\[6pt]
&= \frac{28}{12} = \frac{7}{3}, \\[6pt]
\end{aligned} \end{equation}$$
which gives the corresponding standard deviation:
$$\begin{equation} \begin{aligned}
\mathbb{S}(X) = \sqrt{\mathbb{V}(X)}
&= \sqrt{\frac{7}{3}} \approx 1.527525. \\[6pt]
\end{aligned} \end{equation}$$
As you can see, this simplifies the calculations substantially, and does not require the use of iterated expectations or variance. | Iterated expectations and variances examples | This problem can be simplified substantially by decomposing the random variable of interest as a sum of two independent parts:
$$X = U+3V
\quad \quad \quad \quad U \sim \text{U}(0,1)
\quad \quad \quad | Iterated expectations and variances examples
This problem can be simplified substantially by decomposing the random variable of interest as a sum of two independent parts:
$$X = U+3V
\quad \quad \quad \quad U \sim \text{U}(0,1)
\quad \quad \quad \quad V \sim \text{Bern}(\tfrac{1}{2}).$$
Using this decomposition we have mean:
$$\begin{equation} \begin{aligned}
\mathbb{E}(X) = \mathbb{E}(U+3V)
&= \mathbb{E}(U) + 3 \mathbb{E}(V) \\[6pt]
&= \frac{1}{2} + 3 \cdot \frac{1}{2} = 2, \\[6pt]
\end{aligned} \end{equation}$$
and variance:
$$\begin{equation} \begin{aligned}
\mathbb{V}(X) = \mathbb{V}(U+3V)
&= \mathbb{V}(U) + 3^2 \mathbb{V}(V) \\[6pt]
&= \frac{1}{12} + 9 \cdot \frac{1}{4} \\[6pt]
&= \frac{1}{12} + \frac{27}{12} \\[6pt]
&= \frac{28}{12} = \frac{7}{3}, \\[6pt]
\end{aligned} \end{equation}$$
which gives the corresponding standard deviation:
$$\begin{equation} \begin{aligned}
\mathbb{S}(X) = \sqrt{\mathbb{V}(X)}
&= \sqrt{\frac{7}{3}} \approx 1.527525. \\[6pt]
\end{aligned} \end{equation}$$
As you can see, this simplifies the calculations substantially, and does not require the use of iterated expectations or variance. | Iterated expectations and variances examples
This problem can be simplified substantially by decomposing the random variable of interest as a sum of two independent parts:
$$X = U+3V
\quad \quad \quad \quad U \sim \text{U}(0,1)
\quad \quad \quad |
53,594 | Iterated expectations and variances examples | There are generally two ways to approach these types of problems: by (1) Finding the second stage expectation $E(X)$ with the theorem
of total expectation; or by (2) Finding the second stage expectation
$E(X)$, using $f_{X}(x)$. These are equivalent methods, but you
might find one easier to comprehend, so I present them both in detail
below for $E(X)$. The approach is similar for $Var(X)$, so I exclude
its presentation, but can update my answer if you really need it.
Method (1) Finding the second stage expectation $E(X)$ with the theorem of total expectation
In this case, the Theorem of Total Expectation states that:
\begin{eqnarray*}
E(X) & = & \sum_{y=0}^{1}E(X|Y=y)P(Y=y)\\
& = & \sum_{y=0}^{1}E(X|Y=y)f_{Y}(y)
\end{eqnarray*}
So, we simply need to find the corresponding terms in the line above
for $y=0$ and $y=1$. We are given the following:
\begin{eqnarray*}
f_{Y}(y) & = & \begin{cases}
\frac{1}{2} & \text{for}\,y=0\,(heads),\,1\,(tails)\\
0 & \text{otherwise}
\end{cases}
\end{eqnarray*}
and
\begin{eqnarray*}
f_{X|Y}(x|y) & = & \begin{cases}
1 & \text{for}\,3<x<4;\,y=0\\
1 & \text{for}\,0<x<1;\,y=1
\end{cases}
\end{eqnarray*}
Now, we simply need to obtain $E(X|Y=y)$ for each realization of $y$:
\begin{eqnarray*}
E(X|Y=y) & = & \int_{-\infty}^{\infty}xf_{X|Y}(x|y)dx\\
& = & \begin{cases}
\int_{3}^{4}x(1)dx & \text{for}\,y=0\\
\int_{0}^{1}x(1)dx & \text{for}\,y=1
\end{cases}\\
& = & \begin{cases}
\left.\frac{x^{2}}{2}\right|_{x=3}^{x=4} & \text{for}\,y=0\\
\left.\frac{x^{2}}{2}\right|_{x=0}^{x=1} & \text{for}\,y=1
\end{cases}\\
& = & \begin{cases}
\frac{7}{2} & \text{for}\,y=0\\
\frac{1}{2} & \text{for}\,y=1
\end{cases}
\end{eqnarray*}
So, substituting each term into the Theorem of Total Expectation above
yields:
\begin{eqnarray*}
E(X) & = & \sum_{y=0}^{1}E(X|Y=y)f_{Y}(y)\\
& = & E(X|Y=0)f_{Y}(0)+E(X|Y=1)f_{Y}(1)\\
& = & \left(\frac{7}{2}\right)\left(\frac{1}{2}\right)+\left(\frac{1}{2}\right)\left(\frac{1}{2}\right)\\
& = & 2
\end{eqnarray*}
Method (2) Finding the second stage expectation $E(X)$, using $f_{X}(x)$
To use this method, we first find $f_{X,Y}(x,y)$ and $f_{X}(x)$.
To begin, recall that $f_{X,Y}(x,y)$ is given by:
\begin{eqnarray*}
f_{X,Y}(x,y) & = & f_{X|Y}(x|y)f_{Y}(y)\\
& = & \begin{cases}
\left(1\right)\left(\frac{1}{2}\right) & \text{for}\,3<x<4;\,y=0\\
\left(1\right)\left(\frac{1}{2}\right) & \text{for}\,0<x<1;\,y=1
\end{cases}\\
\end{eqnarray*}
and we can find $f_{X}(x)$ by summing out the $y$ component:
\begin{eqnarray*}
f_{X}(x) & = & \sum_{y=0}^{1}f_{X,Y}(x,y)\\
& = & f_{X,Y}(x,0)+f_{X,Y}(x,1)\\
& = & \frac{1}{2}I(3\le x\le4)+\frac{1}{2}I(0\le x\le1)
\end{eqnarray*}
And now, we can just find $E(X)$ using the probability density function $f_{X}(x)$ as
usual:
\begin{eqnarray*}
E(X) & = & \int_{-\infty}^{\infty}xf_{X}(x)dx\\
& = & \int_{-\infty}^{\infty}x\left[\frac{1}{2}I(3\le x\le4)+\frac{1}{2}I(0\le x\le1)\right]dx\\
& = & \frac{1}{2}\int_{-\infty}^{\infty}xI(3\le x\le4)dx+\frac{1}{2}\int_{-\infty}^{\infty}xI(0\le x\le1)dx\\
& = & \frac{1}{2}\int_{3}^{4}xdx+\frac{1}{2}\int_{0}^{1}xdx\\
& = & \left(\frac{1}{2}\right)\left.\left(\frac{x^{2}}{2}\right)\right|_{x=3}^{x=4}+\left(\frac{1}{2}\right)\left.\left(\frac{x^{2}}{2}\right)\right|_{x=0}^{x=1}\\
& = & \left(\frac{1}{2}\right)\left(\frac{7}{2}\right)+\left(\frac{1}{2}\right)\left(\frac{1}{2}\right)\\
& = & 2
\end{eqnarray*}
the same two approaches can be used to compute $Var(X)$. | Iterated expectations and variances examples | There are generally two ways to approach these types of problems: by (1) Finding the second stage expectation $E(X)$ with the theorem
of total expectation; or by (2) Finding the second stage expectati | Iterated expectations and variances examples
There are generally two ways to approach these types of problems: by (1) Finding the second stage expectation $E(X)$ with the theorem
of total expectation; or by (2) Finding the second stage expectation
$E(X)$, using $f_{X}(x)$. These are equivalent methods, but you
might find one easier to comprehend, so I present them both in detail
below for $E(X)$. The approach is similar for $Var(X)$, so I exclude
its presentation, but can update my answer if you really need it.
Method (1) Finding the second stage expectation $E(X)$ with the theorem of total expectation
In this case, the Theorem of Total Expectation states that:
\begin{eqnarray*}
E(X) & = & \sum_{y=0}^{1}E(X|Y=y)P(Y=y)\\
& = & \sum_{y=0}^{1}E(X|Y=y)f_{Y}(y)
\end{eqnarray*}
So, we simply need to find the corresponding terms in the line above
for $y=0$ and $y=1$. We are given the following:
\begin{eqnarray*}
f_{Y}(y) & = & \begin{cases}
\frac{1}{2} & \text{for}\,y=0\,(heads),\,1\,(tails)\\
0 & \text{otherwise}
\end{cases}
\end{eqnarray*}
and
\begin{eqnarray*}
f_{X|Y}(x|y) & = & \begin{cases}
1 & \text{for}\,3<x<4;\,y=0\\
1 & \text{for}\,0<x<1;\,y=1
\end{cases}
\end{eqnarray*}
Now, we simply need to obtain $E(X|Y=y)$ for each realization of $y$:
\begin{eqnarray*}
E(X|Y=y) & = & \int_{-\infty}^{\infty}xf_{X|Y}(x|y)dx\\
& = & \begin{cases}
\int_{3}^{4}x(1)dx & \text{for}\,y=0\\
\int_{0}^{1}x(1)dx & \text{for}\,y=1
\end{cases}\\
& = & \begin{cases}
\left.\frac{x^{2}}{2}\right|_{x=3}^{x=4} & \text{for}\,y=0\\
\left.\frac{x^{2}}{2}\right|_{x=0}^{x=1} & \text{for}\,y=1
\end{cases}\\
& = & \begin{cases}
\frac{7}{2} & \text{for}\,y=0\\
\frac{1}{2} & \text{for}\,y=1
\end{cases}
\end{eqnarray*}
So, substituting each term into the Theorem of Total Expectation above
yields:
\begin{eqnarray*}
E(X) & = & \sum_{y=0}^{1}E(X|Y=y)f_{Y}(y)\\
& = & E(X|Y=0)f_{Y}(0)+E(X|Y=1)f_{Y}(1)\\
& = & \left(\frac{7}{2}\right)\left(\frac{1}{2}\right)+\left(\frac{1}{2}\right)\left(\frac{1}{2}\right)\\
& = & 2
\end{eqnarray*}
Method (2) Finding the second stage expectation $E(X)$, using $f_{X}(x)$
To use this method, we first find $f_{X,Y}(x,y)$ and $f_{X}(x)$.
To begin, recall that $f_{X,Y}(x,y)$ is given by:
\begin{eqnarray*}
f_{X,Y}(x,y) & = & f_{X|Y}(x|y)f_{Y}(y)\\
& = & \begin{cases}
\left(1\right)\left(\frac{1}{2}\right) & \text{for}\,3<x<4;\,y=0\\
\left(1\right)\left(\frac{1}{2}\right) & \text{for}\,0<x<1;\,y=1
\end{cases}\\
\end{eqnarray*}
and we can find $f_{X}(x)$ by summing out the $y$ component:
\begin{eqnarray*}
f_{X}(x) & = & \sum_{y=0}^{1}f_{X,Y}(x,y)\\
& = & f_{X,Y}(x,0)+f_{X,Y}(x,1)\\
& = & \frac{1}{2}I(3\le x\le4)+\frac{1}{2}I(0\le x\le1)
\end{eqnarray*}
And now, we can just find $E(X)$ using the probability density function $f_{X}(x)$ as
usual:
\begin{eqnarray*}
E(X) & = & \int_{-\infty}^{\infty}xf_{X}(x)dx\\
& = & \int_{-\infty}^{\infty}x\left[\frac{1}{2}I(3\le x\le4)+\frac{1}{2}I(0\le x\le1)\right]dx\\
& = & \frac{1}{2}\int_{-\infty}^{\infty}xI(3\le x\le4)dx+\frac{1}{2}\int_{-\infty}^{\infty}xI(0\le x\le1)dx\\
& = & \frac{1}{2}\int_{3}^{4}xdx+\frac{1}{2}\int_{0}^{1}xdx\\
& = & \left(\frac{1}{2}\right)\left.\left(\frac{x^{2}}{2}\right)\right|_{x=3}^{x=4}+\left(\frac{1}{2}\right)\left.\left(\frac{x^{2}}{2}\right)\right|_{x=0}^{x=1}\\
& = & \left(\frac{1}{2}\right)\left(\frac{7}{2}\right)+\left(\frac{1}{2}\right)\left(\frac{1}{2}\right)\\
& = & 2
\end{eqnarray*}
the same two approaches can be used to compute $Var(X)$. | Iterated expectations and variances examples
There are generally two ways to approach these types of problems: by (1) Finding the second stage expectation $E(X)$ with the theorem
of total expectation; or by (2) Finding the second stage expectati |
53,595 | Iterated expectations and variances examples | Comment: Here is a brief simulation, comparing
approximate simulated results with theoretical results derived in this Q and A. Everything below matches within the margin of simulation error.
Also see Wikipedia on Mixture Distributions, under Moments, for some relevant formulas.
set.seed(420) # for reproducibility
u1 = runif(10^6); u2 = runif(10^6, 3, 4)
ht = rbinom(10^6, 1, .5)
x = ht*u1 + (1-ht)*u2
mean(x); 2
[1] 2.001059 # aprx E(X) = 2
[1] 2 # proposed exact
var(x); 7/3
[1] 2.332478 # aprx Var(X)
[1] 2.333333
mean(x^2); 19/3
[1] 6.336712 # aprx E(X^2)
[1] 6.333333
hist(x, br=40, prob=T, col="skyblue2") | Iterated expectations and variances examples | Comment: Here is a brief simulation, comparing
approximate simulated results with theoretical results derived in this Q and A. Everything below matches within the margin of simulation error.
Also see | Iterated expectations and variances examples
Comment: Here is a brief simulation, comparing
approximate simulated results with theoretical results derived in this Q and A. Everything below matches within the margin of simulation error.
Also see Wikipedia on Mixture Distributions, under Moments, for some relevant formulas.
set.seed(420) # for reproducibility
u1 = runif(10^6); u2 = runif(10^6, 3, 4)
ht = rbinom(10^6, 1, .5)
x = ht*u1 + (1-ht)*u2
mean(x); 2
[1] 2.001059 # aprx E(X) = 2
[1] 2 # proposed exact
var(x); 7/3
[1] 2.332478 # aprx Var(X)
[1] 2.333333
mean(x^2); 19/3
[1] 6.336712 # aprx E(X^2)
[1] 6.333333
hist(x, br=40, prob=T, col="skyblue2") | Iterated expectations and variances examples
Comment: Here is a brief simulation, comparing
approximate simulated results with theoretical results derived in this Q and A. Everything below matches within the margin of simulation error.
Also see |
53,596 | Does data normalization and transformation change the Pearson's correlation? | Pearson's correlation measures the linear component of association. So you
are correct that (increasing) linear transformations of the data will not affect the correlation between them; a linear transformation with a negative slope only flips the sign. Nonlinear transformations, however, will generally have an effect.
Here is a demonstration: Generate right-skewed, correlated data vectors x and y. Pearson's correlation is $r = 0.987.$ (The correlation of $X$ and $Y^\prime = 3 + 5Y$ is the same.)
set.seed(2019)
x = rexp(100, .1); y = x + rexp(100, .5)
cor(x, y)
[1] 0.987216
cor(x, 3 + 5*y)
[1] 0.987216 # no change with linear transf of 'y'
However, if the second variable is log-transformed, Pearson's correlation
changes to $r = 0.862.$
cor(x, log(y))
[1] 0.8624539
Here are the corresponding plots:
By contrast, Spearman's correlation is unaffected by the (monotone increasing) log-transformation.
Spearman's correlation is based on ranks of observations and log-transformation
does not change ranks. Before and after transformation, $r_S = 0.966.$
cor(x, y, meth="spear")
[1] 0.9655446
cor(rank(x), rank(log(y)))
[1] 0.9655446 # Spearman again
cor(x, log(y), meth="spear")
[1] 0.9655446 | Does data normalization and transformation change the Pearson's correlation? | Pearson's correlation measures the linear component of association. So you
are correct that linear transformations of data will not affect the correlation between them. However, nonlinear transformati | Does data normalization and transformation change the Pearson's correlation?
Pearson's correlation measures the linear component of association. So you
are correct that (increasing) linear transformations of the data will not affect the correlation between them; a linear transformation with a negative slope only flips the sign. Nonlinear transformations, however, will generally have an effect.
Here is a demonstration: Generate right-skewed, correlated data vectors x and y. Pearson's correlation is $r = 0.987.$ (The correlation of $X$ and $Y^\prime = 3 + 5Y$ is the same.)
set.seed(2019)
x = rexp(100, .1); y = x + rexp(100, .5)
cor(x, y)
[1] 0.987216
cor(x, 3 + 5*y)
[1] 0.987216 # no change with linear transf of 'y'
However, if the second variable is log-transformed, Pearson's correlation
changes to $r = 0.862.$
cor(x, log(y))
[1] 0.8624539
Here are the corresponding plots:
By contrast, Spearman's correlation is unaffected by the (monotone increasing) log-transformation.
Spearman's correlation is based on ranks of observations and log-transformation
does not change ranks. Before and after transformation, $r_S = 0.966.$
cor(x, y, meth="spear")
[1] 0.9655446
cor(rank(x), rank(log(y)))
[1] 0.9655446 # Spearman again
cor(x, log(y), meth="spear")
[1] 0.9655446 | Does data normalization and transformation change the Pearson's correlation?
Pearson's correlation measures the linear component of association. So you
are correct that linear transformations of data will not affect the correlation between them. However, nonlinear transformati |
53,597 | Does a log transform always bring a distribution closer to normal? | For purely positive quantities a log-transformation is indeed the standard first transformation to try, and it is very frequently used. It is also done if, in a regression, you want a multiplicative interpretation of coefficients (e.g. a doubling/halving of blood cholesterol).
Of course it will not always make a distribution more normal, e.g. take samples from a N(1000, 1) distribution: any transformation can only make it less normal. | Does a log transform always bring a distribution closer to normal? | For purely positive quantities a log-transformation is indeed the standard first transformation to try and is very frequently used. It is also done if for regression you want a multiplicative interpre | Does a log transform always bring a distribution closer to normal?
For purely positive quantities a log-transformation is indeed the standard first transformation to try, and it is very frequently used. It is also done if, in a regression, you want a multiplicative interpretation of coefficients (e.g. a doubling/halving of blood cholesterol).
Of course it will not always make a distribution more normal, e.g. take samples from a N(1000, 1) distribution: any transformation can only make it less normal. | Does a log transform always bring a distribution closer to normal?
For purely positive quantities a log-transformation is indeed the standard first transformation to try and is very frequently used. It is also done if for regression you want a multiplicative interpre |
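A small numerical illustration of both points above (my own sketch): the log transform helps a right-skewed lognormal sample a great deal, but cannot improve a sample that is already normal.
set.seed(1)
x_skew <- rlnorm(500)                        # right-skewed, strictly positive
x_norm <- rnorm(500, mean = 1000, sd = 1)    # already normal
shapiro.test(x_skew)$p.value        # tiny: clearly non-normal
shapiro.test(log(x_skew))$p.value   # large: compatible with normality
shapiro.test(x_norm)$p.value        # large
shapiro.test(log(x_norm))$p.value   # essentially unchanged; a normal sample cannot be made more normal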
53,598 | R: GLMM for unbalanced zero-inflated data (glmmTMB) | A1: "All in all, I have about 33% of the dates having counts of zero, which makes me think the data is zero inflated." -> this is a common misconception - zero-inflation != lots of zeros. Zero-inflation means you have more zeros than you would expect, given your fitted model. Without having fit a model, you can't know what to expect. The DHARMa R package (disclaimer: I develop this package) has a zero-inflation test for GLMMs, including glmmTMB, that you can use to test your model. However, see notes in the vignette about zero-inflation: when fitting GLMMs with variable dispersion, zero-inflation often shows up as underdispersion, so the most reliable test is usually to run a model selection with ZIP against standard model.
A2: When running a Poisson GLMM with count data, you absolutely have to check for overdispersion!!! Fitting a Poisson model without this check is a big no-no. It would be very uncommon for your data not to be overdispersed, so your Poisson model is likely not appropriate and you should move to a negative binomial or a Poisson with an OLRE (observation-level random effect). DHARMa has a dispersion test that works with glmmTMB.
A3: at the end of the DHARMa vignette, there is an example for analysing and checking zero-inflated count data (Owl dataset)
B/C makes sense
D/E Not generally a problem, but especially in this case you should put an RE on locality as well (nested location/site) | R: GLMM for unbalanced zero-inflated data (glmmTMB) | A1: "All in all, I have about 33% of the dates having counts of zero, which makes me think the data is zero inflated." -> this is a common misconception - zero-inflation != lots of zeros. Zero-inflati | R: GLMM for unbalanced zero-inflated data (glmmTMB)
A1: "All in all, I have about 33% of the dates having counts of zero, which makes me think the data is zero inflated." -> this is a common misconception - zero-inflation != lots of zeros. Zero-inflation means you have more zeros than you would expect, given your fitted model. Without having fit a model, you can't know what to expect. The DHARMa R package (disclaimer: I develop this package) has a zero-inflation test for GLMMs, including glmmTMB, that you can use to test your model. However, see notes in the vignette about zero-inflation: when fitting GLMMs with variable dispersion, zero-inflation often shows up as underdispersion, so the most reliable test is usually to run a model selection with ZIP against standard model.
A2: When running a Poisson GLMM with count data, you absolutely have to check for overdispersion!!! Fitting a Poisson model without this check is a big no-no. It would be very uncommon for your data not to be overdispersed, so your Poisson model is likely not appropriate and you should move to a negative binomial or a Poisson with an OLRE (observation-level random effect). DHARMa has a dispersion test that works with glmmTMB.
A3: at the end of the DHARMa vignette, there is an example for analysing and checking zero-inflated count data (Owl dataset)
B/C makes sense
D/E Not generally a problem, but especially in this case you should put an RE on locality as well (nested location/site) | R: GLMM for unbalanced zero-inflated data (glmmTMB)
A1: "All in all, I have about 33% of the dates having counts of zero, which makes me think the data is zero inflated." -> this is a common misconception - zero-inflation != lots of zeros. Zero-inflati |
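A hedged sketch of the workflow suggested in A1-A3 above (the count response, the season covariate, the locality/site grouping and the data frame dat are hypothetical placeholders, not taken from the question):
library(glmmTMB)
library(DHARMa)
fit_pois <- glmmTMB(count ~ season + (1 | locality/site),
                    family = poisson, data = dat)
res <- simulateResiduals(fit_pois)
testDispersion(res)      # A2: check for over-/underdispersion
testZeroInflation(res)   # A1: more zeros than the fitted model expects?
# If either test flags a problem, compare against richer models, e.g. by AIC:
fit_nb  <- glmmTMB(count ~ season + (1 | locality/site),
                   family = nbinom2, data = dat)
fit_zip <- glmmTMB(count ~ season + (1 | locality/site),
                   ziformula = ~1, family = poisson, data = dat)
AIC(fit_pois, fit_nb, fit_zip)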
53,599 | Why can't t-SNE capture a simple parabola structure? | Three general remarks:
t-SNE is excellent at preserving cluster structure but is not very good at preserving continuous "manifold structure". One famous toy example is the Swiss roll data set, and it is well-known that t-SNE has trouble "unrolling" it. In fact, one can use t-SNE to unroll it, but one has to be really careful with choosing optimisation parameters: https://jlmelville.github.io/smallvis/swisssne.html.
Using 1-dimensional t-SNE instead of 2-dimensional is likely to exacerbate this problem, possibly by quite a lot. One-dimensional optimisation is more difficult for t-SNE because points don't have the two-dimensional wiggle space and have to pass right through each other during gradient descent. Given that all pairs of points feel repulsive forces in t-SNE, this can be difficult, and it can get stuck in a bad local minimum.
t-SNE is not very good with tiny data sets. It's often easier to get a nice embedding of two million points than of twenty points. Default optimisation parameters might be inappropriate for such a tiny sample size. And by the way, perplexity larger than the sample size does not make mathematical sense (not sure what your R package is doing when you set perplexity larger than $N$).
With all these caveats in mind, if you are really careful with optimisation parameters, you can manage to preserve the manifold structure of your data set. But this is really not what t-SNE is for.
%matplotlib notebook
import numpy as np
import pylab as plt
import seaborn as sns; sns.set()
from sklearn.manifold import TSNE
x = np.arange(-5, 5.001, .5)[:,None]
y = x**2
X = np.concatenate((x,y),axis=1)
Z = TSNE(n_components=1, method='exact', perplexity=2,
early_exaggeration=2, learning_rate=1,
random_state=42).fit_transform(X)
plt.figure(figsize=(8,2))
plt.scatter(Z, Z*0, s=400)
for i in range(Z.shape[0]):
plt.text(Z[i], Z[i]*0, str(i), va='center', ha='center', color='w')
plt.tight_layout()
It was easy to make it work with n_components=2, but as I suspected, n_components=1 required some tinkering with optimisation parameters (early_exaggeration and learning_rate). | Why can't t-SNE capture a simple parabola structure? | Three general remarks:
53,600 | Explanation for Additive Property of Variance? | It doesn't!
In general:
Var(A + B) = Var(A) + Var(B) + 2 Cov(A, B)
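A short derivation, added for completeness, using only the definitions Var(X) = E[X^2] - (E[X])^2 and Cov(A, B) = E[AB] - E[A]E[B]:
Var(A + B) = E[(A + B)^2] - (E[A + B])^2
= (E[A^2] - (E[A])^2) + (E[B^2] - (E[B])^2) + 2(E[AB] - E[A]E[B])
= Var(A) + Var(B) + 2 Cov(A, B)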
The additive property only holds if the two random variables are uncorrelated, i.e. have zero covariance. This is almost a circular statement, since a legitimate definition of the covariance is:
Cov(A, B) = (Var(A + B) - Var(A) - Var(B)) / 2
This means that the covariance measures, up to the factor of two, exactly the failure of the additive property of variance. For example, taking B = A gives Var(2A) = 4 Var(A), which exceeds Var(A) + Var(A) by 2 Var(A) = 2 Cov(A, A).
This leads to the true heart of the matter: the covariance is bilinear:
Cov(A_1 + A_2, B) = Cov(A_1, B) + Cov(A_2, B)
Cov(A, B_1 + B_2) = Cov(A, B_1) + Cov(A, B_2)
For an intuitive understanding of this, I'll link to the wonderful thread "How would you explain covariance to someone who understands only the mean?". In particular, see @whuber's answer.
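To make the identities concrete, here is a small NumPy sketch (an editor-added illustration, not part of the original answer; the variable names A, B1, B2 are arbitrary). The sample variance and sample covariance obey the same algebra exactly when computed with matching ddof, so the two numbers in each printed pair agree to floating-point precision.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=100_000)
B1 = 0.5 * A + rng.normal(size=100_000)   # correlated with A
B2 = rng.normal(size=100_000)             # independent of A

cov = lambda u, v: np.cov(u, v)[0, 1]     # sample covariance (ddof=1)

# Var(A + B1) = Var(A) + Var(B1) + 2 Cov(A, B1)
print(np.var(A + B1, ddof=1),
      np.var(A, ddof=1) + np.var(B1, ddof=1) + 2 * cov(A, B1))

# Bilinearity: Cov(A, B1 + B2) = Cov(A, B1) + Cov(A, B2)
print(cov(A, B1 + B2), cov(A, B1) + cov(A, B2))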