idx | question | answer
---|---|---
49,201 | How to choose between plain vanilla RNN and LSTM RNN when modelling a time series? | Empirically. The criterion is performance on the validation set. Typically the LSTM outperforms the plain RNN, as it does a better job of avoiding the vanishing gradient problem and can model longer dependencies. Other RNN variants, e.g. the GRU, sometimes outperform the LSTM on some tasks.
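A minimal sketch of such an empirical comparison, assuming TensorFlow/Keras and a synthetic toy series (the data, layer sizes and epoch count are illustrative placeholders, not from the original answer); in practice you would tune both models and judge them on your own validation set:
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
series = np.sin(np.arange(2000) / 20.0) + 0.1 * rng.standard_normal(2000)   # toy series
window = 30
X = np.stack([series[i:i + window] for i in range(len(series) - window)])[..., None]
y = series[window:]

def build(recurrent_layer):
    model = tf.keras.Sequential([tf.keras.Input(shape=(window, 1)), recurrent_layer(32), tf.keras.layers.Dense(1)])
    model.compile(optimizer="adam", loss="mse")
    return model

for name, layer in [("SimpleRNN", tf.keras.layers.SimpleRNN), ("LSTM", tf.keras.layers.LSTM)]:
    hist = build(layer).fit(X, y, validation_split=0.2, epochs=5, verbose=0)
    print(name, "validation MSE:", hist.history["val_loss"][-1])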
FYI:
Greff, Klaus, Rupesh Kumar Srivastava, Jan Koutník, Bas R. Steunebrink, and Jürgen Schmidhuber. "LSTM: A search space odyssey." arXiv preprint arXiv:1503.04069 (2015): "In this paper, we present the first large-scale analysis of eight LSTM variants on three representative tasks: speech recognition, handwriting recognition, and polyphonic music modeling. The hyperparameters of all LSTM variants for each task were optimized separately using random search and their importance was assessed using the powerful fANOVA framework."
Zaremba, Wojciech, Ilya Sutskever, and Rafal Jozefowicz. "An empirical exploration of recurrent network architectures." (2015): used evolutionary computation to find optimal RNN structures.
Bayer, Justin, Daan Wierstra, Julian Togelius, and Jürgen Schmidhuber. "Evolving memory cell structures for sequence learning." In International Conference on Artificial Neural Networks, pp. 755-764. Springer Berlin Heidelberg, 2009: used evolutionary computation to find optimal RNN structures.
Le, Quoc V., Navdeep Jaitly, and Geoffrey E. Hinton. "A simple way to initialize recurrent networks of rectified linear units." arXiv preprint arXiv:1504.00941 (2015): shows that RNNs can sometimes perform similarly to LSTMs when the identity matrix is used to initialize the recurrent weight matrix.
49,202 | By which ways can we, in principle, evaluate whether a model succeeded in generalizing? | One important notion of generalizability, especially in machine learning, is predictive accuracy: the degree to which a learner can predict the value of the dependent variable in cases it wasn't trained with. Predictive accuracy can be estimated with a wide variety of techniques, including a train-test split, cross-validation, and bootstrapping.
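Here is a quick sketch of those three techniques, assuming scikit-learn (the dataset, model and accuracy metric are placeholders, not part of the original answer):
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.utils import resample

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
model = LogisticRegression(max_iter=1000)

# 1) Train-test split: fit on one part, estimate predictive accuracy on the held-out part.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
print("holdout accuracy:", model.fit(X_tr, y_tr).score(X_te, y_te))

# 2) k-fold cross-validation: average accuracy over k held-out folds.
print("5-fold CV accuracy:", cross_val_score(model, X, y, cv=5).mean())

# 3) Bootstrap: train on a resample, evaluate on the out-of-bag observations.
accs = []
for b in range(100):
    idx = resample(np.arange(len(y)), random_state=b)
    oob = np.setdiff1d(np.arange(len(y)), idx)
    accs.append(model.fit(X[idx], y[idx]).score(X[oob], y[oob]))
print("bootstrap (out-of-bag) accuracy:", np.mean(accs))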
49,203 | By which ways can we, in principle, evaluate whether a model succeeded in generalizing? | For medium to large datasets, most practitioners will use a holdout set. This is what you refer to as training, validation, and test sets. A holdout set consists of data that your model has never seen before. If your model generalizes well on the holdout set, then presumably it will generalize equally well on live production data.
Regarding your last bullet point on holdout sets -- there aren't two types of generalizability. There is only one type, and so it's simply called generalizability. If your training dataset doesn't represent real world data, then there is no point in building a model. Unfortunately there is no simple rule for knowing how close your training data will represent production data. You just have to use good judgement (e.g., model build data should be sourced from the same production systems that you would pull from when your model is deployed).
Generalizability is often not static, just as the real world is not static. If your model is used to make important decisions, you will often also do post-production monitoring of it to make sure that it continues to generalize well. As your model's generalizability decays over time, as is often the case with real-world (e.g., financial) data, you'll need to refit.
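A rough sketch of the holdout-plus-monitoring idea (synthetic drifting data and a logistic regression stand in for your production model; assumes scikit-learn and is not part of the original answer):
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 3000
X = rng.standard_normal((n, 3))
drift = np.linspace(0, 2, n)                      # the data-generating process slowly changes
y = (X[:, 0] + drift * X[:, 1] + 0.5 * rng.standard_normal(n) > 0).astype(int)

train, holdout = slice(0, 1500), slice(1500, n)   # the model never sees the holdout period
model = LogisticRegression(max_iter=1000).fit(X[train], y[train])
print("holdout accuracy:", model.score(X[holdout], y[holdout]))

# Post-deployment monitoring: accuracy in successive windows; a steady decline suggests a refit.
for start in range(1500, n, 500):
    w = slice(start, start + 500)
    print(f"window {start}-{start + 500}: accuracy = {model.score(X[w], y[w]):.3f}")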
49,204 | Fixed Effects in a model I ANOVA. Why should the parameters sum to zero? | If you don't apply some such constraint, you could not identify any of the parameters, because you could add an arbitrary $\delta$ to each $\alpha_i$ and compensate by subtracting $\delta$ from $\mu$.
You are free to impose any set of constraints that will lead to identifiable parameters. The sum-to-zero constraints shown in the question are arbitrary but convenient ways to pin down a particular set of solutions. Usually such constraints are chosen to be linear and to have simple, relevant interpretations. When the $m$ equations are linearly independent, a single linear constraint (that is independent of those) will suffice. Instead of the sum-to-zero constraint, you could elect to set any single coefficient to zero (thereby making its variable the "baseline"): such constraints are also popular and easy to interpret.
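A small sketch of the two constraint choices, assuming statsmodels/patsy and a made-up balanced one-way design (not part of the original answer): both codings yield identical fitted values; they merely identify the parameters differently.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
df = pd.DataFrame({"g": np.repeat(["a", "b", "c"], 30)})
df["y"] = df["g"].map({"a": 0.0, "b": 1.0, "c": 2.0}) + rng.standard_normal(len(df))

fit_sum = smf.ols("y ~ C(g, Sum)", data=df).fit()        # group effects constrained to sum to zero
fit_trt = smf.ols("y ~ C(g, Treatment)", data=df).fit()  # effect of group "a" set to zero (baseline)

print(fit_sum.params)   # intercept = grand mean; reported effects are deviations from it
print(fit_trt.params)   # intercept = baseline group mean; effects are differences from the baseline
print(np.allclose(fit_sum.fittedvalues, fit_trt.fittedvalues))  # True: same fit, different parameterization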
49,205 | Thompson Sampling | This formula suffers from heavy notation which perhaps makes it a bit difficult to digest.
Let $A$ be the random event that the action $a^*\in\mathcal{A}$ maximizes the expected reward
$$\bar{r}(a,\theta)=\mathbb{E}(r|a,\theta).$$
Let $\bar{r}^*(\theta)$ be the maximum expected reward for a given $\theta$,
$$
\bar{r}^*(\theta)=\max_{a'}\bar{r}(a',\theta).
$$
The event $A$ we are interested in can then be written as follows:
$$
A=\{\theta: \bar{r}(a^*,\theta)=\bar{r}^*(\theta)\}.
$$
The probability of this event is:
$$
\mathbb{P}(A)=\int_A P(\theta|\mathcal{D})d\theta=\int I_A(\theta)P(\theta|\mathcal{D})d\theta.
$$
This is exactly the Wikipedia formula (in new notation).
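A toy sketch for Bernoulli bandits with Beta posteriors (the arm probabilities, priors and horizon are made up, not part of the original answer): drawing a single posterior sample of $\theta$ and acting greedily on it selects each action with exactly the posterior probability $\mathbb{P}(A)$ above.
import numpy as np

rng = np.random.default_rng(0)
true_p = np.array([0.3, 0.5, 0.6])      # unknown reward probabilities of the three arms
alpha = np.ones(3)                      # Beta(1, 1) prior for each arm
beta = np.ones(3)

for t in range(2000):
    theta = rng.beta(alpha, beta)       # one sample from the posterior P(theta | D)
    a = int(np.argmax(theta))           # the action maximizing expected reward under that sample
    r = rng.random() < true_p[a]        # observe a Bernoulli reward
    alpha[a] += r
    beta[a] += 1 - r

print("posterior means:", alpha / (alpha + beta))
print("pulls per arm:", (alpha + beta - 2).astype(int))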
49,206 | Same kernel for mixed/categorical data? | From a practical point of view, there are no issues with that practice, and it has some benefits (such as a simplified framework).
From a theoretical point of view, coincidence between categorical features might not really mean much similarity: these similarities depend on the probabilities of occurrence, and these could (should?) be taken into account, adding more information to the problem.
Marco Antonio Villegas García describes some valid kernels for categorical data in his MSc thesis [1], even beating common kernels in SVM classification benchmarks.
They are:
$$\begin{align} &k_{0}(z_{i},z_{j}) = \left\{\begin{matrix}&1, &&z_{i} = z_{j}\\&0, &&z_{i} \neq z_{j}\end{matrix}\right.\\&k_{1}(z_{i},z_{j}) = \left\{\begin{matrix}&h(P_{z}(z_{i})), &z_{i} = z_{j}\\&0, &z_{i} \neq z_{j}\end{matrix}\right.\end{align}$$
With $h(z) = (1-z^{\alpha})^{1/\alpha}$ being a measure of "probabilistic" similarity and $P_{z}$ a probability mass function (PMF); in other words, $P_{z}(z_{i})$ is the probability that variable $z$ assumes the value $z_{i}$.
While $k_{0}$ takes a naive approach to similarity, $k_{1}$ takes into account the probabilities of occurrence.
He also introduces a third kernel as an afterthought:
$$\begin{align} &k_{2}(z_{i},z_{j}) = \left\{\begin{matrix}&h(P_{z}(z_{i})), &z_{i} = z_{j}\\&g(P_{z}(z_{i}),P_{z}(z_{j})), &z_{i} \neq z_{j}\end{matrix}\right.\end{align}$$
With the introduction of another inverting function, $g$.
[1] Villegas García, M. A. (2013). An investigation into new kernels for categorical variables.
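A small sketch of $k_{0}$ and $k_{1}$ for a single categorical variable (the PMF is estimated from the data, $\alpha$ is left as a free parameter, and the colour data are made up; illustrative only, not code from the thesis):
import numpy as np

def k0(zi, zj):
    return float(zi == zj)              # overlap kernel: 1 on a match, 0 otherwise

def make_k1(z_column, alpha=1.0):
    values, counts = np.unique(z_column, return_counts=True)
    pmf = dict(zip(values, counts / counts.sum()))          # estimate of P_z
    h = lambda p: (1.0 - p ** alpha) ** (1.0 / alpha)       # the h from the formula above
    def k1(zi, zj):
        return h(pmf[zi]) if zi == zj else 0.0
    return k1

z = np.array(["red", "red", "red", "blue", "green", "green"])
k1 = make_k1(z)
print(k0("red", "red"), k0("red", "blue"))      # 1.0 0.0
print(k1("red", "red"), k1("green", "green"))   # matches on rarer categories score higher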
49,207 | Is it legit to run clustering on MDS result of a distance matrix? | MDS is mostly a visualization tool: it can suggest clusters, but it doesn't test whether the groupings you see are similar at a certain level. So the other papers you refer to were right to use MDS only to plot their data.
I previously used the software PRIMER to do clustering analysis, and the R package clustsig seems to do pretty much the same thing. You may want to look into that package for your clustering analysis; maybe it will be faster than mclust?
49,208 | Is it legit to run clustering on MDS result of a distance matrix? | That's certainly valid. You just have to keep in mind the tradeoff. You are forsaking some information to embed your $N$ dimensional data into a lower dimensional space.
Multidimensional Scaling can be seen as a dimensionality reduction algorithm like any other, with the advantage being it tries to keep the hierarchical structure of the data (and it can of course fail on that, no free lunch).
Keep in mind you're not restricted to two dimensions with MDS. You can actually test for a feasible and reasonable number of dimensions.
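A brief sketch of that workflow, assuming scikit-learn/SciPy and a synthetic distance matrix (the number of clusters and the candidate dimensions are placeholders, not part of the original answer):
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.manifold import MDS
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (30, 5)), rng.normal(4, 1, (30, 5))])  # two latent groups
D = squareform(pdist(X))                  # stands in for your precomputed distance matrix

# Inspect the stress for a few embedding dimensions before committing to one.
for dim in (2, 3, 4):
    mds = MDS(n_components=dim, dissimilarity="precomputed", random_state=0).fit(D)
    print(f"{dim} dimensions: stress = {mds.stress_:.2f}")

# Cluster on the chosen embedding, accepting that information beyond it has been discarded.
emb = MDS(n_components=2, dissimilarity="precomputed", random_state=0).fit_transform(D)
print(KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(emb))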
49,209 | Pandas Time Series DataFrame Missing Values | Keep the 2010 dates, but make the values NaN. If you have a time series DataFrame, then it's as simple as:
newDf = old.set_value('2010', 'Total Sales', float('nan'))
If your data dropout isn't exactly 2010, you can replace 0s with NaNs:
new = old.replace([0], float('nan'))
This will cause a "pen lift" (on an old pen plotter, NaN caused a pen lift; MATLAB, and consequently matplotlib, emulated that behavior).
If you do this, you need to make sure your analysis routines can handle NaNs properly (particularly any time filtering, like a moving average, across the gap).
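A self-contained sketch of the same idea with current pandas, where DataFrame.set_value has been removed (since pandas 1.0) and .loc / replace do the job instead; the dates, column name and values below are made up:
import numpy as np
import pandas as pd

idx = pd.date_range("2009-01-01", "2011-12-01", freq="MS")
df = pd.DataFrame({"Total Sales": np.random.default_rng(0).uniform(50, 100, len(idx))}, index=idx)
df.loc[df.index.year == 2010, "Total Sales"] = 0.0     # pretend 2010 was recorded as zeros

df.loc[df.index.year == 2010, "Total Sales"] = np.nan  # blank out the whole year...
df = df.replace(0, np.nan)                             # ...or replace any remaining zeros with NaN
df["Total Sales"].plot()                               # matplotlib leaves a gap ("pen lift") over the NaNs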
Finally, I would strongly suggest moving this kind of question to StackOverflow, since it's more of a Pandas question than an analytical question.
49,210 | Which is the dimension (or units) of the predicted random effects? | The issue is that you are attempting to take the logarithm of a variable which is not dimensionless.
There are a number of reasons to state that $\ln(x)$, $\exp(x)$, $\cos(x)$ and so on are properly defined (from a dimensional analysis point of view) only if $x$ is dimensionless. For example, if you define the $\exp$ function by its power series
$$\exp(x)=1+x+\frac{x^2}{2}+\frac{x^3}{6}+\ldots$$
then you are adding $x$ and $x^2$, which is only possible if $x$ is dimensionless. See this discussion on physics.SE for details and links.
Yet, as you wrote in the comments, we often take logs of variables which are not dimensionless; in your model, the unit of $y_{it}$ is $£/h$. This is resolved by introducing an arbitrary base value: a more formal way of writing your model would be to define $y_0=1£/h$ and write
$$\ln\left(\frac{y_{it}}{y_0}\right)=X_{it}\beta+\zeta_i+\eta_t+\epsilon_{it}$$
and now it is clearer that your dependent variable is dimensionless, as are $\zeta_i$ and $\eta_t$.
Exponentiating, you get
$$y_{it}=y_0e^{X_{it}\beta}e^{\zeta_i}e^{\eta_t}e^{\epsilon_{it}}$$
where $y_0$ is in $£/h$ and the other variables are dimensionless.
The answer to Q1 is thus no, and this hopefully clears everything up for Q3.
49,211 | Which is the dimension (or units) of the predicted random effects? | As Robin has pointed out in his answer, it is possible to deal with equations involving units by dividing through by a base value of a single unit, thus creating a unitless equation. However, it is also possible to deal directly with the dimensional quantities (i.e., with the units still included) by treating the units as algebraic quantities that are subject to an algebra of operations. The field that examines the algebraic structures for dimensional quantities is called dimensional analysis.
Dimensional analysis is a well-developed field with a substantial literature (see e.g., Drobot 1953, Huntley 1967, Whitney 1968a, Whitney 1968b, Szekeres 1978, Hughes 2016).
Analysis within this field is generally undertaken formally by defining an algebra that can handle arithmetic operations on quantities that possess units, which involves creating an algebra on a set containing the real numbers, plus some unit quantities representing the units of analysis. The core operations in these algebras are addition and multiplication, but it is generally possible to extend to consideration of more complicated operations (like the exponential and logarithmic transformations) using standard formulae that relate these to addition and multiplication (e.g., their power series definitions). Since exponentials and logarithms are defined through power series, extension of the algebraic system to incorporate logarithms of quantities with units requires you to add quantities with different units, so this requires an algebra that is sufficiently developed for this purpose (discussion and some of the maths can be found in some answers to a similar question here).
It is notable here that there are various suggestions floating around the internet suggesting that you can only take logarithms of a dimensionless quantity. This is false --- it is possible to create an algebraic structure that is sufficiently rich to incorporate units, allow you to add quantities with different units, and therefore allow you to extend to logarithms of quantities with units. You need to be careful if you do this, since operations on units are not necessarily easy to interpret, and they follow some algebraic rules that are non-intuitive.
Units in your analysis: Consider your regression equation where the variables have units (using your example where the response variable is a wage rate, modelled by looking at productivity and pay rate). To facilitate our analysis we denote dimensionless quantities for those variables (i.e., stripped of their units) using standard notation, and we denote the inclusion of units by putting a tilde on top of the notation. We can then write the dimensional quantities as a product of a dimensionless value and a unit quantity as follows:
$$\tilde{y}_{i,t} = y_{i,t} \cdot \mathbf{u} \quad \quad \quad \exp (\tilde{x}_{i,t} \tilde{\beta} ) = \exp (x_{i,t} \beta) \cdot \mathbf{u} \quad \quad \quad \mathbf{u} \equiv 1 \cdot \frac{£}{\text{hours}}.$$
In this system, the base unit $\mathbf{u}$ is a rate of one pound per hour. If we include the units, your log-linear model can then be written as:
$$\ln \tilde{y}_{i,t} = \tilde{x}_{it} \tilde{\beta} + \zeta_{i} + \eta_{t} + \epsilon_{it}.$$
The response variable here is a dimensional quantity measured in "log pounds per hour"$^\dagger$, as is the first term on the right-hand-side of the equation. The remaining random effects and error terms are dimensionless. Algebraic manipulation of the equation to separate the units from the corresponding dimensionless quantities yields:
$$\ln y_{i,t} + \ln \mathbf{u} = x_{it} \beta + \ln \mathbf{u} + \zeta_{i} + \eta_{t} + \epsilon_{it}.$$
The term $\ln \mathbf{u}$ on both sides of this equation shows that both sides are dimensional quantities measured in "log pounds per hour". So, in answer to your specific questions: (1) The random effects/errors are dimensionless, and this is the case even if you take your response variable to have a dimension; (2) the exponentiated random effects/errors are also dimensionless; (3) theoretical understanding of the interrelation of the units in the equation requires you to look at algebraic structures that incorporate both dimensionless numbers and also units (see the above literature to get started on this).
$^\dagger$ This name for the units is a little ambiguous, since it is unclear grammatically that the logarithm is taken after taking the ratio of pounds per hour. We will assume in the present context that this is clear enough for our purposes.
49,212 | What is the advantage of sparsity? | I would not so much call it an advantage but a way out in otherwise intractable situations. In particular, in high-dimensional problems we face the situation that the number of available predictors $p$ is much larger than the number of observations $n$. As is well known, classical methods like OLS do not work (do not have a unique solution) when $p>n$. Hence, regularized or sparse methods like the LASSO become attractive or even indispensable alternatives.
Now, if you want to demonstrate theoretical results, for example that the LASSO is capable of finding the relevant predictors out of a large set of predictors, it turns out you need sparsity assumptions, i.e. that the set of actually relevant variables is suitably "small".
See, e.g., Statistics for High-Dimensional Data - Methods, Theory and Applications by Peter Bühlmann and Sara van de Geer.
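A small simulated sketch of the $p>n$ situation, assuming scikit-learn (the dimensions, true coefficients and penalty strength are made-up placeholders):
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p = 50, 200                                   # far more predictors than observations
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:5] = [3, -2, 1.5, 2.5, -1]                 # only 5 of the 200 predictors are relevant
y = X @ beta + 0.5 * rng.standard_normal(n)

lasso = Lasso(alpha=0.1).fit(X, y)               # OLS has no unique solution here; the LASSO does
support = np.flatnonzero(lasso.coef_)
print("selected predictors:", support)
print("their coefficients:", lasso.coef_[support].round(2))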
49,213 | What is the advantage of sparsity? | I found this interesting post on sparsity. In my opinion, sparsity also helps in terms of the statistical robustness of the solution: having an overcomplete representation of the features helps average out the statistical fluctuations introduced during the training phase. But this is an intuition; I have no proof.
49,214 | Difference between first one-step ahead forecast and first forecast from fitted model | forecast always produces forecasts beyond the end of the data.
So forecast(fit) produces forecasts for observations 401, 402, ...
and forecast(refit) produces forecasts for observations 501, 502, ...
fitted produces one-step in-sample (i.e., training data) "forecasts". That is, it gives a forecast of observation t using observations up to time t-1 for each t in the data.
So fitted(fit) gives one-step forecasts of observations 1, 2, ... It is possible to produce a "forecast" for observation 1 as a forecast is simply the expected value of that observation given the model and any preceding history.
fitted(refit) gives one-step forecasts of observations 401, 402, .... So it uses the model estimated on observations 1...400, but it uses the data from time 401...500.
Note that forecast(fit)$mean[1] will not be the same as fitted(refit)[1] due to differences in what they are conditioning on. forecast(fit)$mean[1] conditions on the training data (observations 1...400) while fitted(refit) conditions only on the test data and it does not "know" what the training data were. So fitted(refit)[1] is the estimate of observation 401 given the model but no history, while forecast(fit)$mean[1] is the estimate of observation 401 given the model and the data up to time 400.
Update
Note that the model is actually
\begin{align}
y_t &= \mu + n_t \\
n_t &= \phi n_{t-1} + e_t
\end{align}
where $\mu$ is the estimated "intercept" and $\phi$ is the ar coefficient.
So if you write it in the more usual way,
$$
y_t = (1-\phi)\mu + \phi y_{t-1} + e_t
$$
Thus forecasts are given by
> phi <- coef(fit)['ar1']
> mu <- coef(fit)['intercept']
> (by.hand <- phi*test.5 + (1-phi)*mu)
[1] 1.318043 0.010579 0.628453 -0.515169 -2.010278
>
> (auto <- c(forecast(fit)$mean[1], fitted(refit)[2:5]))
[1] 1.318043 0.010579 0.628453 -0.515169 -2.010278
49,215 | Example of a consistent estimator that doesn't grow less variable with increased sample size? | The common meaning of "consistency" and its technical meaning are different. See this page for some discussion. Also, as noted by @hejseb in a comment on another answer here, lack of bias and consistency are not the same.
This quote from the Wikipedia page may help remove some confusion:
Bias is related to consistency as follows: a sequence of estimators is consistent if and only if it converges to a value and the bias converges to zero. Consistent estimators are convergent and asymptotically unbiased (hence converge to the correct value): individual estimators in the sequence may be biased, but the overall sequence still consistent, if the bias converges to zero. Conversely, if the sequence does not converge to a value, then it is not consistent, regardless of whether the estimators in the sequence are biased or not.
The estimator for the mean of a sequence proposed in another answer here:
$$X_1 + \frac{1}{n}$$
thus is not consistent because it does not converge to the true value of the mean as the number of observations increases.
The requirement for convergence means that the estimator must get arbitrarily close to the true value as sample size increases. That would seem to require that the estimator "grow less variable with increased sample size," for any reasonable definition of "less variable."
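A quick simulated sketch of the contrast (Normal data with arbitrary mean, spread and sample sizes; not part of the original answer): the sample mean concentrates around the true mean as $n$ grows, while $X_1 + \frac{1}{n}$ remains as variable as a single observation and so cannot converge to the mean.
import numpy as np

rng = np.random.default_rng(0)
true_mean, reps = 5.0, 10000

for n in (10, 100, 10000):
    samples = rng.normal(true_mean, 2.0, size=(reps, n))
    xbar = samples.mean(axis=1)           # consistent: spread shrinks like 1/sqrt(n)
    other = samples[:, 0] + 1.0 / n       # the estimator from the other answer: spread never shrinks
    print(f"n={n:>6}: sd(sample mean)={xbar.std():.3f}   sd(X1 + 1/n)={other.std():.3f}")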
49,216 | Maximum Likelihood estimation and the Kalman filter | I could be wrong, but what makes sense to me is this:
Define a function for the Kalman filtering and prediction. Make it output the log-likelihood (using v and the covariance matrix of v). The log-likelihood in this case is described in the Stack Exchange post you refer to. Make sure Q, R, mu_0 and A are free parameters.
Optimize that function with respect to those parameters by maximizing the log-likelihood.
Essentially yes: the underlying optimization procedure will start with random parameter values, but from there it will optimize the parameters to fit the observables. I don't see how you can estimate these parameters first and then do the Kalman filter.
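A concrete sketch of this recipe without hand-coding the filter, assuming statsmodels' state-space tools and a simulated local-level series (the noise levels are made up): fit() evaluates the Kalman-filter (prediction-error) log-likelihood and maximizes it numerically over the free variance parameters, which is exactly the procedure described above.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 300
level = np.cumsum(rng.normal(0, 0.5, n))      # latent random walk (state noise sd 0.5)
y = level + rng.normal(0, 1.0, n)             # noisy observations (measurement noise sd 1.0)

model = sm.tsa.UnobservedComponents(y, level="local level")
res = model.fit(disp=False)                   # MLE of the state and observation variances
print(res.params)                             # estimated variances
print(res.llf)                                # maximized log-likelihood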
Source: https://faculty.washington.edu/eeholmes/Files/Intro_to_kalman.pdf
define a function for the kalman filtering and prediction. Make that output the log likelihood (using v and the covariance matrix of v). The log | Maximum Likelihood estimation and the Kalman filter
I could be wrong, but what makes sense to me is this:
define a function for the kalman filtering and prediction. Make that output the log likelihood (using v and the covariance matrix of v). The log likelihood in this case is described in the stack exchange post you refer to. Make sure Q, R, mu_0 and A are free parameters
Optimize the function with respect to those parameters by maximizing the log likelihood.
Essentially yes, the underlying optimization procedure will start with random parameter values but from there it will optimize the parameters to fit the observables. I don't see how you can estimate these parameters first and then do the kalman filter.
Source: https://faculty.washington.edu/eeholmes/Files/Intro_to_kalman.pdf | Maximum Likelihood estimation and the Kalman filter
I could be wrong, but what makes sense to me is this:
define a function for the kalman filtering and prediction. Make that output the log likelihood (using v and the covariance matrix of v). The log |
49,217 | Maximum Likelihood estimation and the Kalman filter | We need to first clarify things here. The original derivation of the Kalman filter is optimal for causal predictions. That means you predict at time $t$ given observations up to time $t$.
Now, for maximum likelihood (ML) inference of the parameters, assuming that these parameters are shared across time, during inference of the hidden state variables you need to use the non-causal version of the Kalman filter, that is, the forward-backward Kalman filter (RTS smoothing).
After that you carry out ML estimation as usual. This is an instance of the well-known Expectation-Maximization algorithm, applied within the context of Kalman filtering as early as 1982! Therefore it is iterative, and you do not necessarily arrive at a global optimum. As is typical with these models, starting from sensible values of the hyperparameters and running the forward-backward algorithm from there will give better results. This is the case with most Bayesian models that result in non-convex objective functions (EM, Variational Inference, ...).
For further reference, check Appendix A.3 of the review paper by Roweis and Ghahramani:
https://authors.library.caltech.edu/13697/1/ROWnc99.pdf
49,218 | Unbiased estimator of binomial parameter | Just notice that the probability generating function of $X\sim\mathsf{Bin}(m,p)$ is
$$E(a^X)=(1-p+pa)^m$$
Setting $1-p+ap=1+p$ gives $a=2$.
So for $X_i\sim \mathsf{Bin}(m,p)$ we have $$E(2^{X_i})=(1+p)^m$$
This also means $$E\left(\frac{1}{n}\sum_{i=1}^n 2^{X_i}\right)=(1+p)^m$$
Hence an unbiased estimator of $(1+p)^m$ based on a sample of size $n$ is $$T=\frac{1}{n}\sum\limits_{i=1}^n 2^{X_i}$$
Here an unbiased estimate of $(1+p)^{10}$ is therefore the observed value of $T$, which is $24.8$.
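A quick simulation check of the unbiasedness (the values $m=10$, $n=5$ and $p=0.35$ are illustrative; the $24.8$ above comes from the sample in the question, which is not reproduced here):
import numpy as np

rng = np.random.default_rng(0)
m, n, p = 10, 5, 0.35                     # binomial size, sample size, success probability
reps = 200000

X = rng.binomial(m, p, size=(reps, n))
T = (2.0 ** X).mean(axis=1)               # T = (1/n) * sum of 2^{X_i}
print("mean of T over replications:", T.mean())
print("target (1+p)^m:", (1 + p) ** m)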
$$E(a^X)=(1-p+pa)^m$$
Setting $1-p+ap=1+p$ gives $a=2$.
So for $X_i\sim \mathsf{Bin}(m,p)$ we have $$E(2^{X_i})=(1+p | Unbiased estimator of binomial parameter
Just notice that the probability generating function of $X\sim\mathsf{Bin}(m,p)$ is
$$E(a^X)=(1-p+pa)^m$$
Setting $1-p+ap=1+p$ gives $a=2$.
So for $X_i\sim \mathsf{Bin}(m,p)$ we have $$E(2^{X_i})=(1+p)^m$$
This also means $$E\left(\frac{1}{n}\sum_{i=1}^n 2^{X_i}\right)=(1+p)^m$$
Hence an unbiased estimator of $(1+p)^m$ based on a sample of size $n$ is $$T=\frac{1}{n}\sum\limits_{i=1}^n 2^{X_i}$$
Here an unbiased estimate of $(1+p)^{10}$ is therefore the observed value of $T$, which is $24.8$. | Unbiased estimator of binomial parameter
Just notice that the probability generating function of $X\sim\mathsf{Bin}(m,p)$ is
$$E(a^X)=(1-p+pa)^m$$
Setting $1-p+ap=1+p$ gives $a=2$.
So for $X_i\sim \mathsf{Bin}(m,p)$ we have $$E(2^{X_i})=(1+p |
49,219 | Unbiased estimator of binomial parameter | Let $n$ be the parameter of the binomial, $n=10$ in your case, and $m$ the sample size, $m=5$ in your case. I think there is an approximate answer to this that avoids long explicit summations in the case that $n$ is large and if additionally $np$ (or $n(1-p)$) is also large enough so that the normal approximation to the binomial applies. Unfortunately, $5$ and $10$ are likely too small for the following approximation to be useful, but perhaps it may lead to further ideas.
The sample is $X_1,\ldots,X_m\sim\text{Bin}(n,p)\approx\text{N}(np,np(1-p))$, with sample sum $m\bar{X} \sim \text{Bin}(mn,p)\approx \text{N}(mnp,mnp(1-p))$, so that approximately, the sample mean $\bar{X}\sim \text{N}(np,\frac{np(1-p)}{m})$ and the sample variance $S^2$ is unbiased with $\text{E}[S^2] = np(1-p)$.
Let's use the conventional unbiased estimator for $p$, that is $\hat{p}=\frac{\bar{X}}{n}$, and see what the bias is of the estimator
$$
\hat{\theta} = (1+\hat{p})^n
$$
for $\theta=(1+p)^n$. Now if $n$ is large, then approximately
$$
\theta = (1+p)^n = (1+\frac{np}{n})^n \approx e^{np}\,,\ \ \text{and}\ \ \hat{\theta} = (1+\frac{\bar{X}}{n})^n \approx e^{\bar{X}}\,.
$$
Because $\bar{X}$ is normally distributed, $\hat{\theta}=e^{\bar{X}}$ is lognormally distributed. From the properties of the lognormal distribution we easily obtain, with $\mu=np$ and $\sigma^2=\frac{np(1-p)}{m}$ the mean and variance of $\bar{X}$, that
$$
\text{E}[\hat{\theta}] = e^{\mu+\sigma^2/2} = \exp(np + \frac{np(1-p)}{2m})
$$
The bias of $\hat{\theta}$ is therefore
$$
\text{E}[\hat{\theta}] - \theta = \exp(np + \frac{np(1-p)}{2m}) - \exp(np)
= e^{np} [\exp(\frac{np(1-p)}{2m})-1]
$$
By replacing $p$ by its estimate $\hat{p}$, this can be used to eliminate the bias of $\hat{\theta}$. You can also use $S^2$ to estimate $np(1-p)$, for example I think
$$
e^{n\hat{p}} [e^{S^2/2m} -1]
$$
will be a pretty good estimate of the bias and thus
$$
\hat{\theta}' = e^{n\hat{p}+S^2/2m}
$$
will be a reasonably unbiased estimate of $(1+p)^n$.
49,220 | Why Are Impulse Responses in VECM Permanent? | This is a great question, and I'm learning so bear with me.
What would be a correct interpretation of an impulse response that does not go back to 0 in a VECM?
Riffing on the drunken walk theme, suppose a drunken man is randomly walking when a mean teenager pushes him. The push sends the man stumbling but he regains his footing after a short distance. He shrugs it off and keeps on walking, but he has been displaced a number of feet, continues his drunken walk, and may never return to the original location.
And how would one interpret the cumulative impulse responses in that case, which will then grow (or decrease) infinitely?
Since VECMs are models of the deltas, as long as an "impulse" in one of the variables leads to future deltas that eventually go to zero, then the cumulative response will not grow indefinitely! (Remember, there's no noise to push it around.) But the displacement will also not revert back to a reference value.
If the variables are cointegrated, shouldn't they stay "close" to each other, separated by some value that is constant over time? In that case, one variable cannot grow indefinitely when the other changes by a fixed amount.
They will, and that's exactly what the error correction mechanism enforces, dashing any hopes of an "exogenous" shock. In a simulation that I'm about to show you, whether the unit "shock" happens at equilibrium or a place away from equilibrium also matters...a lot. The impulse response that the urca/vars R package combo gets you, to the best of my understanding, is to what happens to each variable when exactly one variable is increased by one unit and the starting position is at equilibrium (exactly on the cointegrating vector). As soon as the shock occurs, the system is out of equilibrium and both the lag effects of the shock and the error correction mechanism will be in play. It's not clean at all.
I've got a simulation attempting to draw from:
$$
\left(\begin{matrix} \Delta y_t \\ \Delta x_t \end{matrix}\right) =
\left(\begin{matrix} .3 \\ .4 \end{matrix}\right)
\left(\begin{matrix} 1 & -1.3 \end{matrix}\right) \left(\begin{matrix} y_{t-1} \\ x_{t-1} \end{matrix}\right) +
\left(\begin{matrix} .5 & .4 \\ 0 & .8 \end{matrix}\right) \left(\begin{matrix} \Delta y_{t-1} \\ \Delta x_{t-1} \end{matrix}\right)
+ \mathbf{\epsilon}_t,
$$
where, as you can see, I tried to make $x_t$ as exogenous as possible by not letting $y_t$'s previous delta feed into $x_t$'s current-period delta. But that doesn't appear to do much.
Here's how I fed in the unit impulse from x:
T <- 50
x0 <- 100
shock <- 1
offset <- 0
beta <- 1.3 # y_t - beta * x_t is stationary
Alpha <- matrix(c(.3, .4), ncol=1)
Beta_T <- matrix(c(1, -beta), ncol=2)
Gamma <- matrix(c(.5, .4, 0, .8), ncol=2, byrow=TRUE)  # spell out TRUE: T was reassigned to 50 above
# Shock x and run Generate Data block
x_t1 <- c(offset + beta * x0, x0) # start at equilibrium
x_t2 <- c(offset + beta * x0, x0 + shock) # shock in x only
# Generate data
X <- matrix(NA, nrow=T, ncol=2)
X[1, ] <- x_t1
X[2, ] <- x_t2
for (t in 3:T) {
x_lag1 <- X[t - 1, ]
del_x_lag1 <- X[t - 1, ] - X[t - 2, ]
del_x <- Alpha %*% Beta_T %*% x_lag1 + Gamma %*% del_x_lag1
X[t, ] <- x_lag1 + del_x
}
df <- as.data.frame(X)
names(df) <- c('y', 'x')
plot(df$y, main="Effect on y_t of x's impulse at t=2")
plot(df$x, main="Effect on x_t of x's impulse at t=2")
So the unit impulse in $x_t$ leaves both $x_{t+30}$ and $y_{t+30}$ lower by approximately 10 units, without any noise. Note how the shock from $x_1$ to $x_2$ results in a higher value of $x_3$, but after that both shapes look similar. The impulse response plot from the irf function of the vars package looks like this (shown for the effect of a unit impulse in $x$ on $y$):
If you change the offset so that y and x start in a non-equilibrium position, then the scale of the impulse response is dramatically different.
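For completeness, a minimal sketch of how the urca/vars impulse response mentioned above is typically obtained; the lag order K and cointegrating rank r are illustrative choices, not part of the simulation above:
library(urca)
library(vars)
# Estimate the VECM, convert it to its VAR representation, then trace the
# response of y to a shock in x.
vecm <- ca.jo(df[, c("y", "x")], type = "trace", ecdet = "none", K = 2)
var_rep <- vec2var(vecm, r = 1)
plot(irf(var_rep, impulse = "x", response = "y", n.ahead = 30, boot = FALSE))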
49,221 | Leverages and effect of leverage points | We just need to calculate the hat matrix. Write the model for the one-way layout in the form $Y_{ij}= \alpha_j +\epsilon_{ij}$ with one parameter for each group (and no explicit intercept). That will make the calculations simpler (and the hat matrix will not depend on the parametrization chosen). Here $j=1,2,\dotsc,p$ indexes the groups, $i=1,\dotsc,n_j$ indexes observations within group $j$, and the total number of observations is $n=\sum_j n_j$. Then the design matrix $X$ has the form
$$
X =\begin{pmatrix} 1 & 0 & \dots & 0 \\
\dots \\
1 & 0 & \dots & 0 \\
0 & 1 & 0 & \dots \\
\vdots \\
0 & 0 & \dots & 1
\end{pmatrix}
$$
where the number of 1's in group $l$ is $n_l$. Then it is easy to calculate that $X^T X = \text{diag}( n_1, \dotsc, n_p )$ and its inverse is $\text{diag}( n_1^{-1}, \dotsc, n_p^{-1} )$. Finally,
$$
H = X (X^T X)^{-1}X^T = (h_{ij})
$$
where we calculate
$$
h_{ij}= \sum_{s,l} X_{i,s} (X^T X)^{-1}_{sl} (X^T)_{lj} = \\
\sum_{s,l} x_{is} (X^T X)^{-1}_{sl} x_{jl} = \\
\sum_s x_{is} n_s^{-1} x_{js} = \\
\begin{cases} n_l^{-1} &~\text{if observations $i,j$ belong to the same treatment group $l$} \\
0 &~\text{in other cases}
\end{cases}
$$
Then $H$ is a block matrix with diagonal blocks of size $n_l\times n_l$, with all elements equal to $n_l^{-1}$.
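For instance, you can verify this block structure numerically (group sizes 1, 2 and 3 chosen arbitrarily):
# Hat matrix of a one-way layout with group sizes 1, 2 and 3.
g <- factor(rep(1:3, times = c(1, 2, 3)))
X <- model.matrix(~ g - 1)                  # one indicator column per group, no intercept
H <- X %*% solve(t(X) %*% X) %*% t(X)
round(H, 3)   # diagonal blocks filled with 1/n_l
diag(H)       # leverages: 1, 1/2, 1/2, 1/3, 1/3, 1/3 -- note h_ii = 1 for the singleton group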
Then you can easily check the properties you have given in "second question". For the last one, $\hat{Y}=H Y$ so that $\text{var}(HY)=\sigma^2 H$, using that $H$ is (symmetric and ) idempotent. Then as you have given,
$$
Var(\hat {Y_i})=\sigma^2h_{ii}=\frac{\sigma^2}{\frac{1}{h_{ii}}}
$$
and the conclusion "$1/h_{ii}$ is roughly the number of observations needed to estimate $\hat{Y}_i$" (after replacing needed with used) follows by noting that a mean based on $n$ independent observations has variance $\sigma^2/n$, and here identifying $n$ with $1/h_{ii}$.
Finally, your third question: The residual $r_i = Y_i - \hat{Y_i}$ with variance $(1-h_{ii})\sigma^2$. If $h_{ii}=1$, then the variance becomes zero. That means that $\hat{Y_i}=Y_i$ with certainty. That would maybe be good if it was believable, but it is too good to be true: This is not really a perfect prediction based on the other observations, it is just a copy of the observation into its own prediction. Use the form of hat matrix $H$ we calculated above: $h_{ii}=1/n_l$ where $i$ belongs to group $l$. So $h_{ii}=1$ really means that $n_l=1$. Then you can check that in
$$
\hat{Y}_i = (HY)_{i} = \sum_{j=1}^n h_{ij} Y_j = h_{ii} Y_i
$$
since from the block diagonal form of $H$ you can see that $h_{ij}=0$ for $j\not = i$. So the perfect prediction (and residual 0) is a chimera.
49,222 | What's aggregation bias, and how does it relate to the ecological fallacy? | From Clark and Avery (1976):
It has long been known that the use of aggregate data may yield correlation coefficients exhibiting considerable bias above their values at the individual level [10, 21]; and Blalock [2] has shown that the regression coefficients may be biased also. It is well established that it is incorrect to assume that relationships existing at one level of analysis will necessarily demonstrate the same strength at another level. The estimates derived from aggregate data are valid only for the particular system of observational units employed. The consequences of using potentially biased estimates of the correlation and regression coefficients as substitutes for the “true” microlevel estimates are most serious in terms of the causal inferences to be drawn from statistical analyses
And just a bit later in the paper, relating to how aggregation bias and the ecological fallacy are related (bold is mine):
Probably the most serious disadvantage of using aggregate data is the inherent difficulty of making valid multilevel inferences based on a single level of analysis [1]. Alker has identified three types of erroneous inferences that may appear should a researcher attempt to generalize from one level of investigation to another. The individualistic fallacy is the attempt to impute macrolevel (aggregate) relationships from microlevel (individual) relationships. It is the classic aggregation problem first examined by economists, and according to Hannan [15, p. 5] it concerns attempts to group observations on ‘behavioral units’ so as to investigate economic relationships holding for sectors or total economies.” Cross-level fallacies can occur when one makes inferences from one subpopulation to another at the same level of analysis. The ecological fallacy, so named from the work of Robinson [18], is the opposite of the individualistic fallacy and involves making inferences from higher to lower levels of analysis. Robinson demonstrated that there was not necessarily a correspondence between individual and ecological correlations, and that generally the latter would be larger than the former. Although the ecological fallacy has been widely discussed and publicized, it is still a common error in studies involving causal inference.
Clark, W. A., & Avery, K. L. (1976). The effects of data aggregation in statistical analysis. Geographical Analysis, 8(4), 428-438.
The paper is available as a PDF on Google Scholar (not linking because links break)
49,223 | Showing independence between two functions of a set of random variables | The MGF idea works well.
The scale of the exponential distribution doesn't matter, so we may take $\theta=1$ and set the exponential density to
$$f(x) = e^{-x} \mathcal{I}(x \gt 0).$$
Writing
$$Y = \sum_i a_i \log(X_i),\ Z = \sum_i X_i,$$
for $|s|\lt 1$ and $|t|\lt 1$ compute the joint MGF as
$$\eqalign{
\phi_{Y,Z}(s,t) &= \mathbb{E}\left[e^{sY + tZ}\right] \\
&= \int\cdots\int \exp(sY+tZ) \prod_i \exp(-x_i)\mathrm{d}x_i \\
&= \prod_i \int_{0}^\infty \exp\left(sa_i \log(x_i) + (t-1) x_i\right) \mathrm{d}x_i\\
&= \prod_i \int_{0}^\infty x_i^{sa_i} e^{(t-1)x_i}\mathrm{d}x_i\\
&= \prod_i (1-t)^{-(sa_i + 1)} \Gamma(sa_i + 1) \\
&= \left((1-t)^{-n}(1-t)^{-s \sum_i a_i}\right)\ \left(\prod_i \Gamma(a_is + 1)\right).
}$$
The term $(1-t)^{-s\sum_i a_i}$ shows this factors into separate MGFs $\phi_Y(s)\phi_Z(t)$ if and only if $\sum_i a_i = 0.$ (When it does factor, we recognize the components $\phi_Y$ and $\phi_Z$ as the MGFs of a sum of shifted Gumbel distributions and a Gamma distribution, respectively.) Therefore
$Y$ and $Z$ are independent if and only if the $a_i$ sum to zero.
This figure documents a simulation of ten thousand draws of the $X_i$ with $n=3.$ The four panels are scatterplots of $Z$ vs $Y$ for four different sets of vectors $(a_i).$ The red lines are Lowess smooths: a non-horizontal smooth indicates lack of independence. Can you spot the vector that does not sum to zero?
This R code produced the examples.
#
# Specify the problem, the simulation size, and number of examples to plot.
#
n <- 3
n.sim <- 1e4
n.examples <- 4
#
# Create the examples randomly.
#
a <- matrix(rnorm(n * n.examples), n)
a <- scale(a, scale=FALSE) # Will sum to 0
a[, n.examples] <- a[, n.examples] + runif(n, 1/n, 2/n) # Will not sum to 0
#
# Generate the data.
#
x <- matrix(rexp(n*n.sim), n)
#
# Compute Z and various Y.
#
z <- colSums(x)
y <- t(a) %*% log(x)
#
# Show scatterplots and smooths.
#
mfr <- par(mfrow=c(1, n.examples))
invisible(apply(y, 1, function(y) {
plot(y, z, pch=19, col="#00000004")
f <- lowess(z ~ y)
lines(f$x, f$y, col="Red", lwd=2)
}))
par(mfrow=mfr)
49,224 | Non linear regression mixed model | Because you want to predict the optimal temperatures of each strain, treating the strains as fixed makes sense. However, an interaction between the three-level factor and the model for the optimum makes for a headache. I don't think there's enough data to fit it.
A random effects model might help. With your full data of 20 groups you may be able to easily fit a large random effects structure as in the other answer (or even larger). But, in that case you don't get much help predicting the specific optimum of each strain. (Although you'd estimate the mean parameters of the optimum model more accurately due to partial pooling of data across the three groups.)
The simplest approach is to fit a separate model for each strain and predict from that. Here's some code adopted from yours (and also using intervals as in @Walmes answer) that does that and combines the estimates to get the table of estimates and intervals:
library(nlme)  # for gnls() and intervals()
dlist = sapply(levels(df$iso), function(ll) subset(df, iso==ll),
               simplify = FALSE, USE.NAMES = TRUE) # so we get a named list
reslist <- lapply(names(dlist),
function(iso_name) {
n0 <- gnls(diam ~ thy * exp(thq * (temp - thx ) ^ 2 +
thc * (temp - thx) ^ 3),
data=dlist[[iso_name]],
start=c(thy=5.5, thq=-0.08, thx=25, thc=-0.01))
list(thx = c(iso=iso_name, intervals(n0)$coef["thx", ]), model=n0)
})
data.frame(do.call(rbind, lapply(reslist, function(r) r$thx)))
table:
iso lower est. upper
1 Itiquira 23.076061236415 25.2831326285258 27.4902040206366
2 Londrina 23.7432027069316 24.4043925569263 25.0655824069209
3 Sinop 25.2525659791209 26.4496178512724 27.6466697234239
you can explore the fit of each model using plot(reslist[[1]]$model) or summary(reslist[[1]])
Edit: as @Walmes pointed out (and as the question now shows), you can do this more cleanly using nlme::nlsList; see the sketch below.
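A minimal sketch of that nlsList route, reusing the starting values from above:
library(nlme)
# One separate nonlinear fit per strain in a single call; intervals() then gives
# per-strain confidence intervals, including for thx (the optimum temperature).
fits <- nlsList(diam ~ thy * exp(thq * (temp - thx)^2 + thc * (temp - thx)^3) | iso,
                data = df,
                start = c(thy = 5.5, thq = -0.08, thx = 25, thc = -0.01))
summary(fits)
intervals(fits)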
49,225 | Non linear regression mixed model | Use intervals() to get confidence intervals of the model parameters. I also think that the mixed model is more appropriate. But a model with random effects in all parameters can be difficult to estimate. With the data you provided, the model is estimated but the correlation between random effects is close to one. I prefer random effects in thy and thx parameters because they are low order parameters and the strains seems to vary most in these parameters.
library(nlme)  # for groupedData(), nlme(), intervals()
df <- groupedData(diam ~ temp | iso, data = df, order = FALSE)
str(df)
n0 <- nlme(diam ~ thy * exp(thq * (temp - thx)^2 + thc * (temp - thx)^3),
data = df,
fixed = thy + thq + thx + thc ~ 1,
# random = thy + thq + thx + thc ~ 1 | iso,
random = thy + thx ~ 1 | iso,
start = c(thy = 5.5, thq = -0.08, thx = 25, thc = -0.01))
summary(n0)
plot(augPred(n0, level = 0:1))
intervals(n0, which = "fixed")
ranef(n0)
49,226 | How do I find the percentile p (or quantile q) from a weighted dataset? | To solve for quantile $q$ in a weighted set of ordered observations $x_1, x_2, \ldots$:
Let $W$ be the sum of the weights.
Let $w_1, w_2, \ldots$ equal the observation weights ordered by the ranks of the observations.
Find the largest $k$ such that $w_1+w_2+\ldots+w_k \leq Wq$.
Then $x_k$ is your estimate for the $q$th quantile.
Notice $x_k$ estimates a range of quantiles, just like you would see if you created an expanded dataset replicating observations repeatedly based on their weights.
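A direct R translation of this recipe (the function name and the small-q fallback are mine):
# Direct translation of the steps above. For very small q no k satisfies the
# inequality, so we fall back to the smallest observation.
weighted_quantile <- function(x, w, q) {
  ord <- order(x)
  x <- x[ord]; w <- w[ord]          # weights ordered by the ranks of the observations
  W <- sum(w)
  k <- which(cumsum(w) <= W * q)    # all k with w_1 + ... + w_k <= W q
  if (length(k) == 0) return(x[1])
  x[max(k)]                         # largest such k
}
weighted_quantile(c(10, 20, 30, 40), w = c(1, 2, 3, 4), q = 0.5)  # 20: 1 + 2 <= 5 but 1 + 2 + 3 > 5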
49,227 | How to compute estimate for the first time series value using ARIMA model? | There are different methods that are commonly used to calculate or set initial values for time series modeling algorithms.
The simplest are heuristics, like using the overall mean, or the first observation, or the mean of the first $n$ observations, or whatever. These are often used when there is no underlying statistical model, or when we don't care about it, like in Exponential Smoothing or in Croston's method for intermittent demands.
Conversely, you can treat not only your AR and MA coefficients but also the initial values as parameters, and then optimize these by maximizing the likelihood, assuming that you do have a statistical model. Conditional sums of squares work similarly, though I don't have the details handy.
Sometimes we will use heuristics even if a statistical model is available. For instance, if you have a long seasonal pattern, say weekly data with yearly seasonality, you will have to optimize a lot of parameters, and you likely won't have a lot of data compared to the number of parameters, so you may end up overfitting if you use maximum likelihood. In such case, it makes sense to use heuristics for computational and stability reasons.
Finally, for your specific application, here is what ?Arima tells you for the method parameter:
The default (unless there are missing values) is to use
conditional-sum-of-squares to find starting values, then maximum
likelihood.
If you are interested in the gory details, you could look into the source code.
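For illustration, this choice is controlled by the method argument of stats::arima() (which forecast::Arima() wraps); the simulated series below is arbitrary:
set.seed(1)
y <- arima.sim(model = list(ar = 0.6, ma = 0.3), n = 200)
arima(y, order = c(1, 0, 1), method = "CSS-ML")  # default: CSS starting values, then ML
arima(y, order = c(1, 0, 1), method = "ML")      # full maximum likelihood
arima(y, order = c(1, 0, 1), method = "CSS")     # conditional sum of squares only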
49,228 | What statistical analysis to run for count data in R? | I'd recommend starting off with a Poisson regression model, which is well suited for count data. Since you seem to have multiple counts at different locations, you will need to use a method that takes into account the correlation of these observations within their clusters. I would suggest using a Generalized Estimating Equations (GEE) approach or a mixed model approach. If you aren't interested in examining differences between measurement sites, then I'd recommend going with GEE since it offers population-average estimates. There are plenty of posts on Cross-Validated that describe GEE and mixed models.
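A minimal sketch of the GEE route with geepack; the data frame and variable names (deer, count, season, site) are placeholders, not from the question:
library(geepack)
# Population-average Poisson GEE with counts clustered by site (placeholder names).
fit <- geeglm(count ~ season, id = site, data = deer,
              family = poisson, corstr = "exchangeable")
summary(fit)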
49,229 | What statistical analysis to run for count data in R? | One simplified approach would entail pooling the counts for spring over the three-year interval, and the counts for fall over the same three years, separately. You can approach this as a goodness-of-fit (GoF) chi-squared test. The idea is that the number of counts would (under the null hypothesis of no difference between seasons) follow a uniform distribution across seasons.
For instance,
counts = c(spring = 453, fall = 324)
chisq.test(counts, p = c(0.5,0.5), correct = F)
Chi-squared test for given probabilities
data: counts
X-squared = 21.417, df = 1, p-value = 3.695e-06
A more complete approach would entail setting up a Poisson regression model in which the explanatory variables are season and year:
counts = c(139, 111, 152, 92, 162, 121)
year = rep(c("'14","'15","'16"), 2)
season = rep(c("spring","fall"), 3)
dat = data.frame(counts, year, season)
summary(glm(counts ~ year + season, family=poisson))
Call:
glm(formula = counts ~ year + season, family = poisson)
Deviance Residuals:
1 2 3 4 5 6
0.3707 -0.2671 -0.5720 -0.4440 0.2243 0.6644
Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) 4.56772 0.07828 58.348 < 2e-16 ***
year'15 0.16705 0.08940 1.869 0.0617 .
year'16 0.16705 0.08940 1.869 0.0617 .
seasonspring 0.33515 0.07276 4.606 4.1e-06 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
(Dispersion parameter for poisson family taken to be 1)
Null deviance: 27.3707 on 5 degrees of freedom
Residual deviance: 1.2248 on 2 degrees of freedom
AIC: 49.333
Number of Fisher Scoring iterations: 3
49,230 | What statistical analysis to run for count data in R? | There is a debate in biology and ecology about transformation (e.g. a log transform) versus model reformulation (a GLM) when dealing with count data. It really depends on the outcome you want to achieve and the cost (error) you can bear. If you want to visualize the data using boxplots and avoid outliers, I would recommend transforming the data. If you want to develop statistical models, GLMs are recommended. Please keep in mind that count data are discrete and typically violate the normality assumption, so a t-test is not recommended. Please also refer to this paper and this page. It has a good description of the history of the debate I mentioned at the beginning.
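A toy illustration of the two routes on simulated counts, purely to show the syntax:
set.seed(42)
x <- rnorm(100)
y <- rpois(100, lambda = exp(0.5 + 0.8 * x))
fit_trans <- lm(log(y + 1) ~ x)            # transform, then an ordinary linear model
fit_glm   <- glm(y ~ x, family = poisson)  # model the counts directly with a GLM
summary(fit_glm)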
49,231 | What statistical analysis to run for count data in R? | A basic statistical test you can do is the unpaired t-test. This will give you a t-value, a p-value, and a confidence interval (default 95%) for the difference in means, to help determine whether there is a "true" difference between the number of deer in spring versus fall. R code below:
t.test(spring_deer_count, fall_deer_count)
If the confidence interval for the difference in means excludes zero (equivalently, if the p-value is below your chosen significance level), you can reject the null hypothesis of no difference between the seasons.
49,232 | Should I have "Confidence" in Credibility Intervals? | If you have accurately characterized your beliefs about a particular quantity in the prior distribution, then YES, you should have "confidence" in your updated beliefs, represented by the posterior distribution (and therefore credible intervals constructed from it), because Bayes' rule provides the appropriate way to update your beliefs upon seeing data.
The above statement is for one particular experiment, but more importantly, one particular quantity of interest. It says nothing about what happens to a whole set of experiments or quantities, nor should it, since this one particular quantity is the quantity of interest. It also says nothing about where the true parameter value is, but instead says something about where you believe the true parameter value is. So it says exactly what you should believe based on your choice of prior, your choice of likelihood, and the data you observed.
For coverage of confidence or credible intervals, we need to construct a repeatable statistical procedure. The above procedure is not repeatable because the prior construction is specific to the particular quantity of interest. But we can construct default Bayesian procedures whose priors are constructed to satisfy certain properties. One of those properties is probability matching, and credible intervals constructed based on probability matching priors attain the appropriate frequentist coverage over repeated use of this prior.
So, if coverage gives you "confidence", then you should probably only use a default Bayesian procedure with a probability matching prior.
49,233 | Should I have "Confidence" in Credibility Intervals? | It sounds from your statement, "how can I have 95% belief in anything (you either believe something is true or you don't)," that you have a great deal of confidence in what statistics can tell us. Inherently, I (a proponent of Bayesian methods) ask how much to believe in something, and with new information, I am able to adjust my belief (adjust what I think). To me, belief simply represents how sure I am of the presence of an effect in combination with the totality of previous evidence, with "truth" never being given consideration.
For instance, I have an effect size, d, of 0.2 and a CI spanning from 0.004-0.510. How is it you can use this to derive a binary truth? What is your truth here? Is your sense of truth based on hypothetical resampling, from which a fixed but unknown parameter is captured by hypothetical intervals that are hypothetically constructed from the aforementioned resampling that has not actually occurred? In order for that to be the way in which truth is bestowed upon us, you would have to believe it to be so. I believe that to be improbable.
Based only on this effect size and interval, I would infer that, although the interval does not span zero, values very close to zero are probable. I would then pick a value based on past evidence to assess the posterior probability of an effect large enough to be considered important (practical significance). In this case, I would pick 0.2 (a common benchmark in psychology), which would give a posterior probability of roughly 50% that the effect is less than 0.2, and likewise a 50% posterior probability that it is greater than 0.2. Based on this information, I would be very unsure of the importance of the effect, but would also consider the presence of an effect to be probable and worth consideration. For me, this is the appropriate inference based on the effect size and interval, and it can only be obtained using Bayesian methods.
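A rough numerical version of that reasoning, treating the reported $d$ as the centre of an approximately normal posterior and backing a standard deviation out of the interval width (an approximation, nothing more):
est <- 0.2
sd_post <- (0.510 - 0.004) / (2 * 1.96)    # approximate posterior SD from the 95% interval
pnorm(0.2, mean = est, sd = sd_post)       # posterior probability the effect is below 0.2 (~0.5)
1 - pnorm(0.2, mean = est, sd = sd_post)   # posterior probability the effect is above 0.2 (~0.5)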
As much as we would like a statistical programming language to give us all the answers, it cannot. We actually have to think and, from what we think, we form beliefs. As such, I would suggest giving serious consideration to the limits of statistical inference and to not be shy about being somewhat sure/unsure of what the truth actually is.
49,234 | Formal test for exogeneity of instruments | If you have exactly as many instrumental variables as endogenous regressors, then there is no way to test for IV validity in a homogeneous effects model.
Consider, for example the following model:
$$
Y = \alpha + \beta D + U
$$
This is a homogeneous effects model: the treatment effect is a constant $\beta$ that is the same for everyone. The two IV assumptions are relevance and exogeneity. Relevance requires that $\text{Cov}(Z,D) \neq 0$; this is directly testable. Exogeneity requires that $\text{Cov}(Z,U) = 0$; this cannot be tested. To see why, suppose that $Z$ is in fact an invalid instrument, i.e. that $\text{Cov}(Z,U) \neq 0$.
In this case the IV estimand is still perfectly well-defined, it simply doesn't equal $\beta$:
$$
\beta_{IV} = \frac{\text{Cov}(Z,Y)}{\text{Cov}(Z,D)} = \beta + \frac{\text{Cov}(Z,U)}{\text{Cov}(Z,D)}, \quad
\alpha_{IV} = \mathbb{E}(Y) - \beta_{IV} \mathbb{E}(D).
$$
Now, let $V$ be the IV residual: $V \equiv Y - \alpha_{IV} - \beta_{IV} D$.
Note that $V$ is only equal to $U$ if $Z$ is a valid instrument, because this is the only way that we can have $\beta_{IV} = \beta$ and $\alpha_{IV} = \alpha$.
Using our definition of $V$, we can calculate $\text{Cov}(Z,V)$ as follows:
\begin{align*}
\text{Cov}(Z,V) &= \text{Cov}(Z, Y - \alpha_{IV} - \beta_{IV} D) = \text{Cov}(Z,Y) - \beta_{IV} \text{Cov}(Z,D) \\
&= \text{Cov}(Z,Y) - \frac{\text{Cov}(Z,Y)}{\text{Cov}(Z,D)} \text{Cov}(Z,D) = 0.
\end{align*}
In other words, $Z$ is always perfectly uncorrelated with the IV residual $V$ by construction, regardless of whether $Z$ is correlated with the structural error $U$.
A Durbin-Hausman-Wu test checks whether the OLS and IV estimands are the same. This does not tell us whether the instrument is invalid.
When there are more instruments than endogenous regressors, an overidentifying restrictions test can be used to test the null hypothesis that both instruments are valid. The intuition is as follows. Continue to assume that $Y = \alpha + \beta D + U$ but suppose now that we have two relevant instruments $Z_1$ and $Z_2$, i.e. $\text{Cov}(Z_1, D) \neq 0$ and $\text{Cov}(Z_2,D)\neq 0$. Define two IV estimands: one that uses $Z_1$ to instrument for $D$ and another that uses $Z_2$, namely
$$
\beta_{IV}^{(1)} \equiv \frac{\text{Cov}(Z_1,Y)}{\text{Cov}(Z_1,D)} = \beta + \frac{\text{Cov}(Z_1,U)}{\text{Cov}(Z_1,D)}
$$
and
$$
\beta_{IV}^{(2)} \equiv \frac{\text{Cov}(Z_2,Y)}{\text{Cov}(Z_2,D)} = \beta + \frac{\text{Cov}(Z_2,U)}{\text{Cov}(Z_2,D)}.
$$
Taking differences of the two estimands, we obtain
$$
\beta_{IV}^{(1)} - \beta_{IV}^{(2)} = \frac{\text{Cov}(Z_1,U)}{\text{Cov}(Z_1,D)} - \frac{\text{Cov}(Z_2,U)}{\text{Cov}(Z_2,D)}.
$$
If both $Z_1$ and $Z_2$ are valid instruments, then $\text{Cov}(Z_1,U) = \text{Cov}(Z_2,U) = 0$ which implies $\beta_{IV}^{(1)} - \beta_{IV}^{(2)} = 0$.
Therefore, if $\beta_{IV}^{(1)}$ and $\beta_{IV}^{(2)}$ differ then at least one of the instruments $(Z_1,Z_2)$ must be invalid.
While it is formulated in a slightly different way, a test of overidentifying restrictions exploits this basic intuition to provide a test of the joint null hypothesis that both instruments are valid: $\text{Cov}(Z_1,U) = \text{Cov}(Z_2,U) = 0$.
While this example concerns two instruments in a model with a single endogenous regressor, the same idea applies whenever there are more instruments than endogenous regressors.
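The following small simulation, which is not part of the original answer, illustrates the intuition: with two valid instruments the two just-identified IV estimands agree, and making one instrument invalid drives them apart. All numbers are made up for illustration.
# Two valid instruments: both IV estimands recover beta = 2
set.seed(1)
n  <- 1e5
z1 <- rnorm(n); z2 <- rnorm(n); u <- rnorm(n)
d  <- 0.5 * z1 + 0.5 * z2 + 0.7 * u + rnorm(n)   # D is endogenous (depends on U)
y  <- 1 + 2 * d + u
c(cov(z1, y) / cov(z1, d), cov(z2, y) / cov(z2, d))   # both close to 2
# Now make z2 invalid (correlated with U): the two estimands separate
z2b <- z2 + 0.8 * u
db  <- 0.5 * z1 + 0.5 * z2b + 0.7 * u + rnorm(n)
yb  <- 1 + 2 * db + u
c(cov(z1, yb) / cov(z1, db), cov(z2b, yb) / cov(z2b, db))   # no longer equal
In practice this comparison is carried out with an overidentifying restrictions (Sargan/Hansen) test rather than by eye.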
In a model with heterogeneous treatment effects, the equivalent of instrument exogeneity does have testable implications even if there are as many endogenous regressors as instruments. See the following references for details:
Huber, Martin, and Giovanni Mellace. "Testing instrument validity for LATE identification based on inequality moment constraints." Review of Economics and Statistics 97.2 (2015): 398-411.
Mourifié, Ismael, and Yuanyuan Wan. "Testing local average treatment effect assumptions." Review of Economics and Statistics 99.2 (2017): 305-313.
Kitagawa, Toru. "A test for instrument validity." Econometrica 83.5 (2015): 2043-2063. | Formal test for exogeneity of instruments | If you have exactly as many instrumental variables as endogenous regressors, then there is no way to test for IV validity in a homogenous effects model.
Consider, for example the following model:
$$
| Formal test for exogeneity of instruments
If you have exactly as many instrumental variables as endogenous regressors, then there is no way to test for IV validity in a homogenous effects model.
Consider, for example the following model:
$$
Y = \alpha + \beta D + U
$$
This is a homogeneous effects model: the treatment effect is a constant $\beta$ that is the same for everyone. The two IV assumptions are relevance and exogeneity. Relevance requires that $\text{Cov}(Z,D) \neq 0$. This is directly testable. Exogeneity requires that $\text{Cov}(Z,U) = 0$. This cannot be tested. To see why, suppose that $Z$ is in fact an invalid (endogenous) instrument, i.e. that $\text{Cov}(Z,U) \neq 0$.
In this case the IV estimand is still perfectly well-defined, it simply doesn't equal $\beta$:
$$
\beta_{IV} = \frac{\text{Cov}(Z,Y)}{\text{Cov}(Z,D)} = \beta + \frac{\text{Cov}(Z,U)}{\text{Cov}(Z,D)}, \quad
\alpha_{IV} = \mathbb{E}(Y) - \beta_{IV} \mathbb{E}(D).
$$
Now, let $V$ be the IV residual: $V \equiv Y - \alpha_{IV} - \beta_{IV} D$.
Note that $V$ is only equal to $U$ if $Z$ is a valid instrument, because this is the only way that we can have $\beta_{IV} = \beta$ and $\alpha_{IV} = \alpha$.
Using our definition of $V$, we can calculate $\text{Cov}(Z,V)$ as follows:
\begin{align*}
\text{Cov}(Z,V) &= \text{Cov}(Z, Y - \alpha_{IV} - \beta_{IV} D) = \text{Cov}(Z,Y) - \beta_{IV} \text{Cov}(Z,D) \\
&= \text{Cov}(Z,Y) - \frac{\text{Cov}(Z,Y)}{\text{Cov}(Z,D)} \text{Cov}(Z,D) = 0.
\end{align*}
In other words, $Z$ is always perfectly uncorrelated with the IV residual $V$ by construction, regardless of whether $Z$ is correlated with the structural error $U$.
A Durbin-Hausman-Wu test checks whether the OLS and IV estimands are the same. This does not tell us whether the instrument is invalid.
When there are more instruments than endogenous regressors, an overidentifying restrictions test can be used to test the null hypothesis that both instruments are valid. The intuition is as follows. Continue to assume that $Y = \alpha + \beta D + U$ but suppose now that we have two relevant instruments $Z_1$ and $Z_2$, i.e. $\text{Cov}(Z_1, D) \neq 0$ and $\text{Cov}(Z_2,D)\neq 0$. Define two IV estimands: one that uses $Z_1$ to instrument for $D$ and another that uses $Z_2$, namely
$$
\beta_{IV}^{(1)} \equiv \frac{\text{Cov}(Z_1,Y)}{\text{Cov}(Z_1,D)} = \beta + \frac{\text{Cov}(Z_1,U)}{\text{Cov}(Z_1,D)}
$$
and
$$
\beta_{IV}^{(2)} \equiv \frac{\text{Cov}(Z_2,Y)}{\text{Cov}(Z_2,D)} = \beta + \frac{\text{Cov}(Z_2,U)}{\text{Cov}(Z_2,D)}.
$$
Taking differences of the two estimands, we obtain
$$
\beta_{IV}^{(1)} - \beta_{IV}^{(2)} = \frac{\text{Cov}(Z_1,U)}{\text{Cov}(Z_1,D)} - \frac{\text{Cov}(Z_2,U)}{\text{Cov}(Z_2,D)}.
$$
If both $Z_1$ and $Z_2$ are valid instruments, then $\text{Cov}(Z_1,U) = \text{Cov}(Z_2,U) = 0$ which implies $\beta_{IV}^{(1)} - \beta_{IV}^{(2)} = 0$.
Therefore, if $\beta_{IV}^{(1)}$ and $\beta_{IV}^{(2)}$ differ then at least one of the instruments $(Z_1,Z_2)$ must be invalid.
While it is formulated in a slightly different way, a test of overidentifying restrictions exploits this basic intuition to provide a test of the joint null hypothesis that both instruments are valid: $\text{Cov}(Z_1,U) = \text{Cov}(Z_2,U) = 0$.
While this example concerns two instruments in a model with a single endogenous regressor, the same idea applies whenever there are more instruments than endogenous regressors.
In a model with heterogeneous treatment effects, the equivalent of instrument exogeneity does have testable implications even if there are as many endogenous regressors as instruments. See the following references for details:
Huber, Martin, and Giovanni Mellace. "Testing instrument validity for
LATE identification based on inequality moment constraints." Review
of Economics and Statistics 97.2 (2015): 398-411.
Mourifié, Ismael, and Yuanyuan Wan. "Testing local average treatment effect assumptions." Review of Economics and Statistics 99.2 (2017): 305-313.
Kitagawa, Toru. "A test for instrument validity." Econometrica 83.5 (2015): 2043-2063. | Formal test for exogeneity of instruments
If you have exactly as many instrumental variables as endogenous regressors, then there is no way to test for IV validity in a homogenous effects model.
Consider, for example the following model:
$$
|
49,235 | Formal test for exogeneity of instruments | Hausman and Wu specifications and the test for over identification will do. | Formal test for exogeneity of instruments | Hausman and Wu specifications and the test for over identification will do. | Formal test for exogeneity of instruments
Hausman and Wu specifications and the test for over identification will do. | Formal test for exogeneity of instruments
Hausman and Wu specifications and the test for over identification will do. |
49,236 | Formal test for exogeneity of instruments | You're confusing the concept of endogeneity of instrument with its independence from your outcome.
Given the equation:
$y=\beta_0+\beta_1x+\beta_2z+u$
where $y$ is the outcome, $x$ is the endogenous variable, $z$ is an instrument, and $u$ are unobservables. Endogeneity is what happens when one or more of your right-hand-side variables is correlated with $u$, so for your instrument to be endogenous, it would have to be correlated with $u$ and not $y$.
Recall the 3 criteria for a valid instrument (Wooldridge, 2009):
$z$ is not directly related to $y$ except through $x$ (the exclusion restriction; in the equation above, $\beta_2=0$)
$cov(z,x)≠0$
$cov(z,u)=0$
The exogeneity of the instrument criterion refers to bullet point 3 above, and an over-identified model is required to test this criterion. The remaining 2 criteria, however, can easily be tested. Remember that criterion 1 means $z$ cannot be directly related to $y$, except through $x$: $z$ and $y$ are allowed to be related in a bivariate relationship, but as soon as $x$ is added to the model, the relationship should be null. One easy way of testing this is to fit
$y=\beta_0+\beta_1x+\beta_2z+V\beta+\epsilon$,
where $x$ is the endogenous variable and $V$ is a vector of exogenous control variables. Controlling for all model covariates, including the endogenous variable, test the coefficient of $z$: $\beta_2$ should be non-significant.
Criterion 2 (also called the Test of Instrument Relevance) can be tested in a similar way, by regressing $x$ on $z$ using the same control variables:
$x=\delta_0+\delta_1z+V\delta+\nu$,
The test on the coefficient of $z$ ($\delta_1$) should be statistically significant in this case.
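A minimal sketch of these two regression checks, not from the original answer; the data frame and variable names (dat, y, x, z, v1, v2) are placeholders.
# 1) exclusion-style check: z should not predict y once x and the controls are in the model
m1 <- lm(y ~ x + z + v1 + v2, data = dat)
summary(m1)$coefficients["z", ]    # coefficient on z should be non-significant
# 2) relevance check: z should strongly predict the endogenous regressor x
m2 <- lm(x ~ z + v1 + v2, data = dat)
summary(m2)$coefficients["z", ]    # coefficient on z should be clearly significant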
Reference
Wooldridge JM. Introductory Econometrics: A Modern Approach. 4th ed. Mason, OH, USA: South-Western, Cengage Learning; 2009. | Formal test for exogeneity of instruments | You're confusing the concept of endogeneity of instrument with its independence from your outcome.
Given the equation:
$y=\beta_0+\beta_1x+\beta_2z+u$
where $y$ is the outcome, $x$ is the endogenous | Formal test for exogeneity of instruments
You're confusing the concept of endogeneity of instrument with its independence from your outcome.
Given the equation:
$y=\beta_0+\beta_1x+\beta_2z+u$
where $y$ is the outcome, $x$ is the endogenous variable, $z$ is an instrument, and $u$ are unobservables. Endogeneity is what happens when one or more of your right-hand-side variables is correlated with $u$, so for your instrument to be endogenous, it would have to be correlated with $u$ and not $y$.
Recall the 3 criteria for a valid instrument (Wooldridge, 2009):
$cov(z,y)=0$
$cov(z,x)≠0$
$cov(z,u)=0$
The exogeneity of the instrument criterion refers to bullet point 3 above, and an over-identified model is required to test this criterion. The remaining 2 criteria, however, can easily be tested. Remember that $cov(z,x)≠0$ means $z$ cannot be directly related to $y$, except through $x$. Meaning, $z$ and $y$ are allowed to be related in bivariate relationships, but as soon as $x$ is added to the model, the relationship should be null. One easy way of testing this relationship is to fit
$y=\beta_0+\beta_1x+\beta_2z+V\beta+\epsilon$,
where $x$ is the endogenous variable and $V$ is a vector of exogenous control variables. Controlling for all model covariates, including the endogenous variable, test the coefficient of $z$. $\beta_z$ should be non-significant.
Criterion 2 (also called the Test of Instrument Relevance) can be tested in a similar way but regressing $x$ on $z$ using the same control variables.
$x=\delta_0+\delta_1z+V\delta+\nu$,
The test on the coefficient of $z$ ($\delta_1$) should be statistically significant in this case.
Reference
Wooldridge JM. Introductory Econometrics: A Modern Approach. 4th ed. Mason, OH, USA: South-Western, Cengage Learning; 2009. | Formal test for exogeneity of instruments
You're confusing the concept of endogeneity of instrument with its independence from your outcome.
Given the equation:
$y=\beta_0+\beta_1x+\beta_2z+u$
where $y$ is the outcome, $x$ is the endogenous |
49,237 | Combining transition operators in MCMC | A combination of MCMC proposals can only improve upon each of the Markov operators used, if you do not take computing time into account. There is for instance an early result by Tierney (1994) about the benefits of mixing two MCMC kernels. (One can also argue that Gibbs sampling is nothing but a combination of kernels. While each of those kernels is not even irreducible, since it only generates a subset of the variables, the combination is a valid MCMC generator. Furthermore, it can be shown that a random combination does better than a sequential generation of $\theta_1$, then $\theta_2$, etc.) | Combining transition operators in MCMC | A combination of MCMC proposals can only improve upon each of the Markov operators used, if you do not take computing time into account. There is for instance an early result by Tierney (1994) about t | Combining transition operators in MCMC
A combination of MCMC proposals can only improve upon each of the Markov operators used, if you do not take computing time into account. There is for instance an early result by Tierney (1994) about the benefits of mixing two MCMC kernels. (One can also argue that Gibbs sampling is nothing but a combination of kernels. While each of those kernels is not even irreducible, since it only generates a subset of the variables, the combination is a valid MCMC generator. Furthermore, it can be shown that a random combination does better than a sequential generation of $\theta_1$, then $\theta_2$, etc.) | Combining transition operators in MCMC
A combination of MCMC proposals can only improve upon each of the Markov operators used, if you do not take computing time into account. There is for instance an early result by Tierney (1994) about t |
49,238 | Markov Random Field Non-Positive Distribution | The forward direction is actually quite a deep result, known as the Hammersley-Clifford Theorem. The counterexample for non-positive distributions was found by Moussouris, and you can see it on page 12 here, and the explanation that follows on page 13. The given network is not factorable.
In case the link goes stale, consider a distribution on 4 binary nodes that form a square, with vertices labeled $(a,b,c,d)$ in clockwise order. Then put a uniform distribution on the following 8 configurations (the other 8 having zero probability):
1----1 1----1
| | | |
| | | |
1----1 0----1
1----0 0----1
| | | |
| | | |
1----0 0----1
1----0 0----0
| | | |
| | | |
0----1 0----1
0----0 0----0
| | | |
| | | |
1----0 0----0
The proof then proceeds by assuming the distribution factorizes into pairwise factors over the four edges. An exhaustive check of the eight positive-probability configurations shows that all four pairwise factors would have to be strictly positive, which would give every configuration positive probability. This is a contradiction, since the remaining 8 states have 0 probability.
The forward direction is actually quite a deep result, known as the Hammersley-Clifford Theorem. The counterexample for non-positive distributions was found by Moussouris, and you can see it on page 12 here, and the explanation that follows on page 13. The given network is not factorable.
In case the link goes stale, consider a distribution 4 nodes that form a square, with vertices labeled $(a,b,c,d)$ in clockwise order. Then define a uniform distribution on 8 of the following configurations (with the other 8 having zero probability):
1----1 1----1
| | | |
| | | |
1----1 0----1
1----0 0----1
| | | |
| | | |
1----0 0----1
1----0 0----0
| | | |
| | | |
0----1 0----1
0----0 0----0
| | | |
| | | |
1----0 0----0
The proof is to then assume the distribution factorizes. An exhaustive search on the above configurations will show all four factors are positive. This is a contradiction since the remaining 8 states have 0 probability. | Markov Random Field Non-Positive Distribution
The forward direction is actually quite a deep result, known as the Hammersley-Clifford Theorem. The counterexample for non-positive distributions was found by Moussouris, and you can see it on page 1 |
49,239 | Is Quasi-Poisson the same thing as fitting a Poisson GEE model? | Quasi-poisson GLM is a special case of Poisson GEE.
The specification of GEE (copied from wikipedia) is that
$$U(\beta)=\sum_i \frac{\partial\mu_{ij}}{\partial \beta_k} V_i^{-1}(Y_i-\mu_i(\beta))$$
where $V_i$ is the variance of observation $i$. In the case that these are a constant multiple of the (standard dispersion) poisson mean, this reduces to a poisson GLM with a dispersion parameter. However, the GEE is more general than that; it can cope when there is additional variance structure on top of this. | Is Quasi-Poisson the same thing as fitting a Poisson GEE model? | Quasi-poisson GLM is a special case of Poisson GEE.
The specification of GEE (copied from wikipedia) is that
$$U(\beta)=\sum_i \frac{\partial\mu_{ij}}{\partial \beta_k} V_i^{-1}(Y_i-\mu_i(\beta))$$
where $ | Is Quasi-Poisson the same thing as fitting a Poisson GEE model?
Quasi-poisson GLM is a special case of Poisson GEE.
The specification of GEE (copied from wikipedia) is that
$$U(\beta)=\sum_i \frac{\partial\mu_{ij}}{\partial \beta_k} V_i^{-1}(Y_i-\mu_i(\beta))$$
where $V_i$ is the variance of observation $i$. In the case that these are a constant multiple of the (standard dispersion) poisson mean, this reduces to a poisson GLM with a dispersion parameter. However, the GEE is more general than that; it can cope when there is additional variance structure on top of this. | Is Quasi-Poisson the same thing as fitting a Poisson GEE model?
Quasi-poisson GLM is a special case of Poisson GEE.
The specification of GEE (copied from wikipedia) is that
$$U(\beta)=\sum_i \frac{\partial\mu_{ij}}{\partial \beta_k} V_i^{-1}(Y_i-\mu_i(\beta))$$
where $ |
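The following sketch, which is not part of the quasi-Poisson/GEE answer above, illustrates the comparison on simulated data; the geepack package and its geeglm() function are assumed to be available.
library(geepack)
set.seed(1)
dat <- data.frame(id = rep(1:50, each = 4), x = rnorm(200))   # 50 clusters of 4 counts
dat$y <- rpois(200, lambda = exp(0.3 + 0.5 * dat$x))
fit_quasi <- glm(y ~ x, family = quasipoisson, data = dat)
fit_gee   <- geeglm(y ~ x, family = poisson, id = id, corstr = "exchangeable", data = dat)
summary(fit_quasi)$coefficients   # quasi-Poisson: a single dispersion parameter
summary(fit_gee)$coefficients     # GEE: robust standard errors under a working correlation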
49,240 | Transpose or lack of transpose in the $\hat y=X\hat \beta$ regression equation | In matrix notation
$$\hat{Y} = X \hat{\beta} $$
where $\hat{Y}$ is the fitted $m \times 1$ response vector, $X$ is an $m \times n$ model matrix and $\hat{\beta}$ is the estimated $n \times 1$ coefficient. Each column of $X$ is a predictor and each row is observed predictor values for each observation.
If we write the rows of $X$ as $x_1^T, x_2^T, \ldots, x_m^T$, then each $x_i$ is the vector of observed predictor values for the $i$th observation, and each $x_i^T$ forms one row of $X$. Then, from the matrix notation, for each observation $i$ we have
$$\hat{y}_i = x_i^T \hat{\beta}.$$
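A small numeric check of the two notations (not in the original answer; the numbers are arbitrary):
set.seed(1)
X        <- matrix(rnorm(12), nrow = 4, ncol = 3)  # m = 4 observations, n = 3 predictors
beta_hat <- c(0.5, -1, 2)
yhat_all <- X %*% beta_hat        # matrix form: all fitted values at once
x3       <- X[3, ]                # predictor values for observation i = 3
yhat_3   <- sum(x3 * beta_hat)    # x_i^T beta_hat for that single observation
all.equal(as.numeric(yhat_all[3]), yhat_3)   # TRUE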
As amoeba pointed out, Hastie et al. use the notation $X$ in place of my $x_i$ here, which is different from the $X$ notation in the first equation. | Transpose or lack of transpose in the $\hat y=X\hat \beta$ regression equation | In matrix notation
$$\hat{Y} = X \hat{\beta} $$
where $\hat{Y}$ is the fitted $m \times 1$ response vector, $X$ is an $m \times n$ model matrix and $\hat{\beta}$ is the estimated $n \times 1$ coeffic | Transpose or lack of transpose in the $\hat y=X\hat \beta$ regression equation
In matrix notation
$$\hat{Y} = X \hat{\beta} $$
where $\hat{Y}$ is the fitted $m \times 1$ response vector, $X$ is an $m \times n$ model matrix and $\hat{\beta}$ is the estimated $n \times 1$ coefficient. Each column of $X$ is a predictor and each row is observed predictor values for each observation.
If we to write $X$ as $(x_1^T \, x_2^T \, \ldots \, x_m^T)$ then each $x_i$ is the observed predictors for the $i$th observation. Each $x_i^T$ makes the rows of $X$. Then from the matrix notation, for each observation $i$, we have
$$\hat{y}_i = x_i^T \hat{\beta}.$$
As amoeba pointed out, Hastie et al. use the notation $X$ in place of my $x_i$ here, which is different from the $X$ notation in the first equation. | Transpose or lack of transpose in the $\hat y=X\hat \beta$ regression equation
In matrix notation
$$\hat{Y} = X \hat{\beta} $$
where $\hat{Y}$ is the fitted $m \times 1$ response vector, $X$ is an $m \times n$ model matrix and $\hat{\beta}$ is the estimated $n \times 1$ coeffic |
49,241 | Endogeneity in forecasting | It is certainly true that endogeneity is not acceptable if our goal is to estimate a structural/causal effect. If you are focused on forecasting, however, endogeneity produced by omitted variables is not really a major problem. Endogeneity produces, first of all, biased parameter estimates; other sources of endogeneity, such as measurement error or simultaneity/reverse causation, produce biased parameter estimates as well.
However, if your goal is forecasting (or contemporaneous prediction), your major problem is overfitting. This concept is tied to a loss function, such as mean squared error, that you have to minimize, and it appears when we compare in-sample and out-of-sample measures.
The key concept for understanding this distinction is the bias-variance trade-off. Read my explanation here (Are inconsistent estimators ever preferable?) and, above all, the article cited there.
For another explanation, you can read this article:
http://statisticalhorizons.com/prediction-vs-causation-in-regression-analysis
EDIT:
I embraced the distinction between causation and prediction in light of the arguments in Shmueli (2010), which are mainly based on the bias-variance trade-off. Bias is not the core issue, but it does play a role in prediction too, and therefore "theory" plays a role in prediction as well. So the purely "data-driven" (correlation-driven) approach can be seen as too extreme a perspective even when our goal is pure prediction; the magnitude of the bias matters. However, that magnitude depends on the "true model", which is unknown in any real situation, and so the magnitude of the bias is unknown too. Fortunately this problem is only theoretical and, at least in my opinion, not decisive. The relevant point is that the bias-variance trade-off gives us a justification for viewing regression in two markedly different ways and, more importantly, for adopting very different metrics in each case.
In fact, the perspectives on regression in causal inference and in predictive learning are markedly different, and even more relevant differences exist in the tools/metrics commonly used in each. If we do not accept a clear separation between causation and prediction, those differences in regression practice are very hard to justify.
For example, models like ARMA and artificial neural networks are "free of theory" by definition; they are purely correlation-driven (data-driven). The growing area of predictive learning as a whole follows the same perspective. Those models have demonstrated their effectiveness in practice and their superiority over structural models for forecasting purposes, while structural models are a necessity for causal inference. The ancient Latin saying is in medio stat virtus; however, in my experience of the causation-versus-prediction story, in the middle I see only confusion. | Endogeneity in forecasting | Its sure that endogeneity is not acceptable thing if our goal is to find structural/causal effect. You are focused on forecasting then endogeneity, as produced by omitted variables, actually don't is
It is certainly true that endogeneity is not acceptable if our goal is to estimate a structural/causal effect. If you are focused on forecasting, however, endogeneity produced by omitted variables is not really a major problem. Endogeneity produces, first of all, biased parameter estimates; other sources of endogeneity, such as measurement error or simultaneity/reverse causation, produce biased parameter estimates as well.
However, if your goal is forecasting (or contemporaneous prediction), your major problem is overfitting. This concept is tied to a loss function, such as mean squared error, that you have to minimize, and it appears when we compare in-sample and out-of-sample measures.
The key concept for understanding this distinction is the bias-variance trade-off. Read my explanation here (Are inconsistent estimators ever preferable?) and, above all, the article cited there.
For another explanation, you can read this article:
http://statisticalhorizons.com/prediction-vs-causation-in-regression-analysis
EDIT:
I embraced the distinction between causation and prediction in light of the arguments in Shmueli (2010), which are mainly based on the bias-variance trade-off. Bias is not the core issue, but it does play a role in prediction too, and therefore "theory" plays a role in prediction as well. So the purely "data-driven" (correlation-driven) approach can be seen as too extreme a perspective even when our goal is pure prediction; the magnitude of the bias matters. However, that magnitude depends on the "true model", which is unknown in any real situation, and so the magnitude of the bias is unknown too. Fortunately this problem is only theoretical and, at least in my opinion, not decisive. The relevant point is that the bias-variance trade-off gives us a justification for viewing regression in two markedly different ways and, more importantly, for adopting very different metrics in each case.
In fact, the perspectives on regression in causal inference and in predictive learning are markedly different, and even more relevant differences exist in the tools/metrics commonly used in each. If we do not accept a clear separation between causation and prediction, those differences in regression practice are very hard to justify.
For example, models like ARMA and artificial neural networks are "free of theory" by definition; they are purely correlation-driven (data-driven). The growing area of predictive learning as a whole follows the same perspective. Those models have demonstrated their effectiveness in practice and their superiority over structural models for forecasting purposes, while structural models are a necessity for causal inference. The ancient Latin saying is in medio stat virtus; however, in my experience of the causation-versus-prediction story, in the middle I see only confusion. | Endogeneity in forecasting
Its sure that endogeneity is not acceptable thing if our goal is to find structural/causal effect. You are focused on forecasting then endogeneity, as produced by omitted variables, actually don't is |
49,242 | How to tune the weak learner in boosted algorithms | Yes, weak learners are absolutely required for boosting to be really successful. That is because each boosting round for trees results in more splits and a more complicated model, which will overfit quite quickly if we let it. On the other hand, if we pass a more complicated base model, such as a large polynomial linear regression, then boosting will actually apply some regularization, although linear base learners typically won't give you the best accuracy when boosting.
You mention Kaggle and deep trees. There is a significant difference between standard boosted trees and the base trees used in xgboost or lightgbm (what wins Kaggle): the trees in those algorithms are heavily regularized, which allows us to use deeper trees and still keep them 'weak'.
In terms of param spaces to try, it depends on the algo. If you are using some basic boosted tree like scikit-learn's then you will typically be dealing with really short trees (so max depth around 1-3ish). Whereas with xgboost or lightgbm you will get good results typically going deeper to something like 8-16. Learning rates typically are lower around .01 - .1 although I have seen some go up to .5 for optimal solutions.
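As a purely illustrative sketch (not from the original answer), a small grid over roughly those ranges might look like this, with each row then evaluated by cross-validation:
grid <- expand.grid(
  max_depth     = c(2, 4, 8, 12, 16),
  learning_rate = c(0.01, 0.05, 0.1),
  n_rounds      = c(200, 500, 1000)
)
nrow(grid)   # 45 candidate settings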
Overall, though, the huge parameter-space search is the price we pay for state-of-the-art results. As Tim mentioned in his answer, we typically leverage hyperparameter-optimization packages such as hyperopt or optuna to speed this up.
Yes, weak learners are absolutely required for boosting to be really successful. That is because each boosting round for trees actually results in more splits and a more complicated model. This will overfit quite quick if we let it. On the other hand, if we pass a more complicated model such as a large polynomial linear regression then boosting will actually apply some regularization. Although, linear models won't be able to give you the best accuracy when boosting (typically).
You mention kaggle and deep trees. There is a significant difference between standard boosting trees and the base trees used for xgboost or lightgbm (what wins Kaggle). The trees in these algorithms are heavily regularized which allows us to use deeper trees and still keep them 'weak'.
In terms of param spaces to try, it depends on the algo. If you are using some basic boosted tree like scikit-learn's then you will typically be dealing with really short trees (so max depth around 1-3ish). Whereas with xgboost or lightgbm you will get good results typically going deeper to something like 8-16. Learning rates typically are lower around .01 - .1 although I have seen some go up to .5 for optimal solutions.
Overall though, the huge param space search is the price we pay for state-of-the-art results. As Tim mentioned in his answer we typically leverage hyperparameter packages to speed this up like hyperopt or optuna. | How to tune the weak learner in boosted algorithms
Yes, weak learners are absolutely required for boosting to be really successful. That is because each boosting round for trees actually results in more splits and a more complicated model. This will |
49,243 | How to tune the weak learner in boosted algorithms | The definition of weak learner is:
Something better than a random guess (1), (2), (3)
There is nothing in that definition about overfitting, though weakness means overfitting is highly likely; being a very bad estimator and being a robust estimator tend not to coincide.
Whether or not your learners are weak, you can apply a primitive statistical design of experiments to your grid search to accelerate the overall testing by orders of magnitude.
Instead of a uniform grid, try sparse random sampling. Look at how adaptive plotting functions work, bisecting an interval and looking for changes, and use the same idea; use the multivariate versions.
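An illustrative random-search sketch (not from the original answer; the parameter ranges are arbitrary examples), with each row to be evaluated by cross-validation, keeping the best:
set.seed(1)
n_trials <- 30
candidates <- data.frame(
  max_depth     = sample(1:16, n_trials, replace = TRUE),
  learning_rate = 10^runif(n_trials, min = -2, max = -0.3),  # roughly 0.01 to 0.5, log-uniform
  subsample     = runif(n_trials, 0.5, 1.0)
)
head(candidates)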
These two things are not necessarily connected, and the second one has some very good solutions to be had. | How to tune the weak learner in boosted algorithms | The definition of weak learner is:
Something better than a random guess (1), (2), (3)
There is nothing in that about overfitting, though weakness means overfitting is highly likely. Being a very ba | How to tune the weak learner in boosted algorithms
The definition of weak learner is:
Something better than a random guess (1), (2), (3)
There is nothing in that about overfitting, though weakness means overfitting is highly likely. Being a very bad estimator and being a robust estimator tend to not coincide.
Whether or not your learners are weak, you can do a primitive statistical design of experiment on your grid searching to accelerate the overall testing by orders of magnitude.
Instead of uniform, try random sparse. Look at the functions that make autoplotting, how they bisect and look for changes, and use those. Use the multivariate versions.
These two things are not necessarily connected, and the second one has some very good solutions to be had. | How to tune the weak learner in boosted algorithms
The definition of weak learner is:
Something better than a random guess (1), (2), (3)
There is nothing in that about overfitting, though weakness means overfitting is highly likely. Being a very ba |
49,244 | How to tune the weak learner in boosted algorithms | One of the practical reasons why we use weak learners is that we don't need to care about issues like this. In many cases ensembling many weak learners is just enough to achieve good performance. Weak learners are simple by design, we don't usually tune them. You are correct, if you wanted to tune them, this becomes a complicated optimization problem. If you really want to do this, one thing that could make things easier is to use a more clever optimization algorithm than grid search, for example Bayesian optimization that "cleverly" explores the parameter space. | How to tune the weak learner in boosted algorithms | One of the practical reasons why we use weak learners is that we don't need to care about issues like this. In many cases ensembling many weak learners is just enough to achieve good performance. Weak | How to tune the weak learner in boosted algorithms
One of the practical reasons why we use weak learners is that we don't need to care about issues like this. In many cases ensembling many weak learners is just enough to achieve good performance. Weak learners are simple by design, we don't usually tune them. You are correct, if you wanted to tune them, this becomes a complicated optimization problem. If you really want to do this, one thing that could make things easier is to use a more clever optimization algorithm than grid search, for example Bayesian optimization that "cleverly" explores the parameter space. | How to tune the weak learner in boosted algorithms
One of the practical reasons why we use weak learners is that we don't need to care about issues like this. In many cases ensembling many weak learners is just enough to achieve good performance. Weak |
49,245 | How to determine the cut-point of continuous predictor in survival analysis, optimal or median cut-point? | Before you go any further, ask yourself why you want a cutoff at all and, if so, what you mean by "optimal."
Survival plots may require some choice of a cutoff for display, but if you have a continuous predictor your most important job is to figure out its actual relation to outcome. That might end up being a cutoff in some circumstances, but usually the variable itself or some continuous transformation of it will be more useful in an outcome model.
In that context it's not clear whether your analyses are accounting in any way for other clinical variables, which typically will be associated with a biomarker. Single-variable relations to outcome are only a part, an early part, of this type of study.
Please note that an "optimal" cutoff in any event depends on its intended use. Are the costs of placing a low-risk case into the high-risk category really the same as the opposite type of error? That's the implicit assumption in "optimization" schemes that don't take such misclassification costs into account.
There is nothing wrong with picking a cutoff for display of survival plots or with showing single-variable relations. The median is often used in this context. The cutoff and single-variable analyses should not, however, be the basis of the tests that ultimately demonstrate the variable's significance.
Added in response to comment:
Although many investigators do select predictor variables for Cox multiple regression based on single-variable relations to outcome, this is unwise on several levels. When you have multiple correlated predictors, as is typical in clinical studies, the particular variables found "significant" in any one study will depend heavily on the sample at hand. As for your biomarker, these single-variable relations do not take into account the values of all the other predictors. P-values in the multiple regression will be uninterpretable, as the assumptions that underlie them will have been violated.
At an early stage in your study, you presumably want to show that your biomarker adds something useful. So include in your multiple regression the variables that are traditionally used for prognostication in your field (such as TNM stage in cancer studies), and see whether your biomarker is still related to outcome when those are taken into account. With a standard Cox regression, you can include one predictor for every 10-20 events (recurrences in your case). That will pose a problem for the example illustrated in your first plot, with fewer than 20 events, but not for the larger study illustrated in your last plot.
You can include more predictors in your analysis if you use methods like ridge regression that shrink the individual regression coefficients toward zero. This can be a useful way to build a prognostic model that takes multiple correlated predictors into account, but it's not so well suited for showing that a particular predictor is "significantly" related to outcome.
For more background on how to proceed with this type of work, consider Frank Harrell's Regression Modeling Strategies, the associated class notes, and the rms package in R. | How to determine the cut-point of continuous predictor in survival analysis, optimal or median cut-p | Before you go any further, ask yourself why you want a cutoff at all and, if so, what you mean by "optimal."
Survival plots may require some choice of a cutoff for display, but if you have a continuou | How to determine the cut-point of continuous predictor in survival analysis, optimal or median cut-point?
Before you go any further, ask yourself why you want a cutoff at all and, if so, what you mean by "optimal."
Survival plots may require some choice of a cutoff for display, but if you have a continuous predictor your most important job is to figure out its actual relation to outcome. That might end up being a cutoff in some circumstances, but usually the variable itself or some continuous transformation of it will be more useful in an outcome model.
In that context it's not clear whether your analyses are accounting in any way for other clinical variables, which typically will be associated with a biomarker. Single-variable relations to outcome are only a part, an early part, of this type of study.
Please note that an "optimal" cutoff in any event depends on its intended use. Are the costs of placing a low-risk case into the high-risk category really the same as the opposite type of error? That's the implicit assumption in "optimization" schemes that don't take such misclassification costs into account.
There is nothing wrong with picking a cutoff for display of survival plots or with showing single-variable relations. The median is often used in this context. The cutoff and single-variable analyses should not, however, be the basis of the tests that ultimately demonstrate the variable's significance.
Added in response to comment:
Although many investigators do select predictor variables for Cox multiple regression based on single-variable relations to outcome, this is unwise on several levels. When you have multiple correlated predictors, as is typical in clinical studies, the particular variables found "significant" in any one study will depend heavily on the sample at hand. As for your biomarker, these single-variable relations do not take into account the values of all the other predictors. P-values in the multiple regression will be uninterpretable, as the assumptions that underlie them will have been violated.
At an early stage in your study, you presumably want to show that your biomarker adds something useful. So include in your multiple regression the variables that are traditionally used for prognostication in your field (such as TNM stage in cancer studies), and see whether your biomarker is still related to outcome when those are taken into account. With a standard Cox regression, you can include one predictor for every 10-20 events (recurrences in your case). That will pose a problem for the example illustrated in your first plot, with fewer than 20 events, but not for the larger study illustrated in your last plot.
You can include more predictors in your analysis if you use methods like ridge regression that shrink the individual regression coefficients toward zero. This can be a useful way to build a prognostic model that takes multiple correlated predictors into account, but it's not so well suited for showing that a particular predictor is "significantly" related to outcome.
For more background on how to proceed with this type of work, consider Frank Harrell's Regression Modeling Strategies, the associated class notes, and the rms package in R. | How to determine the cut-point of continuous predictor in survival analysis, optimal or median cut-p
Before you go any further, ask yourself why you want a cutoff at all and, if so, what you mean by "optimal."
Survival plots may require some choice of a cutoff for display, but if you have a continuou |
49,246 | How to determine the cut-point of continuous predictor in survival analysis, optimal or median cut-point? | Obviously the median is the more robust choice in this case, unless there is some clinical consideration to do otherwise. The only reasonable use for finding an "optimal cut-off" would be in order to determine some cut-off for some future analysis. In general, if you want to make a continuous covariate discrete, the decision on the cut-off should be taken before seeing the data.
In your situation, I believe that the p-value is rendered meaningless, as it is a form of 'p-value hacking'. I would not trust the difference in the second figure, unless some bootstrapping procedure confirms that the cut-off point is legitimate (quite difficult to do with a small data set). | How to determine the cut-point of continuous predictor in survival analysis, optimal or median cut-p | Obviously the median is the more robust choice in this case, unless there is some clinical consideration to do otherwise. The only reasonable use for finding an "optimal cut-off" would be in order to | How to determine the cut-point of continuous predictor in survival analysis, optimal or median cut-point?
Obviously the median is the more robust choice in this case, unless there is some clinical consideration to do otherwise. The only reasonable use for finding an "optimal cut-off" would be in order to determine some cut-off for some future analysis. In general, if you want to make a continuous covariate discrete, the decision on the cut-off should be taken before seeing the data.
In your situation, I believe that the p-value is rendered meaningless, as it is a form of 'p-value hacking'. I would not trust the difference in the second figure, unless some bootstrapping procedure confirms that the cut-off point is legitimate (quite difficult to do with a small data set). | How to determine the cut-point of continuous predictor in survival analysis, optimal or median cut-p
Obviously the median is the more robust choice in this case, unless there is some clinical consideration to do otherwise. The only reasonable use for finding an "optimal cut-off" would be in order to |
49,247 | How to determine the cut-point of continuous predictor in survival analysis, optimal or median cut-point? | You can try the 'maxstat' R package; it is also available through the 'survminer' package, and it seems to be the best tool for finding a cut-point, which it does using maximally selected rank statistics.
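For reference, a hedged sketch of that interface; the function and argument names below assume recent versions of survminer, and 'mydata', 'time', 'event', and 'biomarker' are placeholder names rather than anything from the question.
library(survival)
library(survminer)
cut <- surv_cutpoint(mydata, time = "time", event = "event", variables = "biomarker")
summary(cut)                          # optimal cut-point by maximally selected rank statistics
mydata_cat <- surv_categorize(cut)    # biomarker recoded as "low"/"high"
fit <- survfit(Surv(time, event) ~ biomarker, data = mydata_cat)
ggsurvplot(fit, data = mydata_cat, pval = TRUE)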
Good luck,
-Altaf Khan | How to determine the cut-point of continuous predictor in survival analysis, optimal or median cut-p | Want to improve this post? Provide detailed answers to this question, including citations and an explanation of why your answer is correct. Answers without enough detail may be edited or deleted.
| How to determine the cut-point of continuous predictor in survival analysis, optimal or median cut-point?
Want to improve this post? Provide detailed answers to this question, including citations and an explanation of why your answer is correct. Answers without enough detail may be edited or deleted.
You can try the 'maxstat' tool (an R package) and this tool is also available in 'survminer' another R-package, seems to be best tool to find the cut-point which is basically based on maximally ranked statistic.
Good luck,
-Altaf Khan | How to determine the cut-point of continuous predictor in survival analysis, optimal or median cut-p
Want to improve this post? Provide detailed answers to this question, including citations and an explanation of why your answer is correct. Answers without enough detail may be edited or deleted.
|
49,248 | Incorporating auto-correlation structure into a negative binomial generalized additive mixed model using mgcv in R | This is reasonable, in the same sense that working correlation matrices are included in GLMs to give the Generalized Estimating Equations (GEE) model.
With gamm(), what you are getting is gamm() -> glmmPQL() -> lme() so you are really fitting a specially weighted linear mixed model. I won't ask how you found that $\theta = 0.82$...
I think your model specification is wrong for the correlation structure. If you want Day within Year, then the formula to use is form = ~ Day | Year. At least this way is explicit about the nesting.
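A sketch of what such a call might look like, not taken from the question; the response, smooth term, and data frame names are placeholders, and negbin(theta = 0.82) simply reuses the theta value mentioned above.
library(mgcv)
library(nlme)   # provides corAR1()
m <- gamm(count ~ s(Day),
          family = negbin(theta = 0.82),
          correlation = corAR1(form = ~ Day | Year),
          data = dat)
summary(m$gam)   # the smooth terms
summary(m$lme)   # the underlying lme fit, including the estimated AR(1) parameter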
I will add that this additive model plus correlation structure model isn't going to work in some cases, especially where the wiggliness of the smooth is of similar magnitude to the auto-correlation. Splines induce a correlation in the observations; this is most easily seen from the splines as random effects representation of the spline model. A random intercept induces a correlation between observations of the same group; similar things happen for splines.
If you have a smooth spline and a wiggly correlation (high autocorrelation) that is fine, and vice versa, but you can get into identifiability issues when trying to fit a splines and correlation structures in residuals (as here) as both terms increase in "complexity" (wiggly spline, high correlation). | Incorporating auto-correlation structure into a negative binomial generalized additive mixed model u | This is reasonable, in the same sense that working correlation matrices are included in GLMs to give the General Estimating Equations (GEE) model.
With gamm(), what you are getting is gamm() -> glmmPQ | Incorporating auto-correlation structure into a negative binomial generalized additive mixed model using mgcv in R
This is reasonable, in the same sense that working correlation matrices are included in GLMs to give the General Estimating Equations (GEE) model.
With gamm(), what you are getting is gamm() -> glmmPQL() -> lme() so you are really fitting a specially weighted linear mixed model. I won't ask how you found that $\theta = 0.82$...
I think your model specification is wrong for the correlations structure. If you want Day within Year, then the formula to use is: form = ~ Day | Year. At least, this way is explicit about the nesting.
I will add that this additive model plus correlation structure model isn't going to work in some cases, especially where the wiggliness of the smooth is of similar magnitude to the auto-correlation. Splines induce a correlation in the observations; this is most easily seen from the splines as random effects representation of the spline model. A random intercept induces a correlation between observations of the same group; similar things happen for splines.
If you have a smooth spline and a wiggly correlation (high autocorrelation) that is fine, and vice versa, but you can get into identifiability issues when trying to fit a splines and correlation structures in residuals (as here) as both terms increase in "complexity" (wiggly spline, high correlation). | Incorporating auto-correlation structure into a negative binomial generalized additive mixed model u
This is reasonable, in the same sense that working correlation matrices are included in GLMs to give the General Estimating Equations (GEE) model.
With gamm(), what you are getting is gamm() -> glmmPQ |
49,249 | Is a Markov chain with a limiting distribution a stationary process? | Here's a simple example illustrating why the answer is no.
Let $$P = \begin{pmatrix} 0.5 & 0.5 \\ 0.5 & 0.5 \end{pmatrix}$$ be the transition matrix for a first-order Markov process $X_t$ with state space $\left\{0, 1\right\}$. The limiting distribution is $\pi = \left(0.5, 0.5\right)$. However, suppose you start the process at time zero with initial distribution $\mu = \left(1, 0\right)$, i.e. $X_0 = 0$ with probability one.
We then have $\mathbf{E}[X_0] = 0 \neq \mathbf{E}[X_1] = \frac{1}{2}$, meaning the moments of $X_t$ depend on $t$, which violates the definition of stationarity.
Here's some R code illustrating a similar example with
$$P = \begin{pmatrix} 0.98 & 0.02 \\ 0.02 & 0.98 \end{pmatrix}$$
p_stay <- 0.98
P <- matrix(1 - p_stay, 2, 2)
diag(P) <- p_stay
stopifnot(all(rowSums(P) == rep(1, nrow(P))))
mu <- c(1, 0)
pi <- matrix(0, 100, 2)
pi[1, ] <- mu
for(time in seq(2, nrow(pi))) {
pi[time, ] <- pi[time - 1, ] %*% P
}
plot(seq(1, nrow(pi)), pi[, 1], type="l", xlab="time", ylab="Pr[X_t = 0]")
abline(h=0.5, lty=2)
The fact that $X_t$ is converging in distribution to some limit does not mean the process is stationary. | Is a Markov chain with a limiting distribution a stationary process? | Here's a simple example illustrating why the answer is no.
Let $$P = \begin{pmatrix} 0.5 & 0.5 \\ 0.5 & 0.5 \end{pmatrix}$$ be the transition matrix for a first-order Markov process $X_t$ with state s | Is a Markov chain with a limiting distribution a stationary process?
Here's a simple example illustrating why the answer is no.
Let $$P = \begin{pmatrix} 0.5 & 0.5 \\ 0.5 & 0.5 \end{pmatrix}$$ be the transition matrix for a first-order Markov process $X_t$ with state space $\left\{0, 1\right\}$. The limiting distribution is $\pi = \left(0.5, 0.5\right)$. However, suppose you start the process at time zero with initial distribution $\mu = \left(1, 0\right)$, i.e. $X_0 = 0$ with probability one.
We then have $\mathbf{E}[X_0] = 0 \neq \mathbf{E}[X_1] = \frac{1}{2}$, meaning the moments of $X_t$ depend on $t$, which violates the definition of stationarity.
Here's some R code illustrating a similar example with
$$P = \begin{pmatrix} 0.98 & 0.02 \\ 0.02 & 0.98 \end{pmatrix}$$
p_stay <- 0.98
P <- matrix(1 - p_stay, 2, 2)
diag(P) <- p_stay
stopifnot(all(rowSums(P) == rep(1, nrow(P))))
mu <- c(1, 0)
pi <- matrix(0, 100, 2)
pi[1, ] <- mu
for(time in seq(2, nrow(pi))) {
pi[time, ] <- pi[time - 1, ] %*% P
}
plot(seq(1, nrow(pi)), pi[, 1], type="l", xlab="time", ylab="Pr[X_t = 0]")
abline(h=0.5, lty=2)
The fact that $X_t$ is converging in distribution to some limit does not mean the process is stationary. | Is a Markov chain with a limiting distribution a stationary process?
Here's a simple example illustrating why the answer is no.
Let $$P = \begin{pmatrix} 0.5 & 0.5 \\ 0.5 & 0.5 \end{pmatrix}$$ be the transition matrix for a first-order Markov process $X_t$ with state s |
49,250 | Deterministic sampling from discrete distribution | One general way to generate similar sequences for distributions with similar probabilities is the following. Suppose $A$ is an ordered finite alphabet $(a,b,c,\ldots)$ with a probability distribution $p_A$. To draw a value at random from $A$, generate a vector of independent uniform variates $\mathbf{U}=(U_a, U_b, U_c, \ldots)$. If $U_a \le p_A(a)$, choose $a$. Otherwise, recursively draw a value from the remaining letters $A^\prime = A-\{a\} = (b,c,\ldots)$ using the vector $\mathbf{U}^\prime=(U_b, U_c, \ldots)$ and the probabilities $$p_{A^\prime}(b) = \frac{p_A(b)}{1-p_A(a)},\ p_{A^\prime}(c) = \frac{p_A(c)}{1-p_A(a)},$$ etc.
You can re-use the same vector $\mathbf{U}$ for any other distribution on $A$.
Using this method, the expected frequency with which the same letter would be drawn from distributions $p_A$ and $q_A$ is the frequency with which $a$ would be drawn from both distributions, equal to the smaller of $p_A(a)$ and $q_A(a)$, plus the expected frequency with which the same letter would be drawn from $(b,c,\ldots)$, conditional on $a$ not being drawn from either distribution.
This method is the best you can do by assigning each letter to its own connected interval. With additional work it's possible to make the two sequences agree even more frequently, but you would have to assign the extra letter "k" to a complicated subset of $(0,1]$.
Here is R code to generate n symbols from an alphabet with probability vector prob.
s <- function(n, prob) {
k <- length(prob)
q <- prob / rev(cumsum(rev(prob)))
u <- matrix(runif(n*k), nrow=k, byrow=TRUE) < q
apply(u, 2, function(x) match(TRUE, x))
}
Let's generate samples of size 10,000 from distributions like those in the question. The output shows the first 60 draws from each, using the same starting seed. They are remarkably similar.
prob <- c(75, 77, 83, 90, 96, 103, 109, 116, 122, 129, rep(0, 16))
prob.k <- prob
prob.k[11] <- 0.119/(1-0.119)*sum(prob)
seed <- 17
N <- 1e4
set.seed(seed); x <- letters[s(N, prob)]
set.seed(seed); x.k <- letters[s(N, prob.k)]
rbind(First=paste0(head(x, 60), collapse=""),
Second=paste0(head(x.k, 60), collapse=""))
Here it is:
First "geifgefhiifafbhfhijgfcdiebjfjegajgggidghchhjjfgdjheicbhbjica"
Second "geifgefhiikafbhgiikifcdiibjfkjkakiggkdghchijjfgdkhhkkbhbkkcj"
You may check that the actual frequencies are close to the intended ones:
rbind(First=c(table(x), k=0), Second=table(x.k))
This output is
a b c d e f g h i j k
First 755 775 808 842 995 1068 1111 1184 1206 1256 0
Second 657 666 693 739 872 955 973 1056 1074 1144 1171
The degree of similarity (that is, proportion of time the two sequences are expected to agree) is readily computed recursively.
similarity <- function(x, y) {
if (min(length(x), length(y)) == 0) return (0)
a <- min(x[1], y[1])
x.s <- sum(x[-1])
y.s <- sum(y[-1])
if (x.s > 0 & y.s > 0) {
b <- max(x[1], y[1])
x <- x[-1]/sum(x[-1])
y <- y[-1]/sum(y[-1])
b <- (1-b) * similarity(x, y)
} else {
b <- 0
}
return (a + b)
}
similarity(prob/sum(prob), prob.k/sum(prob.k))
The output is
0.7568941
In fact, in this simulation the observed frequency was close to that:
mean(x.k == x)
[1] 0.754 | Deterministic sampling from discrete distribution | One general way to generate similar sequences for distributions with similar probabilities is the following. Suppose $A$ is an ordered finite alphabet $(a,b,c,\ldots)$ with a probability distribution | Deterministic sampling from discrete distribution
One general way to generate similar sequences for distributions with similar probabilities is the following. Suppose $A$ is an ordered finite alphabet $(a,b,c,\ldots)$ with a probability distribution $p_A$. To draw a value at random from $A$, generate a vector of independent uniform variates $\mathbf{U}=(U_a, U_b, U_c, \ldots)$. If $U_a \le p_A(a)$, choose $a$. Otherwise, recursively draw a value from the remaining letters $A^\prime = A-\{a\} = (b,c,\ldots)$ using the vector $\mathbf{U}^\prime=(U_b, U_c, \ldots)$ and the probabilities $$p_{A^\prime}(b) = \frac{p_A(b)}{1-p_A(a)},\ p_{A^\prime}(c) = \frac{p_A(c)}{1-p_A(a)},$$ etc.
You can re-use the same vector $\mathbf{U}$ for any other distribution on $A$.
Using this method, the expected frequency with which the same letter would be drawn from distributions $p_A$ and $q_A$ is the frequency with which $a$ would be drawn from both distributions, equal to the smaller of $p_A(a)$ and $q_A(a)$, plus the expected frequency with which the same letter would be drawn from $(b,c,\ldots)$, conditional on $a$ not being drawn from either distribution.
This method is the best you can do by assigning each letter to its own connected interval. With additional work it's possible to make the two sequences agree even more frequently, but you would have to assign the extra letter "k" to a complicated subset of $(0,1]$.
Here is R code to generate n symbols from an alphabet with probability vector prob.
s <- function(n, prob) {
k <- length(prob)
q <- prob / rev(cumsum(rev(prob)))
u <- matrix(runif(n*k), nrow=k, byrow=TRUE) < q
apply(u, 2, function(x) match(TRUE, x))
}
Let's generate samples of size 10,000 from distributions like those in the question. The output shows the first 60 draws from each, using the same starting seed. They are remarkably similar.
prob <- c(75, 77, 83, 90, 96, 103, 109, 116, 122, 129, rep(0, 16))
prob.k <- prob
prob.k[11] <- 0.119/(1-0.119)*sum(prob)
seed <- 17
N <- 1e4
set.seed(seed); x <- letters[s(N, prob)]
set.seed(seed); x.k <- letters[s(N, prob.k)]
rbind(First=paste0(head(x, 60), collapse=""),
Second=paste0(head(x.k, 60), collapse=""))
Here it is:
First "geifgefhiifafbhfhijgfcdiebjfjegajgggidghchhjjfgdjheicbhbjica"
Second "geifgefhiikafbhgiikifcdiibjfkjkakiggkdghchijjfgdkhhkkbhbkkcj"
You may check that the actual frequencies are close to the intended ones:
rbind(First=c(table(x), k=0), Second=table(x.k))
This output is
a b c d e f g h i j k
First 755 775 808 842 995 1068 1111 1184 1206 1256 0
Second 657 666 693 739 872 955 973 1056 1074 1144 1171
The degree of similarity (that is, proportion of time the two sequences are expected to agree) is readily computed recursively.
similarity <- function(x, y) {
if (min(length(x), length(y)) == 0) return (0)
a <- min(x[1], y[1])
x.s <- sum(x[-1])
y.s <- sum(y[-1])
if (x.s > 0 & y.s > 0) {
b <- max(x[1], y[1])
x <- x[-1]/sum(x[-1])
y <- y[-1]/sum(y[-1])
b <- (1-b) * similarity(x, y)
} else {
b <- 0
}
return (a + b)
}
similarity(prob/sum(prob), prob.k/sum(prob.k))
The output is
0.7568941
In fact, in this simulation the observed frequency was close to that:
mean(x.k == x)
[1] 0.754 | Deterministic sampling from discrete distribution
One general way to generate similar sequences for distributions with similar probabilities is the following. Suppose $A$ is an ordered finite alphabet $(a,b,c,\ldots)$ with a probability distribution |
49,251 | stacking and blending of regression models | The simplest way to stack your predictions is to take the average. Linear regression is certainly an alternative. Here is a link to a video in which Phil Brierley describes using regularized regression instead of linear regression to combine model predictions. You could also look at accounts by other Kaggle winners. For example, the winner of the Bulldozer Price Prediction contest combined models "using a neural network".
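To make the two simplest blends concrete, here is a minimal sketch (assuming NumPy and scikit-learn; the simulated out-of-fold predictions are purely illustrative) of averaging versus fitting a linear meta-model:
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
y = rng.normal(size=200)                              # targets
# stand-ins for out-of-fold predictions from three base models
oof_pred = np.column_stack([y + rng.normal(scale=s, size=200) for s in (0.5, 1.0, 2.0)])

blend_avg = oof_pred.mean(axis=1)                     # simple average blend
meta = LinearRegression().fit(oof_pred, y)            # linear meta-model; weights need not sum to 1
blend_lin = meta.predict(oof_pred)
print(meta.coef_, meta.coef_.sum())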
I think the best thing to do is to look at things like Kaggle contest writeups and see what people did (or look at their code.)
As for your question, if you are only interested in prediction (which you probably are if you are stacking) then it doesn't matter if the mixture weights don't add up to 1. | stacking and blending of regression models | The simplest way to stack your predictions is to take the average. Linear regression is certainly an alternative. Here is a link to a video in which Phil Brierley describes using regularized regressio | stacking and blending of regression models
The simplest way to stack your predictions is to take the average. Linear regression is certainly an alternative. Here is a link to a video in which Phil Brierley describes using regularized regression instead of linear regression to combine model predictions. You could also look at accounts by other Kaggle winners. For example, the winner of the Bulldozer Price Prediction contest combined models "using a neural network".
I think the best thing to do is to look at things like Kaggle contest writeups and see what people did (or look at their code.)
As for your question, if you are only interested in prediction (which you probably are if you are stacking) then it doesn't matter if the mixture weights don't add up to 1. | stacking and blending of regression models
The simplest way to stack your predictions is to take the average. Linear regression is certainly an alternative. Here is a link to a video in which Phil Brierley describes using regularized regressio |
49,252 | Calculating p-values for two tail test for population variance | What you are dealing with in this question is a two-sided variance test, which is a specific case of a two-sided test with an asymmetric null distribution. The p-value is the total area under the null density for all values in the lower and upper tails of that density that are at least as "extreme" (i.e., at least as conducive to the alternative hypothesis) as the observed test statistic. Because this test has an asymmetric null distribution, we need to specify exactly what we mean by "extreme".
Lowest-density p-value calculation: The most sensible method of two-sided hypothesis testing is to interpret "more extreme" as meaning a lower value of the null density. This is the interpretation used in a standard likelihood-ratio (LR) test. Under this method, the p-value is the probability of falling in the "lowest density region", where the density cut-off is the density at the observed test statistic. With an asymmetric null distribution, this leads you to a p-value calculated with unequal tails.
To implement this method for the two-sided variance test, let $\text{Chi-Sq}(\cdot | n-1)$ denote the chi-squared density function with $n-1$ degrees-of-freedom. For a given test statistic $\chi_\text{obs}^2$, the p-value for the two-sided test is given by:
$$p(\chi_\text{obs}^2)
= \mathbb{P} \Big( \text{Chi-Sq}( \chi^2 | n-1) \leqslant \text{Chi-Sq}( \chi_\text{obs}^2|n-1) \Big| H_0 \Big).$$
In your particular problem you have $n=18$ and $\chi_\text{obs}^2 = 15.35667$. At this observed value of the test statistic you have density $\text{Chi-Sq}( \chi_\text{obs}^2|n-1) = \text{Chi-Sq}( 15.35667|17) = 0.07188203$, and this density cut-off also occurs at the lower point $\chi^2=14.64890$. This means that your two-sided p-value can be calculated as:
$$\begin{equation} \begin{aligned}
p(\chi_\text{obs}^2)
&= \mathbb{P} \Big( \text{Chi-Sq}( \chi^2 | 17) \leqslant 0.07188203 \Big| H_0 \Big) \\[6pt]
&= \mathbb{P} \Big( \chi^2 \leqslant 14.64890 \Big| H_0 \Big) + \mathbb{P} \Big( \chi^2 \geqslant 15.35667 \Big| H_0 \Big) \\[6pt]
&= 0.3792454 + 0.5698078 \\[6pt]
&= 0.9490532.
\end{aligned} \end{equation}$$
This is a large p-value and so we would not reject the null hypothesis in this case. Hence, there is no significant evidence to falsify the null hypothesis that $\sigma^2 = 9$.
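These numbers are easy to verify numerically; here is a minimal sketch, assuming SciPy (the root-finding step locates the second point with the same null density as the observed statistic):
from scipy.optimize import brentq
from scipy.stats import chi2

df, chi_obs = 17, 15.35667
d_obs = chi2.pdf(chi_obs, df)                         # density cut-off at the observed statistic
# chi_obs lies just above the mode (df - 2 = 15), so the matching point is below it
other = brentq(lambda x: chi2.pdf(x, df) - d_obs, 1e-9, df - 2)
p = chi2.cdf(other, df) + chi2.sf(chi_obs, df)        # lower tail + upper tail
print(other, p)                                       # approximately 14.64890 and 0.9490532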
A common alternative to the above test is to use a simpler calculation that takes the smallest of the two tail areas from above and below the observed test statistic, and then doubles this value. This is often used as a "quick-and-nasty" approximation to the above method, since it is relatively simple and does not require the calculation of a density cut-off value. (This is the method you are applying in your question.) Under this method, the p-value is approximated as:
$$\hat{p}(\chi_\text{obs}^2) = 2 \cdot \min \Big( \mathbb{P}(\chi^2 \leqslant \chi_\text{obs}^2 | H_0), \mathbb{P}(\chi^2 \geqslant \chi_\text{obs}^2 | H_0) \Big).$$
In your particular case this is an odd calculation, since the observed test statistic is above the mode of the null density, but its left-tail is the smaller probability. Your calculation of this approximate p-value is correct, but it is notable that it is not a very good approximation in this case (and in general, this is not a very good way to get p-values for a two-sided test with an asymmetric null distribution).
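The doubled-tail approximation is a one-liner under the same setup (again just a sketch with SciPy):
from scipy.stats import chi2

df, chi_obs = 17, 15.35667
print(2 * min(chi2.cdf(chi_obs, df), chi2.sf(chi_obs, df)))   # twice the smaller tail area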
. | Calculating p-values for two tail test for population variance | What you are dealing with in this question is a two-sided variance test, which is a specific case of a two-sided test with an asymmetric null distribution. The p-value is the total area under the nul | Calculating p-values for two tail test for population variance
What you are dealing with in this question is a two-sided variance test, which is a specific case of a two-sided test with an asymmetric null distribution. The p-value is the total area under the null density for all values in the lower and upper tails of that density that are at least as "extreme" (i.e., at least as conducive to the alternative hypothesis) as the observed test statistic. Because this test has an asymmetric null distribution, we need to specify exactly what we mean by "extreme".
Lowest-density p-value calculation: The most sensible method of two-sided hypothesis testing is to interpret "more extreme" as meaning a lower value of the null density. This is the interpretation used in a standard likelihood-ratio (LR) test. Under this method, the p-value is the probability of falling in the "lowest density region", where the density cut-off is the density at the observed test statistic. With an asymmetric null distribution, this leads you to a p-value calculated with unequal tails.
To implement this method for the two-sided variance test, let $\text{Chi-Sq}(\cdot | n-1)$ denote the chi-squared density function with $n-1$ degrees-of-freedom. For a given test statistic $\chi_\text{obs}^2$, the p-value for the two-sided test is given by:
$$p(\chi_\text{obs}^2)
= \mathbb{P} \Big( \text{Chi-Sq}( \chi^2 | n-1) \leqslant \text{Chi-Sq}( \chi_\text{obs}^2|n-1) \Big| H_0 \Big).$$
In your particular problem you have $n=18$ and $\chi_\text{obs}^2 = 15.35667$. At this observed value of the test statistic you have density $\text{Chi-Sq}( \chi_\text{obs}^2|n-1) = \text{Chi-Sq}( 15.35667|17) = 0.07188203$, and this density cut-off also occurs at the lower point $\chi^2=14.64890$. This means that your two-sided p-value can be calculated as:
$$\begin{equation} \begin{aligned}
p(\chi_\text{obs}^2)
&= \mathbb{P} \Big( \text{Chi-Sq}( \chi^2 | 17) \leqslant 0.07188203 \Big| H_0 \Big) \\[6pt]
&= \mathbb{P} \Big( \chi^2 \leqslant 14.64890 \Big| H_0 \Big) + \mathbb{P} \Big( \chi^2 \geqslant 15.35667 \Big| H_0 \Big) \\[6pt]
&= 0.3792454 + 0.5698078 \\[6pt]
&= 0.9490532.
\end{aligned} \end{equation}$$
This is a large p-value and so we would not reject the null hypothesis in this case. Hence, there is no significant evidence to falsify the null hypothesis that $\sigma^2 = 9$.
A common alternative to the above test is to use a simpler calculation that takes the smallest of the two tail areas from above and below the observed test statistic, and then doubles this value. This is often used as a "quick-and-nasty" approximation to the above method, since it is relatively simple and does not require the calculation of a density cut-off value. (This is the method you are applying in your question.) Under this method, the p-value is approximated as:
$$\hat{p}(\chi_\text{obs}^2) = 2 \cdot \min \Big( \mathbb{P}(\chi^2 \leqslant \chi_\text{obs}^2 | H_0), \mathbb{P}(\chi^2 \geqslant \chi_\text{obs}^2 | H_0) \Big).$$
In your particular case this is an odd calculation, since the observed test statistic is above the mode of the null density, but its left-tail is the smaller probability. Your calculation of this approximate p-value is correct, but it is notable that it is not a very good approximation in this case (and in general, this is not a very good way to get p-values for a two-sided test with an asymmetric null distribution).
. | Calculating p-values for two tail test for population variance
What you are dealing with in this question is a two-sided variance test, which is a specific case of a two-sided test with an asymmetric null distribution. The p-value is the total area under the nul |
49,253 | Calculating p-values for two tail test for population variance | The formula you are using for p-value calculation is incorrect.
The right way to do it would be:
p-value = probability ($\chi^2$ <= value) with (n-2) degrees of freedom (chi-square distribution)
Here the value is 15.35667 and (n-2) = 16. Now identify the corresponding p-value using any tool that is available online or a table from a standard statistical text. Based on that, you can reach your conclusion. | Calculating p-values for two tail test for population variance | The formula you are using for p-value calculation is incorrect.
The right way to do it would be:
p-value = probability ($\chi^2$ <= value) with (n-2) degrees of freedom (chi-square distribution)
here val | Calculating p-values for two tail test for population variance
The formula you are using for p-value calculation is incorrect.
The right way to do it would be:
p-value = probability ($\chi^2$ <= value) with (n-2) degrees of freedom (chi-square distribution)
Here the value is 15.35667 and (n-2) = 16. Now identify the corresponding p-value using any tool that is available online or a table from a standard statistical text. Based on that, you can reach your conclusion. | Calculating p-values for two tail test for population variance
The formula you are using for p-value calculation is incorrect.
The right way to do it would be:
p-value = probability ($\chi^2$ <= value) with (n-2) degrees of freedom (chi-square distribution)
here val |
49,254 | Cross-validation techniques for time series data | Sliding window is perhaps the most straightforward solution for time series, see e.g. Hyndman & Athanasopoulos "Forecasting Principles and Practice" Chapter 2.5 (bottom of the page) and Rob J. Hyndman's blog post "Time series cross-validation: an R example".
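A minimal sketch of the rolling-origin idea, assuming scikit-learn's TimeSeriesSplit and a deliberately naive forecaster (both chosen only for illustration):
import numpy as np
from sklearn.model_selection import TimeSeriesSplit

rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(size=120))                   # a toy series
errors = []
for train_idx, test_idx in TimeSeriesSplit(n_splits=5).split(y):
    forecast = y[train_idx].mean()                    # stand-in for a real forecasting model
    errors.append(np.mean((y[test_idx] - forecast) ** 2))
print(np.mean(errors))                                # rolling-origin estimate of the MSE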
However, Bergmeir et al. "A note on the validity of cross-validation for evaluating time series prediction" (working paper) suggest that regular leave-$K$-out cross validation may work well even in a time series context when purely autoregressive models are used. Here is the abstract:
In this work we have investigated the use of cross-validation procedures for time series prediction evaluation when purely autoregressive models are used, which is a very common use-case when using Machine Learning procedures for time series forecasting. In a theoretical proof, we showed that a normal K-fold cross-validation procedure can be used if the lag structure of the models is adequately specified. In the experiments, we showed empirically that even if the lag structure is not correct, as long as the data are fitted well by the model, cross-validation without any modification is a better choice than OOS evaluation. Only if the models are heavily misspecified, are the cross-validation procedures to be avoided as in such a case they may yield a systematic underestimation of the error.
Precise conditions for that to hold are laid out in the working paper. | Cross-validation techniques for time series data | Sliding window is perhaps the most straightforward solution for time series, see e.g. Hyndman & Athanasopoulos "Forecasting Principles and Practice" Chapter 2.5 (bottom of the page) and Rob J. Hyndman | Cross-validation techniques for time series data
Sliding window is perhaps the most straightforward solution for time series, see e.g. Hyndman & Athanasopoulos "Forecasting Principles and Practice" Chapter 2.5 (bottom of the page) and Rob J. Hyndman's blog post "Time series cross-validation: an R example".
However, Bergmeir et al. "A note on the validity of cross-validation for evaluating time series prediction" (working paper) suggest that regular leave-$K$-out cross validation may work well even in a time series context when purely autoregressive models are used. Here is the abstract:
In this work we have investigated the use of cross-validation procedures for time series prediction evaluation when purely autoregressive models are used, which is a very common use-case when using Machine Learning procedures for time series forecasting. In a theoretical proof, we showed that a normal K-fold cross-validation procedure can be used if the lag structure of the models is adequately specified. In the experiments, we showed empirically that even if the lag structure is not correct, as long as the data are fitted well by the model, cross-validation without any modification is a better choice than OOS evaluation. Only if the models are heavily misspecified, are the cross-validation procedures to be avoided as in such a case they may yield a systematic underestimation of the error.
Precise conditions for that to hold are laid out in the working paper. | Cross-validation techniques for time series data
Sliding window is perhaps the most straightforward solution for time series, see e.g. Hyndman & Athanasopoulos "Forecasting Principles and Practice" Chapter 2.5 (bottom of the page) and Rob J. Hyndman |
49,255 | What are the prerequisites to start learning Bayesian analysis? | Maturity is not really what is needed. Clarity of purpose is useful, though, and that clarity is often absent in statistics books and courses.
According to Richard Royall, there are three main types of question that are typically answered with the help of statistics, and I think that those questions are the prerequisite for learning all statistical approaches to a more than superficial level.
What do the data say?
What should I believe now that I have these data?
What should I do or decide now that I have these data?
P-values interpreted as continuous indices of evidence and likelihood functions are well suited to answering the first question. Bayesian analyses are clearly well suited to answering question 2 and Frequentist approaches are well suited to answering question 3.
It is true that Bayesian analyses are often more involved than a simple recipe-based Frequentist hypothesis test approach, but that is inevitable given that considerations of information external to the data, sometimes in the form of opinion, come into the Bayesian analysis. However, a full understanding of the philosophy of the Frequentist approach requires more maturity on the part of students than is usually communicated by the test recipes. | What are the prerequisites to start learning Bayesian analysis? | Maturity is not really what is needed. Clarity of purpose is useful, though, and that clarity is often absent in statistics books and courses.
According to Richard Royall, there are three main types o | What are the prerequisites to start learning Bayesian analysis?
Maturity is not really what is needed. Clarity of purpose is useful, though, and that clarity is often absent in statistics books and courses.
According to Richard Royall, there are three main types of question that are typically answered with the help of statistics, and I think that those questions are the prerequisite for learning all statistical approaches to a more than superficial level.
What do the data say?
What should I believe now that I have these data?
What should I do or decide now that I have these data?
P-values interpreted as continuous indices of evidence and likelihood functions are well suited to answering the first question. Bayesian analyses are clearly well suited to answering question 2 and Frequentist approaches are well suited to answering question 3.
It is true that Bayesian analyses are often more involved than a simple recipe-based Frequentist hypothesis test approach, but that is inevitable given that considerations of information external to the data, sometimes in the form of opinion, come into the Bayesian analysis. However, a full understanding of the philosophy of the Frequentist approach requires more maturity on the part of students than is usually communicated by the test recipes. | What are the prerequisites to start learning Bayesian analysis?
Maturity is not really what is needed. Clarity of purpose is useful, though, and that clarity is often absent in statistics books and courses.
According to Richard Royall, there are three main types o |
49,256 | Metrics for one-class classification | One-class classification is used when we have only "positive" labels (although some argue for using it when the quality of the data about the labels is poor) for outlier, or anomaly, detection.
With such data you cannot assess the accuracy of the predictions. Technically you can check if it properly labeled all your data as "positive", but then you would conclude that a useless model that always returns the "positive" label, regardless of the data, has a perfect fit.
To judge the performance of such a classifier you would need data with "negative" labels. One thing you could do is to simulate data with artificially introduced anomalies (this is often done, e.g. in image classification, where you add noise to the data or transform the images), or simulate data that you know should be classified as anomalous, and use such data for testing.
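A minimal sketch of that evaluation strategy, assuming scikit-learn's OneClassSVM and artificially generated anomalies (all names and settings are illustrative):
import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.metrics import precision_score, recall_score

rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 2))                   # only "positive" (normal) data for training
X_normal = rng.normal(size=(100, 2))                  # held-out normal points
X_anom = rng.uniform(-6, 6, size=(100, 2))            # artificially introduced anomalies
X_test = np.vstack([X_normal, X_anom])
y_test = np.r_[np.ones(100), -np.ones(100)]           # 1 = normal, -1 = anomaly

clf = OneClassSVM(nu=0.05, gamma="scale").fit(X_train)
y_pred = clf.predict(X_test)                          # OneClassSVM also returns 1 / -1
print(precision_score(y_test, y_pred), recall_score(y_test, y_pred))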
The story is different if you have data about "positive" and "negative" classes, since then you can use exactly the same tools for evaluating your model as for classification in general, but then, why would you use one-class classification algorithms? | Metrics for one-class classification | One-class classification is used when we have only "positive" labels (although some argue for using it when the quality of the data about the labels is poor) for outlier, or anomaly, detection. | Metrics for one-class classification
One-class classification is used when we have only "positive" labels (although some argue for using it when the quality of the data about the labels is poor) for outlier, or anomaly, detection.
With such data you cannot assess the accuracy of the predictions. Technically you can check if it properly labeled all your data as "positive", but then you would conclude that a useless model that always returns the "positive" label, regardless of the data, has a perfect fit.
To judge the performance of such a classifier you would need data with "negative" labels. One thing you could do is to simulate data with artificially introduced anomalies (this is often done, e.g. in image classification, where you add noise to the data or transform the images), or simulate data that you know should be classified as anomalous, and use such data for testing.
The story is different if you have data about "positive" and "negative" classes, since then you can use exactly the same tools for evaluating your model as for classification in general, but then, why would you use one-class classification algorithms? | Metrics for one-class classification
One-class classification is used when we have only "positive" labels (although some argue for using it when the quality of the data about the labels is poor) for outlier, or anomaly, detection. |
49,257 | Metrics for one-class classification | Though it's a late reply, I'd like to point out implicit assumptions in the previous answers that likely don't hold.
For one-class classification, we don't know the real ratio of positive and negative data. So we cannot assume that any development set has a distribution similar to the real data.
A standard setting for one-class classification is that we have a positive and an unlabeled dataset. We can't assume we have labels for "negative" data even in the development set. Also, we can't assume all the unlabelled data are "negative".
An alternative evaluation is proposed in the following paper (section 4):
Lee, Wee Sun, and Bing Liu. "Learning with positive and unlabelled examples using weighted logistic regression." ICML. Vol. 3. 2003.
They use
$$ \frac{r^2}{\Pr(Y=1)} $$
P.S.: Prof. Lee, Prof. Liu and Dr. Cheng are the people who coined one-class classification. We can take their evaluation as somewhat "official". | Metrics for one-class classification | Though it's a late reply, I'd like to point out implicit assumptions in the previous answers that likely don't hold.
for one-class classification, we don't know the real ratio of positive and negative dat | Metrics for one-class classification
Though it's a late reply, I'd like to point out implicit assumptions in the previous answers that likely don't hold.
For one-class classification, we don't know the real ratio of positive and negative data. So we cannot assume that any development set has a distribution similar to the real data.
A standard setting for one-class classification is that we have a positive and an unlabeled dataset. We can't assume we have labels for "negative" data even in the development set. Also, we can't assume all the unlabelled data are "negative".
An alternative evaluation is proposed in the following paper (section 4):
Lee, Wee Sun, and Bing Liu. "Learning with positive and unlabelled examples using weighted logistic regression." ICML. Vol. 3. 2003.
They use
$$ \frac{r^2}{\Pr(Y=1)} $$
P.S.: Prof. Lee, Prof. Liu and Dr. Cheng are the people who coined one-class classification. We can take their evaluation as somewhat "official". | Metrics for one-class classification
Though it's a late reply, I'd like to point out implicit assumptions in the previous answers that likely don't hold.
for one-class classification, we don't know the real ratio of positive and negative dat |
49,258 | Metrics for one-class classification | @user3791422 has the right answer. In addition, I would like to point out:
If you have the notion of True Positive and False Negative, it means you have a notion of the ground truth and you have predicted responses. Therefore, by definition, False Positive and True Negative should exist.
Logically, what the OP didn't consider is, if we know some examples belong in the class we know the remaining examples (rest of the world population for which we don't actually need training examples) belong outside the class. Therefore, FP and TN can be calculated when we observe them during testing. | Metrics for one-class classification | @user3791422 has the right answer. In addition, I would like to point out:
If you have the notion of True Positive and False Negative, it means you have a notion of the ground truth and you have predi | Metrics for one-class classification
@user3791422 has the right answer. In addition, I would like to point out:
If you have the notion of True Positive and False Negative, it means you have a notion of the ground truth and you have predicted responses. Therefore, by definition, False Positive and True Negative should exist.
Logically, what the OP didn't consider is, if we know some examples belong in the class we know the remaining examples (rest of the world population for which we don't actually need training examples) belong outside the class. Therefore, FP and TN can be calculated when we observe them during testing. | Metrics for one-class classification
@user3791422 has the right answer. In addition, I would like to point out:
If you have the notion of True Positive and False Negative, it means you have a notion of the ground truth and you have predi |
49,259 | Metrics for one-class classification | When you do one class classification, besides TP and FN, you should also have FP (false positive) and TN (true negative). FP are the instances that you have classified as positives when they were actually negative and TN are the instances that you have correctly classified as negatives. Then you can calculate precision and recall:
precision = TP/(TP + FP)
recall = TP/(TP + FN)
The wikipedia page https://en.wikipedia.org/wiki/Precision_and_recall explains it very well.
However, when you do one class classification, some other common metrics are the false positive rate (FPR) and the f1-score.
FPR = FP/(FP+TN)
F1-SCORE = 2TP/(2TP+FP+FN)
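The same formulas, written out as a small helper (the counts below are illustrative only):
def one_class_metrics(tp, fp, fn, tn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)                 # true positive rate
    fpr = fp / (fp + tn)                    # false positive rate
    f1 = 2 * tp / (2 * tp + fp + fn)
    return precision, recall, fpr, f1

print(one_class_metrics(tp=80, fp=10, fn=20, tn=90))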
I hope that helped.
Regards! | Metrics for one-class classification | When you do one class classification, besides TP and FN, you should also have FP (false positive) and TN (true negative). FP are the instances that you have classified as positives when they were actu | Metrics for one-class classification
When you do one class classification, besides TP and FN, you should also have FP (false positive) and TN (true negative). FP are the instances that you have classified as positives when they were actually negative and TN are the instances that you have correctly classified as negatives. Then you can calculate precision and recall:
precision = TP/(TP + FP)
recall = TP/(TP + FN)
The wikipedia page https://en.wikipedia.org/wiki/Precision_and_recall explains it very well.
However, when you do one class classification, some other common metrics are the false positive rate (FPR) and the f1-score.
FPR = FP/(FP+TN)
F1-SCORE = 2TP/(2TP+FP+FN)
I hope that helped.
Regards! | Metrics for one-class classification
When you do one class classification, besides TP and FN, you should also have FP (false positive) and TN (true negative). FP are the instances that you have classified as positives when they were actu |
49,260 | What is the connection between many highly correlated parameters in weight matrix with gradient descent converges slowly? | General problem with correlated inputs: In general, it is possible to have correlated input variables, which leads to correlated weights. Let's take an extreme example and assume you have a duplicate feature, $x_1 = x_2$ (perfect correlation), and you want a linear function that maps $X$ to $Y$, $Y = f(X)$, where $f(X) = \beta_0 + \beta_1x_1 + \beta_2x_2$.
Assume $\beta_1 = 0, \beta_2 = 1$ gives an answer that is "ok"; but so does $\beta_1 = 1, \beta_2 = 0$, and every $\beta_1 + \beta_2 = 1$ will give the same answer.
This means that changing $\beta_1$ has a significant effect on the value $\beta_2$ should take. As neural networks compute such a linear model at each node, this can happen a lot, even starting from a low correlation between the inputs.
Neural Net specifics: There is also another problem, more specific to the structure of neural networks, that will lead to this. Again, the extreme example: if two nodes in the same layer are initialized with the same values, they will have the same gradient at each iteration and learn the same transformed feature out of the last layer, leading to a duplicate.
Effect on gradient descent: This argument might be a bit loose, but gradient descent works 'best' when the direction of the gradient at each iteration points to the optimal point; that is, you could minimize each $\beta_i$ separately and get to a good answer. This is possible when the function to optimize is strictly convex. But when inputs are highly correlated, this is no longer the case. Obviously, it is not possible for neural networks since the function is not convex to begin with, but it has effects on reaching the local minimum as well.
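A toy illustration of that effect (a sketch only; the learning rate, tolerance and data are arbitrary): full-batch gradient descent on least squares takes far longer when two columns are nearly duplicates.
import numpy as np

rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)
X_corr = np.column_stack([x1, x1 + 0.01 * rng.normal(size=n)])   # nearly duplicate features
X_ind = rng.normal(size=(n, 2))                                  # uncorrelated features
beta_true = np.array([1.0, 2.0])

def gd_iterations(X, y, lr=0.1, tol=1e-6, max_iter=50000):
    beta = np.zeros(X.shape[1])
    for it in range(max_iter):
        grad = X.T @ (X @ beta - y) / len(y)          # gradient of the mean squared error / 2
        beta -= lr * grad
        if np.linalg.norm(grad) < tol:
            return it
    return max_iter                                   # did not converge within the cap

for X in (X_ind, X_corr):
    print(gd_iterations(X, X @ beta_true))            # few iterations vs. hitting the cap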
A little bit off-topic, but to help figure out what is going on in gradient descent, you can check lecture 6.2 of Geoffrey Hinton's mooc, "A Bag of tricks for gradient descent" (Here on Coursera, Here on Youtube). He is not strictly talking about correlation, but he shows the effect of scaling and shifting the inputs, which helps get an intuition about how gradient descent works. | What is the connection between many highly correlated parameters in weight matrix with gradient desc | General problem with correlated inputs: In general, it is possible to have correlated input variables, which leads to correlated weights. Let's take an extreme example and lets assume you have a dupli | What is the connection between many highly correlated parameters in weight matrix with gradient descent converges slowly?
General problem with correlated inputs: In general, it is possible to have correlated input variables, which leads to correlated weights. Let's take an extreme example and assume you have a duplicate feature, $x_1 = x_2$ (perfect correlation), and you want a linear function that maps $X$ to $Y$, $Y = f(X)$, where $f(X) = \beta_0 + \beta_1x_1 + \beta_2x_2$.
Assume $\beta_1 = 0, \beta_2 = 1$ gives an answer that is "ok"; but so does $\beta_1 = 1, \beta_2 = 0$, and every $\beta_1 + \beta_2 = 1$ will give the same answer.
This means that changing $\beta_1$ has a significant effect on the value $\beta_2$ should take. As neural networks compute such a linear model at each node, this can happen a lot, even starting from a low correlation between the inputs.
Neural Net specifics: There is also another problem, more specific to the structure of neural networks, that will lead to this. Again, the extreme example: if two nodes in the same layer are initialized with the same values, they will have the same gradient at each iteration and learn the same transformed feature out of the last layer, leading to a duplicate.
Effect on gradient descent: This argument might be a bit loose, but gradient descent works 'best' when the direction of the gradient at each iteration points to the optimal point; that is, you could minimize each $\beta_i$ separately and get to a good answer. This is possible when the function to optimize is strictly convex. But when inputs are highly correlated, this is no longer the case. Obviously, it is not possible for neural networks since the function is not convex to begin with, but it has effects on reaching the local minimum as well.
A little bit off-topic, but to help figure out what is going on in gradient descent, you can check lecture 6.2 of Geoffrey Hinton's mooc, "A Bag of tricks for gradient descent" (Here on Coursera, Here on Youtube). He is not strictly talking about correlation, but he shows the effect of scaling and shifting the inputs, which helps get an intuition about how gradient descent works. | What is the connection between many highly correlated parameters in weight matrix with gradient desc
General problem with correlated inputs: In general, it is possible to have correlated input variables, which leads to correlated weights. Let's take an extreme example and lets assume you have a dupli |
49,261 | Cross-validation vs random sampling for classification test | If you use some kind of validation (doesn't matter which) to optimize your model (e.g. by driving the feature reduction), and particularly if you compare many models and/or optimize iteratively, you absolutely need to do a validation of the resulting final model. Whether you do that by a separate validation study, nested cross validation or nested out-of-bootstrap probably won't matter that much.
The main difference between the resampling used for cross validation and that for out-of-bootstrap is that bootstrapping resamples with replacement, while cross validation resamples without replacement. In addition, cross
validation ensures that within each "run" each sample is tested exactly once.
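A small sketch of the difference in how the two schemes draw test sets (indices only, using NumPy and scikit-learn's KFold for illustration):
import numpy as np
from sklearn.model_selection import KFold

rng = np.random.default_rng(1)
idx = np.arange(10)

boot_train = rng.choice(idx, size=idx.size, replace=True)    # bootstrap: draw with replacement
oob_test = np.setdiff1d(idx, boot_train)                     # out-of-bootstrap test set
print(np.sort(boot_train), oob_test)

for train, test in KFold(n_splits=5, shuffle=True, random_state=1).split(idx):
    print(train, test)                                       # each index is tested exactly once per run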
I sometimes have questions that are more directly answered by repeated/iterated cross validation (stability of predictions), but:
We found repeated/iterated k-fold cross validation and out-of-bootstrap resampling having about the same total error based on equal numbers of surrogate models. I'm mostly working with vibrational spectra, 300 features would be quite typical for my data as well; but my features are highly correlated and I usually have far less independent cases (but maybe repeated measurements).
Here's the paper: Beleites, C.; Baumgartner, R.; Bowman, C.; Somorjai, R.; Steiner, G.; Salzer, R. & Sowa, M. G. Variance reduction in estimating classification error using sparse datasets, Chemom Intell Lab Syst, 79, 91 - 100 (2005).
Kim, J.-H. Estimating classification error rate: Repeated cross-validation, repeated hold-out and bootstrap , Computational Statistics & Data Analysis , 53, 3735 - 3745 (2009). DOI: 10.1016/j.csda.2009.04.009 reports similar findings.
I did not yet thoroughly read the Vanwinckelen paper @Lennart linked above; but at a first glance it looks very promising. Note that while it points out that people may be relying too much on cross validation, it does not compare cross validation vs. bootstrap-based techniques.
I also think there's often a deep misunderstanding about what the repetitions/iterations of k-fold cross validation can do and what they cannot. Importantly, they cannot reduce the variance that is due to the limited number of independent (different) cases tested. What they can do is allow you to measure and reduce the variance due to model instability. My understanding of the bootstrap-based resampling schemes is that they are similar in that respect.
You may want to look into how to choose a cross-validation method questions here as they typically say something about bootstrap as well. Here's a starting point: How to evaluate/select cross validation method?
Finally, a totally different thought: at the very least your data driven optimization (feature selection) should use a proper scoring rule, not accuracy. Accuracy is not "well behaved" from a statistical point of view: it has an unnecessarily high variance and on top of that it doesn't necessarily get you the best model. | Cross-validation vs random sampling for classification test | If you use some kind of validation (doesn't matter which) to optimize your model (e.g. by driving the feature reduction), and particularly if you compare many models and/or optimize iteratively, you a | Cross-validation vs random sampling for classification test
If you use some kind of validation (doesn't matter which) to optimize your model (e.g. by driving the feature reduction), and particularly if you compare many models and/or optimize iteratively, you absolutely need to do a validation of the resulting final model. Whether you do that by a separate validation study, nested cross validation or nested out-of-bootstrap probably won't matter that much.
The main difference between the resampling used for cross validation and that for out-of-bootstrap is that bootstrapping resamples with replacement, while cross validation resamples without replacement. In addition, cross
validation ensures that within each "run" each sample is tested exactly once.
I sometimes have questions that are more directly answered by repeated/iterated cross validation (stability of predictions), but:
We found repeated/iterated k-fold cross validation and out-of-bootstrap resampling having about the same total error based on equal numbers of surrogate models. I'm mostly working with vibrational spectra, 300 features would be quite typical for my data as well; but my features are highly correlated and I usually have far less independent cases (but maybe repeated measurements).
Here's the paper: Beleites, C.; Baumgartner, R.; Bowman, C.; Somorjai, R.; Steiner, G.; Salzer, R. & Sowa, M. G. Variance reduction in estimating classification error using sparse datasets, Chemom Intell Lab Syst, 79, 91 - 100 (2005).
Kim, J.-H. Estimating classification error rate: Repeated cross-validation, repeated hold-out and bootstrap , Computational Statistics & Data Analysis , 53, 3735 - 3745 (2009). DOI: 10.1016/j.csda.2009.04.009 reports similar findings.
I did not yet thoroughly read the Vanwinckelen paper @Lennart linked above; but at a first glance it looks very promising. Note that while it points out that people may be relying too much on cross validation, it does not compare cross validation vs. bootstrap-based techniques.
I also think there's often a deep misunderstanding about what the repetitions/iterations of k-fold cross validation can do and what they cannot. Importantly, they cannot reduce the variance that is due to the limited number of independent (different) cases tested. What they can do is allow you to measure and reduce the variance due to model instability. My understanding of the bootstrap-based resampling schemes is that they are similar in that respect.
You may want to look into how to choose a cross-validation method questions here as they typically say something about bootstrap as well. Here's a starting point: How to evaluate/select cross validation method?
Finally, a totally different thought: at the very least your data driven optimization (feature selection) should use a proper scoring rule, not accuracy. Accuracy is not "well behaved" from a statistical point of view: it has an unnecessarily high variance and on top of that it doesn't necessarily get you the best model. | Cross-validation vs random sampling for classification test
If you use some kind of validation (doesn't matter which) to optimize your model (e.g. by driving the feature reduction), and particularly if you compare many models and/or optimize iteratively, you a |
49,262 | Spearman's Rank-Order Correlation for higher dimensions | Yes, you could in principle extend the idea of a rank correlation to higher dimensions as long as you have a way of ordering the points. For instance, consider two vectors $x_i = (x_{i1}, x_{i2}, \ldots , x_{ip})$ and $x_j = (x_{j1}, x_{j2}, \ldots, x_{jp})$. We could start by comparing the first two coordinates and say that $x_i < x_j$ if $x_{i1} < x_{j1}$ (or vice versa), and if $x_{i1} = x_{j1}$ then go to the next coordinate and repeat.
Now suppose we have a data set $\{ (x_1, y_1), (x_2, y_2), \ldots , (x_n, y_n) \}$ where each element is a pair of vectors. It's straightforward to apply an ordering such as the one above to the set of $x$ and $y$ vectors separately and convert each point to a pair of ranks and then calculate Spearman's rank correlation.
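A minimal sketch of that recipe (my notation; with continuous coordinates the first component effectively decides the lexicographic order, ties being rare):
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
x = rng.normal(size=(50, 3))                          # vector-valued x_i
y = x + rng.normal(scale=0.5, size=(50, 3))           # related vector-valued y_i

def lex_ranks(v):
    order = np.lexsort(v.T[::-1])                     # the last key passed to lexsort is the primary key
    ranks = np.empty(len(v))
    ranks[order] = np.arange(1, len(v) + 1)
    return ranks

rho, p = spearmanr(lex_ranks(x), lex_ranks(y))
print(rho, p)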
This shows that it's possible, but it's not clear when you would actually want to do this. In practice it would depend on whether or not some ordering makes sense and is interesting given the problem. | Spearman's Rank-Order Correlation for higher dimensions | Yes, you could in principle extend the idea of a rank correlation to higher dimensions as long as you have a way of ordering the points. For instance, consider two vectors $x_i = (x_{i1}, x_{i2}, \ld | Spearman's Rank-Order Correlation for higher dimensions
Yes, you could in principle extend the idea of a rank correlation to higher dimensions as long as you have a way of ordering the points. For instance, consider two vectors $x_i = (x_{i1}, x_{i2}, \ldots , x_{ip})$ and $x_j = (x_{j1}, x_{j2}, \ldots, x_{jp})$. We could start by comparing the first two coordinates and say that $x_i < x_j$ if $x_{i1} < x_{j1}$ (or vice versa), and if $x_{i1} = x_{j1}$ then go to the next coordinate and repeat.
Now suppose we have a data set $\{ (x_1, y_1), (x_2, y_2), \ldots , (x_n, y_n) \}$ where each element is a pair of vectors. It's straightforward to apply an ordering such as the one above to the set of $x$ and $y$ vectors separately and convert each point to a pair of ranks and then calculate Spearman's rank correlation.
This shows that it's possible, but it's not clear when you would actually want to do this. In practice it would depend on whether or not some ordering makes sense and is interesting given the problem. | Spearman's Rank-Order Correlation for higher dimensions
Yes, you could in principle extend the idea of a rank correlation to higher dimensions as long as you have a way of ordering the points. For instance, consider two vectors $x_i = (x_{i1}, x_{i2}, \ld |
49,263 | Spearman's Rank-Order Correlation for higher dimensions | If you want to extend the idea of Spearman rank correlation to higher dimension and check for a comonotonic dependence between your $3$ variables, you can do the following:
Transform your data $X$, $Y$, and $Z$ with rank statistics into $ranks(X),ranks(Y),ranks(Z)$
If dependence is (perfectly) comonotonic, then the scatter plot must show a straight line; if it is imperfectly comonotonic you will observe some dispersion around the diagonal.
To quantify the dependence, you can delve into copulas and find a way to measure the difference between perfect comonotonic dependence (as expressed by the Frechet-Hoeffding upper bound copula) and the dependence you have measured.
I have seen in the literature the use of $L_1$, $L_\infty$, or optimal transport distances, applied either to the copula density or to the cumulative distribution function.
Scatter plot of the original data $X\sim \mathcal{U}[0,1], Y \sim \ln(X), Z \sim \exp(X)$
Scatter plot of the rank-transformed data (estimation of $F_X(X),F_Y(Y),F_Z(Z)$)
Example-code for producing the illustrations and doing the rank-transformation (empirical version of the probability integral transform):
import numpy as np
import scipy
from scipy import stats
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
n = 1000
X = np.random.uniform(0,1,n)
Y = np.log(X)
Z = np.exp(X)
#display the scatterplot of X,Y,Z
fig = plt.figure(1, figsize=(8, 6))
ax = Axes3D(fig, elev=-150, azim=110)
ax.scatter(X, Y, Z, c=X,
cmap=plt.cm.Paired)
ax.set_title("X,Y,Z")
ax.set_xlabel("X")
ax.w_xaxis.set_ticklabels([])
ax.set_ylabel("Y")
ax.w_yaxis.set_ticklabels([])
ax.set_zlabel("Z")
ax.w_zaxis.set_ticklabels([])
plt.show()
#rank transform
Xrk = scipy.stats.rankdata(X)/n
Yrk = scipy.stats.rankdata(Y)/n
Zrk = scipy.stats.rankdata(Z)/n
#display the scatterplot of rank transform
fig = plt.figure(1, figsize=(8, 6))
ax = Axes3D(fig, elev=-150, azim=110)
ax.scatter(Xrk, Yrk, Zrk,c=Xrk,
cmap=plt.cm.Paired)
ax.set_title("F_X(X),F_Y(Y),F_Z(Z)")
ax.set_xlabel("F_X(X)")
ax.w_xaxis.set_ticklabels([])
ax.set_ylabel("F_Y(Y)")
ax.w_yaxis.set_ticklabels([])
ax.set_zlabel("F_Z(Z)")
ax.w_zaxis.set_ticklabels([])
plt.show() | Spearman's Rank-Order Correlation for higher dimensions | If you want to extend the idea of Spearman rank correlation to higher dimension and check for a comonotonic dependence between your $3$ variables, you can do the following:
Transform your data $X$, $ | Spearman's Rank-Order Correlation for higher dimensions
If you want to extend the idea of Spearman rank correlation to higher dimension and check for a comonotonic dependence between your $3$ variables, you can do the following:
Transform your data $X$, $Y$, and $Z$ with rank statistics into $ranks(X),ranks(Y),ranks(Z)$
If dependence is (perfectly) comonotonic, then the scatter plot must show a straight line; if it is imperfectly comonotonic you will observe some dispersion around the diagonal.
To quantify the dependence, you can delve into copulas and find a way to measure the difference between perfect comonotonic dependence (as expressed by the Frechet-Hoeffding upper bound copula) and the dependence you have measured.
I have seen in the literature the use of $L_1$, $L_\infty$, or optimal transport distances, applied either to the copula density or to the cumulative distribution function.
Scatter plot of the original data $X\sim \mathcal{U}[0,1], Y \sim \ln(X), Z \sim \exp(X)$
Scatter plot of the rank-transformed data (estimation of $F_X(X),F_Y(Y),F_Z(Z)$)
Example-code for producing the illustrations and doing the rank-transformation (empirical version of the probability integral transform):
import numpy as np
import scipy
from scipy import stats
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
n = 1000
X = np.random.uniform(0,1,n)
Y = np.log(X)
Z = np.exp(X)
#display the scatterplot of X,Y,Z
fig = plt.figure(1, figsize=(8, 6))
ax = Axes3D(fig, elev=-150, azim=110)
ax.scatter(X, Y, Z, c=X,
cmap=plt.cm.Paired)
ax.set_title("X,Y,Z")
ax.set_xlabel("X")
ax.w_xaxis.set_ticklabels([])
ax.set_ylabel("Y")
ax.w_yaxis.set_ticklabels([])
ax.set_zlabel("Z")
ax.w_zaxis.set_ticklabels([])
plt.show()
#rank transform
Xrk = scipy.stats.rankdata(X)/n
Yrk = scipy.stats.rankdata(Y)/n
Zrk = scipy.stats.rankdata(Z)/n
#display the scatterplot of rank transform
fig = plt.figure(1, figsize=(8, 6))
ax = Axes3D(fig, elev=-150, azim=110)
ax.scatter(Xrk, Yrk, Zrk,c=Xrk,
cmap=plt.cm.Paired)
ax.set_title("F_X(X),F_Y(Y),F_Z(Z)")
ax.set_xlabel("F_X(X)")
ax.w_xaxis.set_ticklabels([])
ax.set_ylabel("F_Y(Y)")
ax.w_yaxis.set_ticklabels([])
ax.set_zlabel("F_Z(Z)")
ax.w_zaxis.set_ticklabels([])
plt.show() | Spearman's Rank-Order Correlation for higher dimensions
If you want to extend the idea of Spearman rank correlation to higher dimension and check for a comonotonic dependence between your $3$ variables, you can do the following:
Transform your data $X$, $ |
49,264 | Spearman's Rank-Order Correlation for higher dimensions | You might want to have a look at the following article: Taskinen, S., Randles, R., & Oja, H. (2005). Multivariate nonparametric tests of independence. Journal of the American Statistical Association, 100 (471), 916-925. It gives some (rather technical) generalisations of Spearman's rank correlation coefficient to higher dimensions in section 3. | Spearman's Rank-Order Correlation for higher dimensions | You might want to have a look at the following article: Taskinen, S., Randles, R., & Oja, H. (2005). Multivariate nonparametric tests of independence. Journal of the American Statistical Association, | Spearman's Rank-Order Correlation for higher dimensions
You might want to have a look at the following article: Taskinen, S., Randles, R., & Oja, H. (2005). Multivariate nonparametric tests of independence. Journal of the American Statistical Association, 100 (471), 916-925. It gives some (rather technical) generalisations of Spearman's rank correlation coefficient to higher dimensions in section 3. | Spearman's Rank-Order Correlation for higher dimensions
You might want to have a look at the following article: Taskinen, S., Randles, R., & Oja, H. (2005). Multivariate nonparametric tests of independence. Journal of the American Statistical Association, |
49,265 | Why can correlograms indicate non-stationarity? | The quote in your comment claims too much but does relate to something real, and that something can be useful in figuring out suitable models for data.
If you have an $I(1)$ series (a very specific kind of nonstationarity), you should see an ACF that doesn't exhibit the kind of geometric "decay" in the characteristic manner that you see with data generated by typical lowish-order stationary ARMA. [There will still be a tendency to decrease in the ACF of an $I(1)$ series, but it often tends to look more "linear" than geometric]
[You can get some sense of this by actually generating data from stationary low-order AR and ARMA models and comparing their ACFs with (say) that of a random walk. It's worth trying for a number of different models.]
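A quick way to try this (a sketch with NumPy only; the hand-rolled sample ACF avoids any package assumptions):
import numpy as np

def acf(x, nlags=10):
    x = x - x.mean()
    denom = np.sum(x ** 2)
    return np.array([np.sum(x[k:] * x[:len(x) - k]) / denom for k in range(nlags + 1)])

rng = np.random.default_rng(3)
n = 500
e = rng.normal(size=n)

ar1 = np.zeros(n)                                     # stationary AR(1) with phi = 0.6
for t in range(1, n):
    ar1[t] = 0.6 * ar1[t - 1] + e[t]

rw = np.cumsum(e)                                     # random walk, an I(1) series

print(np.round(acf(ar1), 2))                          # roughly geometric decay, about 0.6**k
print(np.round(acf(rw), 2))                           # stays high and dies off much more slowly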
You will also tend to see it with $I(2)$ (etc) series, but data that's reasonably modelled by $I(1)$ or is at least stationary after differencing tends to be more common.
So if you regard the major possibilities as either the series being I(1) or low-to-moderate order stationary ARMA, then the ACF can sometimes be of some help in distinguishing them (but you'd typically difference and look again before saying too much). | Why can correlograms indicate non-stationarity? | The quote in your comment claims too much but does relate to something real, and that something can be useful in figuring out suitable models for data.
If you have an $I(1)$ series (a very specific ki | Why can correlograms indicate non-stationarity?
The quote in your comment claims too much but does relate to something real, and that something can be useful in figuring out suitable models for data.
If you have an $I(1)$ series (a very specific kind of nonstationarity), you should see an ACF that doesn't exhibit the kind of geometric "decay" in the characteristic manner that you see with data generated by typical lowish-order stationary ARMA. [There will still be a tendency to decrease in the ACF of an $I(1)$ series, but it often tends to look more "linear" than geometric]
[You can get some sense of this by actually generating data from stationary low-order AR and ARMA models and comparing their ACFs with (say) that of a random walk. It's worth trying for a number of different models.]
You will also tend to see it with $I(2)$ (etc) series, but data that's reasonably modelled by $I(1)$ or is at least stationary after differencing tends to be more common.
So if you regard the major possibilities as either the series being I(1) or low-to-moderate order stationary ARMA, then the ACF can sometimes be of some help in distinguishing them (but you'd typically difference and look again before saying too much). | Why can correlograms indicate non-stationarity?
The quote in your comment claims too much but does relate to something real, and that something can be useful in figuring out suitable models for data.
If you have an $I(1)$ series (a very specific ki |
49,266 | Why standard normal samples multiplied by sd are samples from a normal dist with that sd | Assume that $X$ has a normal distribution with mean $\mu=0$ and variance $\sigma^2$. Then the probability density function (pdf) of the random variable $X$ is given by:
\begin{eqnarray*}
f_X(x)=\frac{1}{\sqrt{2\pi}\sigma}e^{-\frac{x^{2}}{2\sigma^{2}}}
\end{eqnarray*}
for $-\infty<x<\infty$ and $\sigma>0$.
Now, when $Z$ has a standard normal distribution, $\mu=0$ and $\sigma^2=1$, so its pdf is given by:
\begin{eqnarray*}
f_Z(z)=\frac{1}{\sqrt{2\pi}}e^{-\frac{z^{2}}{2}}
\end{eqnarray*}
for $-\infty<z<\infty$.
If we then multiply $Z$ by the standard deviation $\sigma$ and let that be equal to the function $g(Z)$, (i.e. $Y=g(Z)=\sigma Z$) we can use the formula for transforming functions of random variables (see Casella and Berger (2002), Theorem 2.1.8):
\begin{eqnarray*}
f_Y(y)=f_Z\left(g^{-1}(y)\right)\left|\frac{d}{dy}g^{-1}(y)\right|
\end{eqnarray*}
First we find $Z=g^{-1}(y)={y\over{\sigma}}$ and ${d\over{dy}}{g^{-1}(y)}={1\over{\sigma}}$.
So, substituting these terms, we have:
\begin{eqnarray*}
f_{Y}(y) & = & f_{Z}\left(g^{-1}(y)\right){\frac{d}{{dy}}{g^{-1}(y)}}\\
& = & f_{Z}\left(\frac{y}{\sigma}\right)\frac{1}{\sigma}\\
& = & \frac{1}{\sqrt{2\pi}}e^{-\frac{\left(\frac{y}{\sigma}\right)^{2}}{2}}\left(\frac{1}{\sigma}\right)\\
& = & \frac{1}{\sqrt{2\pi}}\left(\frac{1}{\sigma}\right)e^{-\frac{y^{2}}{2\sigma^{2}}}\\
& = & \frac{1}{{\sqrt{2\pi}\sigma}}{e^{-\frac{y^2}{{2\sigma^{2}}}}}
\end{eqnarray*}
This PDF is identical to the PDF of $f_X$ given at the beginning of the proof which is simply the pdf of a normal random variable with mean $\mu=0$ and variance $\sigma^2$. Hence, $Y\sim N(0, \sigma^2)$. So if you look closely back through the proof, you'll see that the squared $\sigma$ exponent term is introduced through the original squared $x$ term via composite functions, with the inner function being the inverse of the transformation. So this is how multiplying by $\sigma$ introduces $\sigma^2$ into the pdf. | Why standard normal samples multiplied by sd are samples from a normal dist with that sd | Assume that $X$ has a normal distribution with mean $\mu=0$ and variance $\sigma^2$. Then the probability density function (pdf) of the random variable $X$ is given by:
\begin{eqnarray*}
f_X(x)=\frac | Why standard normal samples multiplied by sd are samples from a normal dist with that sd
Assume that $X$ has a normal distribution with mean $\mu=0$ and variance $\sigma^2$. Then the probability density function (pdf) of the random variable $X$ is given by:
\begin{eqnarray*}
f_X(x)=\frac{1}{\sqrt{2\pi}\sigma}e^{-\frac{x^{2}}{2\sigma^{2}}}
\end{eqnarray*}
for $-\infty<x<\infty$ and $\sigma>0$.
Now, when $Z$ has a standard normal distribution, $\mu=0$ and $\sigma^2=1$, so its pdf is given by:
\begin{eqnarray*}
f_Z(z)=\frac{1}{\sqrt{2\pi}}e^{-\frac{z^{2}}{2}}
\end{eqnarray*}
for $-\infty<z<\infty$.
If we then multiply $Z$ by the standard deviation $\sigma$ and let that be equal to the function $g(Z)$, (i.e. $Y=g(Z)=\sigma Z$) we can use the formula for transforming functions of random variables (see Casella and Berger (2002), Theorem 2.1.8):
\begin{eqnarray*}
f_Y(y)=f_Z\left(g^{-1}(y)\right)\left|\frac{d}{dy}g^{-1}(y)\right|
\end{eqnarray*}
First we find $Z=g^{-1}(y)={y\over{\sigma}}$ and ${d\over{dy}}{g^{-1}(y)}={1\over{\sigma}}$.
So, substituting these terms, we have:
\begin{eqnarray*}
f_{Y}(y) & = & f_{Z}\left(g^{-1}(y)\right){\frac{d}{{dy}}{g^{-1}(y)}}\\
& = & f_{Z}\left(\frac{y}{\sigma}\right)\frac{1}{\sigma}\\
& = & \frac{1}{\sqrt{2\pi}}e^{-\frac{\left(\frac{y}{\sigma}\right)^{2}}{2}}\left(\frac{1}{\sigma}\right)\\
& = & \frac{1}{\sqrt{2\pi}}\left(\frac{1}{\sigma}\right)e^{-\frac{y^{2}}{2\sigma^{2}}}\\
& = & \frac{1}{{\sqrt{2\pi}\sigma}}{e^{-\frac{y^2}{{2\sigma^{2}}}}}
\end{eqnarray*}
This PDF is identical to the PDF of $f_X$ given at the beginning of the proof which is simply the pdf of a normal random variable with mean $\mu=0$ and variance $\sigma^2$. Hence, $Y\sim N(0, \sigma^2)$. So if you look closely back through the proof, you'll see that the squared $\sigma$ exponent term is introduced through the original squared $x$ term via composite functions, with the inner function being the inverse of the transformation. So this is how multiplying by $\sigma$ introduces $\sigma^2$ into the pdf. | Why standard normal samples multiplied by sd are samples from a normal dist with that sd
Assume that $X$ has a normal distribution with mean $\mu=0$ and variance $\sigma^2$. Then the probability density function (pdf) of the random variable $X$ is given by:
\begin{eqnarray*}
f_X(x)=\frac |
49,267 | Why standard normal samples multiplied by sd are samples from a normal dist with that sd | The normal CDF can be written as $$p=\frac{1}{2}\left[1+\text{erf}\left(\frac{x-\mu}{\sigma\sqrt{2}}\right)\right]$$
where $\text{erf}$ is the error function. For a standard normal, $\mu=0$ and $\sigma=1$. If you were to multiply your random variate $x$ by constant $a$, the only way in which you could keep the cumulative probability $p$ from changing would be to multiply the same constant by the standard deviation. $$p=\frac{1}{2}\left[1+\text{erf}\left(\frac{a(x-\mu)}{a \sigma\sqrt{2}}\right)\right]$$ | Why standard normal samples multiplied by sd are samples from a normal dist with that sd | The normal CDF can be written as $$p=\frac{1}{2}\left[1+\text{erf}\left(\frac{x-\mu}{\sigma\sqrt{2}}\right)\right]$$
where $\text{erf}$ is the error function. For a standard normal, $\mu=0$ and $\sig | Why standard normal samples multiplied by sd are samples from a normal dist with that sd
The normal CDF can be written as $$p=\frac{1}{2}\left[1+\text{erf}\left(\frac{x-\mu}{\sigma\sqrt{2}}\right)\right]$$
where $\text{erf}$ is the error function. For a standard normal, $\mu=0$ and $\sigma=1$. If you were to multiply your random variate $x$ by constant $a$, the only way in which you could keep the cumulative probability $p$ from changing would be to multiply the same constant by the standard deviation. $$p=\frac{1}{2}\left[1+\text{erf}\left(\frac{a(x-\mu)}{a \sigma\sqrt{2}}\right)\right]$$ | Why standard normal samples multiplied by sd are samples from a normal dist with that sd
The normal CDF can be written as $$p=\frac{1}{2}\left[1+\text{erf}\left(\frac{x-\mu}{\sigma\sqrt{2}}\right)\right]$$
where $\text{erf}$ is the error function. For a standard normal, $\mu=0$ and $\sig |
49,268 | Why standard normal samples multiplied by sd are samples from a normal dist with that sd | The standard normal distribution has zero mean and an sd of one. So, if we multiply the distribution by a factor, let's say 2, the sd will now be 2. The reason is that if the values are multiplied by 2, the distances from the mean are doubled as well, and hence the sd is also doubled.
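A quick numerical check of this (a sketch with NumPy; the factor 2.5 is arbitrary):
import numpy as np

rng = np.random.default_rng(4)
z = rng.standard_normal(100_000)                      # standard normal draws, sd close to 1
print(z.std(), (2 * z).std(), (2.5 * z).std())        # roughly 1, 2 and 2.5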
The other way around, if the numbers are multiplied by a factor, the same factor will be affecting the sd. | Why standard normal samples multiplied by sd are samples from a normal dist with that sd | The standard normal distribution has zero mean and one sd. So, if we multiply the distribution by a factor lets say 2, the sd will be now 2. The reason is if the distribution is multiplied by 2,the va | Why standard normal samples multiplied by sd are samples from a normal dist with that sd
The standard normal distribution has a mean of zero and an sd of one. So, if we multiply the distribution by a factor of, let's say, 2, the sd will now be 2. The reason is that if the distribution is multiplied by 2, the values are doubled, so the distances from the mean are doubled too; hence the sd is doubled as well.
The other way around, if the numbers are multiplied by a factor, the same factor will be affecting the sd. | Why standard normal samples multiplied by sd are samples from a normal dist with that sd
The standard normal distribution has zero mean and one sd. So, if we multiply the distribution by a factor lets say 2, the sd will be now 2. The reason is if the distribution is multiplied by 2,the va |
49,269 | Does an unconditional probability of 1 or 0 imply a conditional probability of 1 or 0 if the condition is possible? | I almost surely do not know what is meant by a.s. in the equation
tagged with a $*$ in your question, but the proof of the independence
stuff is straightforward.
Given any event $B$, not necessarily of positive probability, we can
express it as the disjoint union of the events $A\cap B$ and
$A^c\cap B$, that is, $B = (A\cap B) \cup (A^c\cap B)$.
Hence we have that
$$P(B) = P(A\cap B) + P(A^c\cap B).\tag{1}$$
If $P(A) = 1$ (i.e. $P(A^c) = 0$), then, since $A^c \cap B \subset A^c$,
we have $P(A^c \cap B) \leq P(A^c) = 0$, that is,
$P(A^c \cap B) = 0$. It follows from $(1)$ and the assumption that
$P(A) = 1$ that
$$P(B) = P(A\cap B) \Longrightarrow P(A)P(B) = P(A\cap B),$$
that is, $A$ and $B$ are independent events.
If $P(A) = 0$ (i.e. $P(A^c) = 1$), then, since $A\cap B \subset A$,
we have that $P(A\cap B) \leq P(A) = 0$ and so
$$0 = P(A\cap B) = P(A)P(B),$$
that is, $A$ and $B$ are independent events.
Events of probability $1$ (or of probability $0$) have the
property that they are independent of all other events
including (somewhat surprisingly) themselves! | Does an unconditional probability of 1 or 0 imply a conditional probability of 1 or 0 if the conditi | I almost surely do not know what is meant by a.s. in the equation
tagged with a $*$ in your question, but the proof of the independence
stuff is straightforward.
Given any event $B$, not necessarily o | Does an unconditional probability of 1 or 0 imply a conditional probability of 1 or 0 if the condition is possible?
I almost surely do not know what is meant by a.s. in the equation
tagged with a $*$ in your question, but the proof of the independence
stuff is straightforward.
Given any event $B$, not necessarily of positive probability, we can
express it as the disjoint union of the events $A\cap B$ and
$A^c\cap B$, that is, $B = (A\cap B) \cup (A^c\cap B)$.
Hence we have that
$$P(B) = P(A\cap B) + P(A^c\cap B).\tag{1}$$
If $P(A) = 1$ (i.e. $P(A^c) = 0$), then, since $A^c \cap B \subset A^c$,
we have $P(A^c \cap B) \leq P(A^c) = 0$, that is,
$P(A^c \cap B) = 0$. It follows from $(1)$ and the assumption that
$P(A) = 1$ that
$$P(B) = P(A\cap B) \Longrightarrow P(A)P(B) = P(A\cap B),$$
that is, $A$ and $B$ are independent events.
If $P(A) = 0$ (i.e. $P(A^c) = 1$), then, since $A\cap B \subset A$,
we have that $P(A\cap B) \leq P(A) = 0$ and so
$$0 = P(A\cap B) = P(A)P(B),$$
that is, $A$ and $B$ are independent events.
Events of probability $1$ (or of probability $0$) have the
property that they are independent of all other events
including (somewhat surprisingly) themselves! | Does an unconditional probability of 1 or 0 imply a conditional probability of 1 or 0 if the conditi
I almost surely do not know what is meant by a.s. in the equation
tagged with a $*$ in your question, but the proof of the independence
stuff is straightforward.
Given any event $B$, not necessarily o |
49,270 | Does an unconditional probability of 1 or 0 imply a conditional probability of 1 or 0 if the condition is possible? | Prove $P(A|B) = 1$ if $P(A) = 1, P(B) > 0$:
$$P(A) = 1$$
$$\to 1_A = 1_\Omega \ \text{a.s.}$$
$$\to 1_A 1_B =1_\Omega 1_B \ \text{a.s.}$$
$$\to 1_{A \cap B} =1_B \ \text{a.s.}$$
$$\to P(A \cap B) = P(B)$$
$$\to P(A|B)P(B) = P(B)$$
$$\to P(A|B) = 1 \ QED$$
The last line assumes $P(B) > 0$.
Prove $P(A|B) = 0$ if $P(A) = 0, P(B) > 0$:
$$P(A) = 0$$
$$\to 1_A = 1_{\emptyset} \ \text{a.s.}$$
$$\to 1_A 1_B = 1_{\emptyset} 1_B \ \text{a.s.}$$
$$\to 1_{A \cap B} = 1_{\emptyset} \ \text{a.s.}$$
$$\to P(A \cap B) = P(\emptyset)$$
$$\to P(A|B)P(B) = 0$$
$$\to P(A|B) = 0 \ QED$$
Note: The last line assumes $P(B) > 0$.
Prove A and B are independent if $P(A) = 0$
$$A \cap B \subseteq A$$
$$\to 0 \le P(A \cap B) \le P(A) = 0$$
Also, $P(A)P(B) = 0$. Hence, we have
$$P(A \cap B) = P(A)P(B) \ QED$$
Note: This does not seem to assume that $P(B) > 0$
Prove A and B are independent if $P(A) = 1, P(B) > 0$
$$P(A \cap B) = P(A|B)P(B) = P(B) \tag{*}$$
$$P(A)P(B) = P(B)$$
$$\to P(A \cap B) = P(A)P(B) \ QED$$
Note: $(*)$ makes use of '$P(A|B) = 1$ if $P(A) = 1, P(B) > 0$' | Does an unconditional probability of 1 or 0 imply a conditional probability of 1 or 0 if the conditi | Prove $P(A|B) = 1$ if $P(A) = 1, P(B) > 0$:
$$P(A) = 1$$
$$\to 1_A = 1_\Omega \ \text{a.s.}$$
$$\to 1_A 1_B =1_\Omega 1_B \ \text{a.s.}$$
$$\to 1_{A \cap B} =1_B \ \text{a.s.}$$
$$\to P(A \cap B) = P( | Does an unconditional probability of 1 or 0 imply a conditional probability of 1 or 0 if the condition is possible?
Prove $P(A|B) = 1$ if $P(A) = 1, P(B) > 0$:
$$P(A) = 1$$
$$\to 1_A = 1_\Omega \ \text{a.s.}$$
$$\to 1_A 1_B =1_\Omega 1_B \ \text{a.s.}$$
$$\to 1_{A \cap B} =1_B \ \text{a.s.}$$
$$\to P(A \cap B) = P(B)$$
$$\to P(A|B)P(B) = P(B)$$
$$\to P(A|B) = 1 \ QED$$
The last line assumes $P(B) > 0$.
Prove $P(A|B) = 0$ if $P(A) = 0, P(B) > 0$:
$$P(A) = 0$$
$$\to 1_A = 1_{\emptyset} \ \text{a.s.}$$
$$\to 1_A 1_B = 1_{\emptyset} 1_B \ \text{a.s.}$$
$$\to 1_{A \cap B} = 1_{\emptyset} \ \text{a.s.}$$
$$\to P(A \cap B) = P(\emptyset)$$
$$\to P(A|B)P(B) = 0$$
$$\to P(A|B) = 0 \ QED$$
Note: The last line assumes $P(B) > 0$.
Prove A and B are independent if $P(A) = 0$
$$A \cap B \subseteq A$$
$$\to 0 \le P(A \cap B) \le P(A) = 0$$
Also, $P(A)P(B) = 0$. Hence, we have
$$P(A \cap B) = P(A)P(B) \ QED$$
Note: This does not seem to assume that $P(B) > 0$
Prove A and B are independent if $P(A) = 1, P(B) > 0$
$$P(A \cap B) = P(A|B)P(B) = P(B) \tag{*}$$
$$P(A)P(B) = P(B)$$
$$\to P(A \cap B) = P(A)P(B) \ QED$$
Note: $(*)$ makes use of '$P(A|B) = 1$ if $P(A) = 1, P(B) > 0$' | Does an unconditional probability of 1 or 0 imply a conditional probability of 1 or 0 if the conditi
Prove $P(A|B) = 1$ if $P(A) = 1, P(B) > 0$:
$$P(A) = 1$$
$$\to 1_A = 1_\Omega \ \text{a.s.}$$
$$\to 1_A 1_B =1_\Omega 1_B \ \text{a.s.}$$
$$\to 1_{A \cap B} =1_B \ \text{a.s.}$$
$$\to P(A \cap B) = P( |
49,271 | Can I ignore under-dispersion in my count data? | If the data are overdispersed, you can estimate the same relative rates and calculate prediction intervals as in a poisson model using a quasipoisson model. Even if the data are not overdispersed, a quasipoisson model is valid and fairly efficient. A quasipoisson model just extends the poisson by estimating a dispersion parameter.
It is, of course, important to think about the originating nature of the data, the question you're trying to answer, and to ask about why this dispersion comes about, and how it might affect your interpretation of the results. | Can I ignore under-dispersion in my count data? | If the data are overdispersed, you can estimate the same relative rates and calculate prediction intervals as in a poisson model using a quasipoisson model. Even if the data are not overdispersed, a q | Can I ignore under-dispersion in my count data?
If the data are overdispersed, you can estimate the same relative rates and calculate prediction intervals as in a poisson model using a quasipoisson model. Even if the data are not overdispersed, a quasipoisson model is valid and fairly efficient. A quasipoisson model just extends the poisson by estimating a dispersion parameter.
It is, of course, important to think about the originating nature of the data, the question you're trying to answer, and to ask about why this dispersion comes about, and how it might affect your interpretation of the results. | Can I ignore under-dispersion in my count data?
If the data are overdispersed, you can estimate the same relative rates and calculate prediction intervals as in a poisson model using a quasipoisson model. Even if the data are not overdispersed, a q |
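As a hedged illustration of the quasipoisson route described above (the data below are simulated and the variable names are made up; only the family argument changes relative to an ordinary poisson fit):
set.seed(2)
x <- rnorm(200)
y <- rpois(200, lambda = exp(0.5 + 0.3 * x))   # hypothetical count outcome
fit_pois  <- glm(y ~ x, family = poisson)
fit_quasi <- glm(y ~ x, family = quasipoisson)
summary(fit_quasi)$dispersion                  # estimated dispersion; values below 1 suggest under-dispersion
# point estimates agree; only the standard errors are rescaled by the dispersion
cbind(poisson = coef(summary(fit_pois))[, 2], quasi = coef(summary(fit_quasi))[, 2])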
49,272 | Can I ignore under-dispersion in my count data? | A descriptively adequate treatment of under-dispersion can apparently be gotten from regression using a Conway Maxwell Poisson (COM) distribution or Consul's Generalized Poisson (GP) regression. It seems that just modeling with a Normality assumption is inefficient.
A review paper and responses for the COM model are here and an R package to implement it is here. A Stata package with references to the GP regression model is described here.
Full disclosure: I've never used these things in anger. | Can I ignore under-dispersion in my count data? | A descriptively adequate treatment of under-dispersion can apparently be gotten from regression using a Conway Maxwell Poisson (COM) distribution or Consul's Generalized Poisson (GP) regression. It s | Can I ignore under-dispersion in my count data?
A descriptively adequate treatment of under-dispersion can apparently be gotten from regression using a Conway Maxwell Poisson (COM) distribution or Consul's Generalized Poisson (GP) regression. It seems that just modeling with a Normality assumption is inefficient.
A review paper and responses for the COM model are here and an R package to implement it is here. A Stata package with references to the GP regression model is described here.
Full disclosure: I've never used these things in anger. | Can I ignore under-dispersion in my count data?
A descriptively adequate treatment of under-dispersion can apparently be gotten from regression using a Conway Maxwell Poisson (COM) distribution or Consul's Generalized Poisson (GP) regression. It s |
49,273 | Why is it true that a sampling distribution of a test statistic is easier to derive under the null? | Here is the easiest example I can think of to make the point.
Consider $X\sim N(\mu,1)$, i.e., sampling from a normal population with known variance 1. Then,
$$\sqrt{n}(\bar{X}_n-\mu)\sim N(0,1)$$
If the null is true, i.e., $\mu=\mu_0$, you have automatically also already derived the sampling distribution of the test statistic $t=\sqrt{n}(\bar{X}_n-\mu_0)$ under the null.
When $\mu\neq\mu_0$, write
\begin{align*}
t&=\sqrt{n}(\bar{X}_n-\mu_0)\\
&=\sqrt{n}(\bar{X}_n-\mu+\mu-\mu_0)
\end{align*}
This is the $N(0,1)$ random variable plus the deterministic quantity $\sqrt{n}(\mu-\mu_0)$, so $t\sim N(\sqrt{n}(\mu-\mu_0),1)$.
So:
Q1) Getting the distribution under the alternative was a little trickier even in this arguably very simple example.
Q2) I do not quite understand this question (or its difference to Q1) - the test statistic must be the same under H0 and H1 - in practice we do not know which of the two is true, so if the test statistic did depend on which is true, hypothesis testing would be impossible (a good thing, some would argue ;-) )
Q3) Asymmetry - I suppose (see the comment by whuber, though) - refers to the fact that the test statistic behaves differently depending on whether H0 or H1 is true, and this is what we want and need: If the null is false, we want the test to be able to detect that. Now, if the test statistic had the same distribution ("behavior") under H0 and H1, there would be no reason to interpret a large value of the test statistic as evidence in favor of H1. As the above example demonstrates, this is also the case here: under H1, the mean of the statistic is shifted away from zero, so that the statistic is more likely to produce large realizations. Plausibly, that effect becomes stronger the larger the sample size. | Why is it true that a sampling distribution of a test statistic is easier to derive under the null? | Here is the easiest example I can think of to make the point.
Consider $X\sim N(\mu,1)$, i.e., sampling from a normal population with known variance 1. Then,
$$\sqrt{n}(\bar{X}_n-\mu)\sim N(0,1)$$
If | Why is it true that a sampling distribution of a test statistic is easier to derive under the null?
Here is the easiest example I can think of to make the point.
Consider $X\sim N(\mu,1)$, i.e., sampling from a normal population with known variance 1. Then,
$$\sqrt{n}(\bar{X}_n-\mu)\sim N(0,1)$$
If the null is true, i.e., $\mu=\mu_0$, you have automatically also already derived the sampling distribution of the test statistic $t=\sqrt{n}(\bar{X}_n-\mu_0)$ under the null.
When $\mu\neq\mu_0$, write
\begin{align*}
t&=\sqrt{n}(\bar{X}_n-\mu_0)\\
&=\sqrt{n}(\bar{X}_n-\mu+\mu-\mu_0)
\end{align*}
This is the $N(0,1)$ random variable plus the deterministic quantity $\sqrt{n}(\mu-\mu_0)$, so $t\sim N(\sqrt{n}(\mu-\mu_0),1)$.
So:
Q1) Getting the distribution under the alternative was a little trickier even in this arguably very simple example.
Q2) I do not quite understand this question (or its difference to Q1) - the test statistic must be the same under H0 and H1 - in practice we do not know which of the two is true, so if the test statistic did depend on which is true, hypothesis testing would be impossible (a good thing, some would argue ;-) )
Q3) Asymmetry - I suppose (see the comment by whuber, though) - refers to the fact that the test statistic behaves differently depending on whether H0 or H1 is true, and this is what we want and need: If the null is false, we want the test to be able to detect that. Now, if the test statistic had the same distribution ("behavior") under H0 and H1, there would be no reason to interpret a large value of the test statistic as evidence in favor of H1. As the above example demonstrates, this is also the case here: under H1, the mean of the statistic is shifted away from zero, so that the statistic is more likely to produce large realizations. Plausibly, that effect becomes stronger the larger the sample size. | Why is it true that a sampling distribution of a test statistic is easier to derive under the null?
Here is the easiest example I can think of to make the point.
Consider $X\sim N(\mu,1)$, i.e., sampling from a normal population with known variance 1. Then,
$$\sqrt{n}(\bar{X}_n-\mu)\sim N(0,1)$$
If |
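To make the asymmetry in the example above concrete, here is a small R simulation; the sample size n = 25 and the alternative mean 0.4 are arbitrary illustration choices:
set.seed(3)
n <- 25; mu0 <- 0; mu1 <- 0.4; nrep <- 1e4
t_h0 <- replicate(nrep, sqrt(n) * (mean(rnorm(n, mu0, 1)) - mu0))  # statistic under H0
t_h1 <- replicate(nrep, sqrt(n) * (mean(rnorm(n, mu1, 1)) - mu0))  # statistic under H1
c(mean(t_h0), sd(t_h0))          # approximately 0 and 1
c(mean(t_h1), sd(t_h1))          # approximately sqrt(n) * (mu1 - mu0) = 2 and 1
mean(abs(t_h1) > qnorm(0.975))   # power of the two-sided 5% test under this alternative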
49,274 | Generate tail of distribution by a given sample in R | I'm assuming that the values are truncated below the threshold, $t$ rather than censored below $t$ (that is, you don't know how many there are below the threshold).
Let the number of points observed above the truncation point be $n_o$.
A simple approach could go as follows:
Estimate $p$, the proportion of the distribution below the threshold from the fitted parameters and the truncation point (this ignores the uncertainty in the parameter estimates; however if your sample is large that may not be a terrible approximation)
Simulate $N_u$, the number of missing values from the negative binomial $NB(n_o,p)$, obtaining the specific value $n_u$.
Simulate $n_u$ values from the truncated Gumbel on (0,$t$); for example you might consider some form of accept-reject. | Generate tail of distribution by a given sample in R | I'm assuming that the values are truncated below the threshold, $t$ rather than censored below $t$ (that is, you don't know how many there are below the threshold).
Let the number of points observed | Generate tail of distribution by a given sample in R
I'm assuming that the values are truncated below the threshold, $t$ rather than censored below $t$ (that is, you don't know how many there are below the threshold).
Let the number of points observed above the truncation point be $n_o$.
A simple approach could go as follows:
Estimate $p$, the proportion of the distribution below the threshold from the fitted parameters and the truncation point (this ignores the uncertainty in the parameter estimates; however if your sample is large that may not be a terrible approximation)
Simulate $N_u$, the number of missing values from the negative binomial $NB(n_o,p)$, obtaining the specific value $n_u$.
Simulate $n_u$ values from the truncated Gumbel on (0,$t$); for example you might consider some form of accept-reject. | Generate tail of distribution by a given sample in R
I'm assuming that the values are truncated below the threshold, $t$ rather than censored below $t$ (that is, you don't know how many there are below the threshold).
Let the number of points observed |
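A rough R sketch of the three simulation steps suggested in the answer above. The Gumbel location mu, scale beta, threshold thr and observed count n_o are placeholder values standing in for whatever was actually fitted, and the negative binomial parameterisation (number of below-threshold draws before n_o above-threshold draws) may need adjusting to match your convention:
set.seed(4)
mu <- 10; beta <- 2; thr <- 8; n_o <- 500               # placeholder fitted values and threshold t
pgum <- function(q) exp(-exp(-(q - mu) / beta))         # Gumbel CDF
rgum <- function(n) mu - beta * log(-log(runif(n)))     # Gumbel sampler via inverse CDF
p   <- pgum(thr)                                        # step 1: estimated mass below the threshold
n_u <- rnbinom(1, size = n_o, prob = 1 - p)             # step 2: number of missing values
# step 3: accept-reject for the Gumbel truncated to (0, thr)
below <- numeric(0)
while (length(below) < n_u) {
  cand  <- rgum(2 * n_u + 10)
  below <- c(below, cand[cand > 0 & cand < thr])
}
below <- below[seq_len(n_u)]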
49,275 | Distribution of $\sum_{i=1}^d | \mathbf{u}^H \mathbf{F} \mathbf{v}_i |^2$ if $| \mathbf{u}^H \mathbf{F} \mathbf{v}_i |^2$ is exponentially distributed | Check whether $u^HFv_i$ are independent of $u^HFv_j$. This should be easy, since these two variables being linear combinations of normal variables are normal, so checking independence is the same as checking whether covariance is zero. If the variables are independent then their squares will be independent too.
Update: We have $Eu^HFv_i=0$ for each $i$, since $EF=0$. Thus
$$cov(u^HFv_i,u^HFv_j)=Eu^HFv_iv_j^HF^Hu,$$
Now since $v_i$ come from unitary matrix $V$, we have that $v_i^Hv_j=0$. After some algebra it is possible to see that this gives us
$$EFv_iv_j^HF^H=0.$$
The key is to try to write down the element of this matrix and see that either the products in that element are zero because of the iid elements, or the element is zero because $v_i^Hv_j=0$. | Distribution of $\sum_{i=1}^d | \mathbf{u}^H \mathbf{F} \mathbf{v}_i |^2$ if $| \mathbf{u}^H \mathbf | Check whether $u^HFv_i$ are independent of $u^HFv_j$. This should be easy, since these two variables being linear combinations of normal variables are normal, so checking independence is the same as c | Distribution of $\sum_{i=1}^d | \mathbf{u}^H \mathbf{F} \mathbf{v}_i |^2$ if $| \mathbf{u}^H \mathbf{F} \mathbf{v}_i |^2$ is exponentially distributed
Check whether $u^HFv_i$ are independent of $u^HFv_j$. This should be easy, since these two variables being linear combinations of normal variables are normal, so checking independence is the same as checking whether covariance is zero. If the variables are independent then their squares will be independent too.
Update: We have $Eu^HFv_i=0$ for each $i$, since $EF=0$. Thus
$$cov(u^HFv_i,u^HFv_j)=Eu^HFv_iv_j^HF^Hu,$$
Now since $v_i$ come from unitary matrix $V$, we have that $v_i^Hv_j=0$. After some algebra it is possible to see that this gives us
$$EFv_iv_j^HF^H=0.$$
The key is to try to write down the element of this matrix and see that either the products in that element are zero because of the iid elements, or the element is zero because $v_i^Hv_j=0$. | Distribution of $\sum_{i=1}^d | \mathbf{u}^H \mathbf{F} \mathbf{v}_i |^2$ if $| \mathbf{u}^H \mathbf
Check whether $u^HFv_i$ are independent of $u^HFv_j$. This should be easy, since these two variables being linear combinations of normal variables are normal, so checking independence is the same as c |
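As a sanity check on the zero-covariance claim above, here is a hedged Monte Carlo sketch in R. It uses the normalized DFT matrix as one concrete choice of unitary matrix (any unitary would do) and i.i.d. complex Gaussian entries for $\mathbf{F}$; the dimension and replication count are arbitrary:
set.seed(5)
n <- 4; nrep <- 2e4
W  <- exp(-2i * pi * outer(0:(n - 1), 0:(n - 1)) / n) / sqrt(n)    # unitary DFT matrix
v1 <- W[, 1]; v2 <- W[, 2]
u  <- complex(real = rnorm(n), imaginary = rnorm(n))
u  <- u / sqrt(sum(Mod(u)^2))                                      # fixed unit vector
proj <- replicate(nrep, {
  Fm <- matrix(complex(real = rnorm(n^2), imaginary = rnorm(n^2)), n, n) / sqrt(2)
  c(sum(Conj(u) * (Fm %*% v1)), sum(Conj(u) * (Fm %*% v2)))        # u^H F v_1 and u^H F v_2
})
mean(proj[1, ] * Conj(proj[2, ]))    # empirical covariance; close to 0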
49,276 | Implications of point-wise convergence of the MGF - reference request | I finally found a proof I understood. I took it from: Billingsley "Probability and Measure". In order to be thorough, I reproduce the full argument here.
Thm: If $X_n$ is a sequence of random variables for which:
The MGF is defined for $t \in [-r,r]$
The MGF converges pointwise for $t \in [-r,r]$ to the MGF of $X$
then $X_n \rightarrow X$ in weak convergence, and further all moments of $X_n$ converge to the corresponding moment of $X$
Proof: First we prove that the sequence $X_n$ is tight. Since $E( \exp(-r X_n) + \exp(r X_n) )$ converges, it's bounded. From this boundedness we can prove tightness of the sequence $X_n$.
Since the sequence is tight, we can extract a subsequence $X_{n_k}$ which converges weakly to some limit $\tilde X$. By continuity, $\tilde X$ has a MGF which is equal to that of $X$. Since the MGF characterizes a random variable $\tilde X = X$ (see lemma below)
Every convergent subsequence converges to $X$ and $X_n$ is tight, so $X_n \rightarrow X$ weakly (Billingsley Thm 25.10, corollary)
Lemma: the MGF characterizes the distribution of a random variable: If $X$ and $Y$ have the same MGF for $t \in [-r,r]$, then $X$ and $Y$ have the same distribution
Proof: If the MGF is defined over $[-r,r]$, then it is analytic over $]-r,r[$. We can then extend it to the complex plane for $Re(z) \in ]-r,r[$ and this extension is unique. Denote that extension by $\psi(z)$. $\phi(t)=\psi(it)$ is the characteristic function of $X$, which uniquely determines it. | Implications of point-wise convergence of the MGF - reference request | I finally found a proof I understood. I took it from: Billingsley "Probability and Measure". In order to be thorough, I reproduce the full argument here.
Thm: If $X_n$ is a sequence of random variable | Implications of point-wise convergence of the MGF - reference request
I finally found a proof I understood. I took it from: Billingsley "Probability and Measure". In order to be thorough, I reproduce the full argument here.
Thm: If $X_n$ is a sequence of random variables for which:
The MGF is defined for $t \in [-r,r]$
The MGF converges pointwise for $t \in [-r,r]$ to the MGF of $X$
then $X_n \rightarrow X$ in weak convergence, and further all moments of $X_n$ converge to the corresponding moment of $X$
Proof: First we prove that the sequence $X_n$ is tight. Since $E( \exp(-r X_n) + \exp(r X_n) )$ converges, it's bounded. From this boundedness we can prove tightness of the sequence $X_n$.
Since the sequence is tight, we can extract a subsequence $X_{n_k}$ which converges weakly to some limit $\tilde X$. By continuity, $\tilde X$ has a MGF which is equal to that of $X$. Since the MGF characterizes a random variable $\tilde X = X$ (see lemma below)
Every convergent subsequence converges to $X$ and $X_n$ is tight, so $X_n \rightarrow X$ weakly (Billingsley Thm 25.10, corollary)
Lemma: the MGF characterizes the distribution of a random variable: If $X$ and $Y$ have the same MGF for $t \in [-r,r]$, then $X$ and $Y$ have the same distribution
Proof: If the MGF is defined over $[-r,r]$, then it is analytic over $]-r,r[$. We can then extend it to the complex plane for $Re(z) \in ]-r,r[$ and this extension is unique. Denote that extension by $\psi(z)$. $\phi(t)=\psi(it)$ is the characteristic function of $X$, which uniquely determines it. | Implications of point-wise convergence of the MGF - reference request
I finally found a proof I understood. I took it from: Billingsley "Probability and Measure". In order to be thorough, I reproduce the full argument here.
Thm: If $X_n$ is a sequence of random variable |
49,277 | How to train classifier for unbalanced class distributions? | Jain and Nag suggest a balanced training set and a representational test data set for evaluation.
The balanced training set allows for the model to familiarize itself with less frequent state of interest and helps the model to formulate general rules.
However, as @rep_ho points out you should definitely use a test set that represents the population of your data. Otherwise, you would skew your results.
Note though that relying on accuracy as a performance measure in a highly unbalanced dataset can be a misleading metric. If you have a dataset with two groups with a 90/10 split, then the model might simply 'guess' the first category all the time and nevertheless achieve a 90% accuracy. | How to train classifier for unbalanced class distributions? | Jain and Nag suggest a balanced training set and a representational test data set for evaluation.
The balanced training set allows for the model to familiarize itself with less frequent state of inter | How to train classifier for unbalanced class distributions?
Jain and Nag suggest a balanced training set and a representational test data set for evaluation.
The balanced training set allows for the model to familiarize itself with less frequent state of interest and helps the model to formulate general rules.
However, as @rep_ho points out you should definitely use a test set that represents the population of your data. Otherwise, you would skew your results.
Note though that relying on accuracy as a performance measure in a highly unbalanced dataset can be a misleading metric. If you have a dataset with two groups with a 90/10 split, then the model might simply 'guess' the first category all the time and nevertheless achieve a 90% accuracy. | How to train classifier for unbalanced class distributions?
Jain and Nag suggest a balanced training set and a representational test data set for evaluation.
The balanced training set allows for the model to familiarize itself with less frequent state of inter |
49,278 | How to train classifier for unbalanced class distributions? | For an unbalanced sample, you can use oversampling for the classes that are underrepresented, or undersampling for those that have more representation.
But oversampling and undersampling should only be done if you feel that your
sample doesn't represent the true population.
Now the question arises: how do we know whether the sample is a correct representation of the population? It depends on two factors:
1) Either you have to consult a subject-matter expert, or
2) The results of your test suggest it. For example, systolic and diastolic BP in the population would certainly lie within confined intervals, but your sample might contain people with high BP only.
You can refer to www.analyticsvidhya.com to learn how to do over- and undersampling in R. | How to train classifier for unbalanced class distributions? | For unbalanced sample , you can use oversampling for those which are underrepresented or under sampling which have more representations .
But oversampling and under sampling should only be done if y | How to train classifier for unbalanced class distributions?
For an unbalanced sample, you can use oversampling for the classes that are underrepresented, or undersampling for those that have more representation.
But oversampling and undersampling should only be done if you feel that your
sample doesn't represent the true population.
Now the question arises: how do we know whether the sample is a correct representation of the population? It depends on two factors:
1) Either you have to consult a subject-matter expert, or
2) The results of your test suggest it. For example, systolic and diastolic BP in the population would certainly lie within confined intervals, but your sample might contain people with high BP only.
You can refer to www.analyticsvidhya.com to learn how to do over- and undersampling in R. | How to train classifier for unbalanced class distributions?
For unbalanced sample , you can use oversampling for those which are underrepresented or under sampling which have more representations .
But oversampling and under sampling should only be done if y |
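To make the resampling step concrete, here is a minimal base-R sketch on a made-up data frame with a 900:100 class ratio (everything here is hypothetical):
set.seed(6)
dat <- data.frame(x = rnorm(1000), class = rep(c("A", "B"), c(900, 100)))
minority <- dat[dat$class == "B", ]
majority <- dat[dat$class == "A", ]
# oversampling: resample the minority class with replacement up to the majority size
over  <- rbind(majority, minority[sample(nrow(minority), nrow(majority), replace = TRUE), ])
# undersampling: keep only a random subset of the majority class, of minority size
under <- rbind(majority[sample(nrow(majority), nrow(minority)), ], minority)
table(over$class); table(under$class)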
49,279 | How to train classifier for unbalanced class distributions? | You definitely should not balance your test set. The test set should be an independent assessment of your model.
You might want to use different scoring metrics than accuracy, e.g. balanced accuracy (the mean of specificity and sensitivity), kappa, or the F score. Those measures depend on your possibly arbitrary decision of where to put the cutoff point. You might use the area under the ROC curve or the area under the precision/recall curve, which might be of interest to you, since precision with reasonable recall is what you care about.
Another thing you can do is to move your cutoff point, so that you predict class A not when your network's confidence in A is > 0.333, but, for example, when it is > 0.1. Another option is to use synthetic data points, as you already did. There are the SMOTE and ROSE algorithms, which might work better than your naive noise imputation. You can also put more weight on your minority class during training. | How to train classifier for unbalanced class distributions? | You definitely should not balance your test set. Test set should be independent assessment of your model.
You might should use different scorings then accuracy. E.g. Balanced accuracy (mean of specif | How to train classifier for unbalanced class distributions?
You definitely should not balance your test set. The test set should be an independent assessment of your model.
You might want to use different scoring metrics than accuracy, e.g. balanced accuracy (the mean of specificity and sensitivity), kappa, or the F score. Those measures depend on your possibly arbitrary decision of where to put the cutoff point. You might use the area under the ROC curve or the area under the precision/recall curve, which might be of interest to you, since precision with reasonable recall is what you care about.
Another thing you can do is to move your cutoff point, so that you predict class A not when your network's confidence in A is > 0.333, but, for example, when it is > 0.1. Another option is to use synthetic data points, as you already did. There are the SMOTE and ROSE algorithms, which might work better than your naive noise imputation. You can also put more weight on your minority class during training. | How to train classifier for unbalanced class distributions?
You definitely should not balance your test set. Test set should be independent assessment of your model.
You might should use different scorings then accuracy. E.g. Balanced accuracy (mean of specif |
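As an illustration of moving the cutoff and scoring with balanced accuracy, here is a short base-R sketch on hypothetical labels and predicted probabilities (nothing here comes from a real model):
set.seed(7)
truth <- rbinom(1000, 1, 0.1)                    # rare positive class
prob  <- plogis(-2 + 3 * truth + rnorm(1000))    # hypothetical predicted probabilities
metrics <- function(cut) {
  pred <- as.integer(prob > cut)
  sens <- mean(pred[truth == 1] == 1)
  spec <- mean(pred[truth == 0] == 0)
  c(cutoff = cut, sensitivity = sens, specificity = spec,
    balanced_accuracy = (sens + spec) / 2)
}
round(rbind(metrics(0.5), metrics(0.1)), 3)      # lowering the cutoff raises sensitivity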
49,280 | Properties of MaxEnt posterior distribution for a die with prescribed average | The MaxEnt algorithm heavily favors distributions as close to uniform as possible. Therefore, given the constraint that the average is $4$, it is more optimal to add more mass to $6$ rather than $4$ in order for the posterior to stay close to uniform.
Why such tendency? Perhaps it has something to do with the shortest message possible. It's simple to encode distributions close to the uniform. Why? Well, the uniform distribution is at the extreme end of this: instead of encoding $(\frac{1}{6}, \frac{1}{6}, \frac{1}{6}, \frac{1}{6}, \frac{1}{6}, \frac{1}{6})$ one gets away with encoding roughly $[6, \frac{1}{6}]$. I.e. $\frac{1}{6}$ $6$-times.
So what MaxEnt does is akin to Occam's razor/minimum message length and the like: through the space of all possible explanations (posteriors) it finds the simplest one which explains the data. And then it tells you that the simplest one is most likely: so if I were to encode each posterior as a binary string, the entropy of each string could itself be renormalised as some probability measure. The shortest string is then just the mode of such entropy-probability-measure on all strings.
My intuition called for $4$ to have more mass but perhaps it was just a fallacy of my own intuition: given the above reasoning I find it plausible that guessing $6$ is the right thing to do if I am rewarded only when I guess a toss correctly.
On the other hand, it seems that conditioning on the most probable hypothesis is short-sighted: it ignores all other gains to be made from other hypotheses which are not likely, but plausible. Therefore a very important question arises: when conditioning on the mode of the entropy-probability-measure is equivalent to taking the expectation with respect to entropy-probability-measure? I suspect that MaxEnt distribution is simply the expectation of this entropy-probability-measure, so always.
As for the loss functions which would force me to guess $4$, it is a well-known result that if my loss-function is a mean-squared error, I should guess the expectation of the posterior, which is, of course, $4$.
I am yet to connect this with MLE's... So to be continued/edited. | Properties of MaxEnt posterior distribution for a die with prescribed average | The MaxEnt algorithm heavily favors distributions as close to uniform as possible. Therefore, given the constraint that the average is $4$, it is more optimal to add more mass to $6$ rather than $4$ i | Properties of MaxEnt posterior distribution for a die with prescribed average
The MaxEnt algorithm heavily favors distributions as close to uniform as possible. Therefore, given the constraint that the average is $4$, it is more optimal to add more mass to $6$ rather than $4$ in order for the posterior to stay close to uniform.
Why such tendency? Perhaps it has something to do with the shortest message possible. It's simple to encode distributions close to the uniform. Why? Well, the uniform distribution is at the extreme end of this: instead of encoding $(\frac{1}{6}, \frac{1}{6}, \frac{1}{6}, \frac{1}{6}, \frac{1}{6}, \frac{1}{6})$ one gets away with encoding roughly $[6, \frac{1}{6}]$. I.e. $\frac{1}{6}$ $6$-times.
So what MaxEnt does is akin to Occam's razor/minimum message length and the like: through the space of all possible explanations (posteriors) it finds the simplest one which explains the data. And then it tells you that the simplest one is most likely: so if I were to encode each posterior as a binary string, the entropy of each string could itself be renormalised as some probability measure. The shortest string is then just the mode of such entropy-probability-measure on all strings.
My intuition called for $4$ to have more mass but perhaps it was just a fallacy of my own intuition: given the above reasoning I find it plausible that guessing $6$ is the right thing to do if I am rewarded only when I guess a toss correctly.
On the other hand, it seems that conditioning on the most probable hypothesis is short-sighted: it ignores all other gains to be made from other hypotheses which are not likely, but plausible. Therefore a very important question arises: when conditioning on the mode of the entropy-probability-measure is equivalent to taking the expectation with respect to entropy-probability-measure? I suspect that MaxEnt distribution is simply the expectation of this entropy-probability-measure, so always.
As for the loss functions which would force me to guess $4$, it is a well-known result that if my loss-function is a mean-squared error, I should guess the expectation of the posterior, which is, of course, $4$.
I am yet to connect this with MLE's... So to be continued/edited. | Properties of MaxEnt posterior distribution for a die with prescribed average
The MaxEnt algorithm heavily favors distributions as close to uniform as possible. Therefore, given the constraint that the average is $4$, it is more optimal to add more mass to $6$ rather than $4$ i |
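For a concrete look at the MaxEnt solution being discussed, here is a short R sketch. It uses the standard Lagrange-multiplier result that the maximum-entropy distribution under a mean constraint has the form $p_i \propto e^{\lambda i}$, and solves for $\lambda$ numerically:
faces <- 1:6
mean_under <- function(lambda) {
  p <- exp(lambda * faces); p <- p / sum(p)
  sum(faces * p)
}
lambda <- uniroot(function(l) mean_under(l) - 4, c(-5, 5))$root
p <- exp(lambda * faces); p <- p / sum(p)
round(p, 4)        # note p[6] > p[4]: the extra mass goes to 6, keeping p close to uniform
sum(faces * p)     # the mean constraint is satisfied: 4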
49,281 | Properties of MaxEnt posterior distribution for a die with prescribed average | A Bayesian inference of the posterior probabilities given prior uniform probabilities is closer to what you had in mind. Because, indeed, the likelihood of getting a 4 for a loaded die that has a large probability of 4 is high.
The "problem" is that that inference results in a non-uniform posterior distribution even for a fair die, with an average value of 3.5. That is for the very same reason: some loaded dice also result in an average value of 3.5. So even as $N \to \infty$ the posterior is different from the uniform distribution that MaxEnt would give you.
Uffink explained that with much more detail, here:
http://www.projects.science.uu.nl/igg/jos/mep2def/mep2def.pdf | Properties of MaxEnt posterior distribution for a die with prescribed average | A Bayesian inference of the posterior probabilities given prior uniform probabilities is closer to what you had in mind. Because, indeed, the likelihood for getting a 4 for a loaded dice that has a la | Properties of MaxEnt posterior distribution for a die with prescribed average
A Bayesian inference of the posterior probabilities given prior uniform probabilities is closer to what you had in mind. Because, indeed, the likelihood of getting a 4 for a loaded die that has a large probability of 4 is high.
The "problem" is that that inference results in a non-uniform posterior distribution even for a fair die, with an average value of 3.5. That is for the very same reason: some loaded dice also result in an average value of 3.5. So even as $N \to \infty$ the posterior is different from the uniform distribution that MaxEnt would give you.
Uffink explained that with much more detail, here:
http://www.projects.science.uu.nl/igg/jos/mep2def/mep2def.pdf | Properties of MaxEnt posterior distribution for a die with prescribed average
A Bayesian inference of the posterior probabilities given prior uniform probabilities is closer to what you had in mind. Because, indeed, the likelihood for getting a 4 for a loaded dice that has a la |
49,282 | Goodness of fit: power-law or discrete log-normal? | Don't confuse the statistic with the p-value.
The size of the KS-statistic was small, meaning the biggest distance between the empirical distribution and the power-law was small (i.e. a close fit). The corresponding p-value follows the statistic and is large (i.e. doesn't show a deviation large enough to be able to tell from deviations due to randomness).
Assuming they've calculated the p-value correctly, there's nothing there that indicates a deviation from the proposed model. Of course, with enough data almost any distribution will be rejected, but that doesn't necessarily indicate a poor fit* or mean it wouldn't make a suitable model for all kinds of purposes.
* (just one whose deviations from the proposed model you can tell from randomness)
That a continuous function might fit a discrete distribution well enough not to be detected isn't necessarily surprising, as long as the discreteness isn't so heavy** or there isn't so much data that the deviations between the step-function nature of the actual distribution and the continuous form of the tested distribution becomes obvious from the sample.
** e.g. where most of the probability is taken up by only a small number of values.
That said, if you'd like a discrete distribution that can look sort of lognormalish, a negative binomial is one that can sometimes look a bit like a
"discrete lognormal".
Very heavy-tailed distributions can be hard to assess from Q-Q plots because the high quantiles are extremely variable and so deviations even from a correct model can be considerable (to assess how much, simulate data from similar power-law distributions).
If you don't have zeros in your data, I'd suggest looking on the log-log scale, or if the discreteness dominates the appearance on that scale, you might consider a P-P plot (which will work even with zeroes).
Rather than just trying to guess distributions from some arbitrary list of common distributions, what should drive the choice of distribution and alternatives is theory, first and foremost. I'm not really in a position to do that for you.
If you haven't read A. Clauset, C.R. Shalizi, and M.E.J. Newman (2009), "Power-law distributions in empirical data" SIAM Review 51(4), 661-703
(arxiv here) and Shalizi's So You Think You Have a Power Law — Well Isn't That Special? (see here), I would suggest giving them both a look (probably the second one first). | Goodness of fit: power-law or discrete log-normal? | Don't confuse the statistic with the p-value.
The size of the KS-statistic was small, meaning the biggest distance between the empirical distribution and the power-law was small (i.e. a close fit). Th | Goodness of fit: power-law or discrete log-normal?
Don't confuse the statistic with the p-value.
The size of the KS-statistic was small, meaning the biggest distance between the empirical distribution and the power-law was small (i.e. a close fit). The corresponding p-value follows the statistic and is large (i.e. doesn't show a deviation large enough to be able to tell from deviations due to randomness).
Assuming they've calculated the p-value correctly, there's nothing there that indicates a deviation from the proposed model. Of course, with enough data almost any distribution will be rejected, but that doesn't necessarily indicate a poor fit* or mean it wouldn't make a suitable model for all kinds of purposes.
* (just one whose deviations from the proposed model you can tell from randomness)
That a continuous function might fit a discrete distribution well enough not to be detected isn't necessarily surprising, as long as the discreteness isn't so heavy** or there isn't so much data that the deviations between the step-function nature of the actual distribution and the continuous form of the tested distribution becomes obvious from the sample.
** e.g. where most of the probability is taken up by only a small number of values.
That said, if you'd like a discrete distribution that can look sort of lognormalish, a negative binomial is one that can sometimes look a bit like a
"discrete lognormal".
Very heavy-tailed distributions can be hard to assess from Q-Q plots because the high quantiles are extremely variable and so deviations even from a correct model can be considerable (to assess how much, simulate data from similar power-law distributions).
If you don't have zeros in your data, I'd suggest looking on the log-log scale, or if the discreteness dominates the appearance on that scale, you might consider a P-P plot (which will work even with zeroes).
Rather than just trying to guess distributions from some arbitrary list of common distributions, what should drive the choice of distribution and alternatives is theory, first and foremost. I'm not really in a position to do that for you.
If you haven't read A. Clauset, C.R. Shalizi, and M.E.J. Newman (2009), "Power-law distributions in empirical data" SIAM Review 51(4), 661-703
(arxiv here) and Shalizi's So You Think You Have a Power Law — Well Isn't That Special? (see here), I would suggest giving them both a look (probably the second one first). | Goodness of fit: power-law or discrete log-normal?
Don't confuse the statistic with the p-value.
The size of the KS-statistic was small, meaning the biggest distance between the empirical distribution and the power-law was small (i.e. a close fit). Th |
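If it helps to see what the KS distance is measuring here, below is a hedged base-R sketch that computes it by hand for a continuous power law with placeholder parameters (alpha and xmin stand in for whatever was actually estimated, and the empirical-CDF convention used is a simple approximation):
set.seed(9)
alpha <- 2.5; xmin <- 1                       # placeholder fitted values
x  <- xmin * runif(5000)^(-1 / (alpha - 1))   # simulate from the continuous power law
pl_cdf <- function(q) 1 - (q / xmin)^(-(alpha - 1))
xs  <- sort(x)
emp <- (seq_along(xs) - 0.5) / length(xs)     # empirical CDF at the sorted points
max(abs(emp - pl_cdf(xs)))                    # KS distance; small when the fit is close
# log-log survival plot, as suggested above
plot(xs, 1 - emp, log = "xy", type = "l", xlab = "x", ylab = "P(X > x)")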
49,283 | Do sampling methods (MCMC/SMC) work for combination of continuous and discrete random variables? | $P$ has a density against a (reference) measure made of the Lebesgue measure plus the counting measure on $\{0,1\}$. The latter measure gives weights of $1$ to the atoms $0$ and $1$. This means that the density at atoms like 0 and 1 is equal to the weight against the counting measure and only the counting measure! Hence it is 1/6 and 2/6 for 0 and 1, respectively, in your example. (See this other entry for additional comments on mixed measures.)
Therefore, MCMC and in particular Metropolis-Hastings apply to this case. This means that the proposal must also be continuous against the measure made of the Lebesgue measure plus the counting measure, hence the proposal can have atoms at 0 and 1. For the chain to converge it must have atoms at 0 and 1.
Here is an example of a (dumb) Metropolis-Hastings algorithm for your target
#target is N(0,1) with prob 1/2, mass at 0 with prob 1/6 and at 1 with prob 1/3
targ=function(x,isint){
if (isint){ t=(x==0)/6+(x==1)/3
}else{ t=.5*dnorm(x)}
return(t)
}
#Metropolis with random walk+U{0,1} proposal
prop=function(val,isint){
isint[2]*.25+.5*(1-isint[2])*dnorm(val[2],mean=val[1],sd=.3)
}
T=1e5
samp=rep(NaN,T)
sampint=(runif(T)<.5)
samp[1]=runif(1)
sampint[1]=FALSE
for (t in 2:T){
if (sampint[t]){
samp[t]=as.integer(sample(c(0,1),1))
}else{
samp[t]=samp[t-1]+rnorm(1,0,.3)
}
metro=targ(samp[t],sampint[t])*prop(samp[t:(t-1)],sampint[t:(t-1)])/
targ(samp[t-1],sampint[t-1])/prop(samp[(t-1):t],sampint[(t-1):t])
if (runif(1)>metro){ samp[t]=samp[t-1];sampint[t]=sampint[t-1]}
}
This gives the proper fit, as shown by the following histogram:
and the (right) proportion of zeros and ones:
> sum(sampint)/T
[1] 0.49676
> sum(samp==0)/sum(samp==1)
[1] 0.5027832 | Do sampling methods (MCMC/SMC) work for combination of continuous and discrete random variables? | $P$ has a density against a (reference) measure made of the Lebesgue measure plus the counting measure on $\{0,1\}$. The later measure gives weights of $1$ to the atoms $0$ and $1$. This means that th | Do sampling methods (MCMC/SMC) work for combination of continuous and discrete random variables?
$P$ has a density against a (reference) measure made of the Lebesgue measure plus the counting measure on $\{0,1\}$. The latter measure gives weights of $1$ to the atoms $0$ and $1$. This means that the density at atoms like 0 and 1 is equal to the weight against the counting measure and only the counting measure! Hence it is 1/6 and 2/6 for 0 and 1, respectively, in your example. (See this other entry for additional comments on mixed measures.)
Therefore, MCMC and in particular Metropolis-Hastings apply to this case. This means that the proposal must also be continuous against the measure made of the Lebesgue measure plus the counting measure, hence the proposal can have atoms at 0 and 1. For the chain to converge it must have atoms at 0 and 1.
Here is an example of a (dumb) Metropolis-Hastings algorithm for your target
#target is N(0,1) with prob 1/2, mass at 0 with prob 1/6 and at 1 with prob 1/3
targ=function(x,isint){
if (isint){ t=(x==0)/6+(x==1)/3
}else{ t=.5*dnorm(x)}
return(t)
}
#Metropolis with random walk+U{0,1} proposal
prop=function(val,isint){
isint[2]*.25+.5*(1-isint[2])*dnorm(val[2],mean=val[1],sd=.3)
}
T=1e5
samp=rep(NaN,T)
sampint=(runif(T)<.5)
samp[1]=runif(1)
sampint[1]=FALSE
for (t in 2:T){
if (sampint[t]){
samp[t]=as.integer(sample(c(0,1),1))
}else{
samp[t]=samp[t-1]+rnorm(1,0,.3)
}
metro=targ(samp[t],sampint[t])*prop(samp[t:(t-1)],sampint[t:(t-1)])/
targ(samp[t-1],sampint[t-1])/prop(samp[(t-1):t],sampint[(t-1):t])
if (runif(1)>metro){ samp[t]=samp[t-1];sampint[t]=sampint[t-1]}
}
This gives the proper fit, as shown by the following histogram:
and the (right) proportion of zeros and ones:
> sum(sampint)/T
[1] 0.49676
> sum(samp==0)/sum(samp==1)
[1] 0.5027832 | Do sampling methods (MCMC/SMC) work for combination of continuous and discrete random variables?
$P$ has a density against a (reference) measure made of the Lebesgue measure plus the counting measure on $\{0,1\}$. The later measure gives weights of $1$ to the atoms $0$ and $1$. This means that th |
49,284 | How can you resolve mixed normal distributions into their component data sets? | One approach would be to fit a two component Gaussian mixture model. This models the observed distribution as a mixture $w_1 f(\mu_1,\sigma_1)+(1-w_1)f(\mu_2,\sigma_2)$ where $f$ is the normal density.
There are a number of approaches to doing so; the E-M algorithm (by introducing latent variables - in your case indicating the relative weight to being from one of the two sexes) is one common approach. This should converge to the maximum likelihood estimate of the 5 unknown parameters above.
The book Elements of Statistical Learning, 2nd.Ed by Hastie, Tibshirani and Friedman gives an explicit algorithm (Algorithm 8.2 in the 10th printing, p277). This book is commonly available in university libraries and is also downloadable from the web-page for the book (in pdf form) here at one of the authors' academic webpages.
A number of questions on our site discuss this method.
There's a set of slides by some of the same authors here that also discuss this approach. One suitable search term on our site that turns up some of the previous posts on this topic is gaussian mixture EM.
This is a pretty standard method and lots of software is available to fit it.
For example, if you use R, the function normalmixEM2comp in package mixtools is specifically for 2-component Gaussian mixtures; this automates the process of fitting the mixture.
I created some data and fitted a mixture using it (I had never used this package before, but it's very simple and works like many other such programs):
The simulated data set of wing lengths (just under 5000 values) is in the variable "wing". Here's a histogram of the data:
After loading the package, here's how I fitted the mixture (the value 0.5 is the initial guess at the proportion in the first component, the 64,71 are initial guesses at wing length for the two components, and the 1.2,1.2 are initial guesses at standard deviation for the two components):
mixres = normalmixEM2comp(wing, 0.5, c(64,71), c(1.2,1.2))
number of iterations= 38
So let's look at the results:
summary(mixres)
summary of normalmixEM object:
comp 1 comp 2
lambda 0.554041 0.445959
mu 64.477140 70.460471
sigma 1.301563 1.841486
loglik at estimate: -12251.45
Pretty good actually, since those are really close to the values I used to generate the data to begin with.
Summarizing that information back onto the histogram:
In this case I obtained the estimated count for females by taking the proportion for component 1 times the overall count. This undercounted the females by 14, which is well within the uncertainty involved. On the other hand, the output of the function above also gives an estimated (posterior) probability of being in each component (which was returned in mixres$probability). If I allocate each of the individual birds to a sex based on which one has the higher relative probability for that winglength, the estimated count is 2800 female (an overcount of 17 ... again, within the uncertainty one might expect for the count with this fitted model).
[However, this approach will tend to lead to an overcount of the more prevalent group, as it did here.]
You should be able to do similar things with other software for fitting such mixtures. | How can you resolve mixed normal distributions into their component data sets? | One approach would be to fit a two component Gaussian mixture model. This models the observed distribution as a mixture $w_1 f(\mu_1,\sigma_1)+(1-w_1)f(\mu_2,\sigma_2)$ where $f$ is the normal density | How can you resolve mixed normal distributions into their component data sets?
One approach would be to fit a two component Gaussian mixture model. This models the observed distribution as a mixture $w_1 f(\mu_1,\sigma_1)+(1-w_1)f(\mu_2,\sigma_2)$ where $f$ is the normal density.
There are a number of approaches to doing so; the E-M algorithm (by introducing latent variables - in your case indicating the relative weight to being from one of the two sexes) is one common approach. This should converge to the maximum likelihood estimate of the 5 unknown parameters above.
The book Elements of Statistical Learning, 2nd.Ed by Hastie, Tibshirani and Friedman gives an explicit algorithm (Algorithm 8.2 in the 10th printing, p277). This book is commonly available in university libraries and is also downloadable from the web-page for the book (in pdf form) here at one of the authors' academic webpages.
A number of questions on our site discuss this method.
There's a set of slides by some of the same authors here that also discuss this approach. One suitable search term on our site that turns up some of the previous posts on this topic is gaussian mixture EM.
This is a pretty standard method and lots of software is available to fit it.
For example, if you use R, the function normalmixEM2comp in package mixtools is specifically for 2-component Gaussian mixtures; this automates the process of fitting the mixture.
I created some data and fitted a mixture using it (I had never used this package before, but it's very simple and works like many other such programs):
The simulated data set of wing lengths (just under 5000 values) is in the variable "wing". Here's a histogram of the data:
After loading the package, here's how I fitted the mixture (the value 0.5 is the initial guess at the proportion in the first component, the 64,71 are initial guesses at wing length for the two components, and the 1.2,1.2 are initial guesses at standard deviation for the two components):
mixres = normalmixEM2comp(wing, 0.5, c(64,71), c(1.2,1.2))
number of iterations= 38
So let's look at the results:
summary(mixres)
summary of normalmixEM object:
comp 1 comp 2
lambda 0.554041 0.445959
mu 64.477140 70.460471
sigma 1.301563 1.841486
loglik at estimate: -12251.45
Pretty good actually, since those are really close to the values I used to generate the data to begin with.
Summarizing that information back onto the histogram:
In this case I obtained the estimated count for females by taking the proportion for component 1 times the overall count. This undercounted the females by 14, which is well within the uncertainty involved. On the other hand, the output of the function above also gives an estimated (posterior) probability of being in each component (which was returned in mixres$probability). If I allocate each of the individual birds to a sex based on which one has the higher relative probability for that winglength, the estimated count is 2800 female (an overcount of 17 ... again, within the uncertainty one might expect for the count with this fitted model).
[However, this approach will tend to lead to an overcount of the more prevalent group, as it did here.]
You should be able to do similar things with other software for fitting such mixtures. | How can you resolve mixed normal distributions into their component data sets?
One approach would be to fit a two component Gaussian mixture model. This models the observed distribution as a mixture $w_1 f(\mu_1,\sigma_1)+(1-w_1)f(\mu_2,\sigma_2)$ where $f$ is the normal density |
49,285 | How can you resolve mixed normal distributions into their component data sets? | You need to find $\theta^\ast = (\mu_1,\sigma_1,\mu_2,\sigma_2,n_1)$ fitting your data, so you could try guessing various $\theta$ and then picking the top-scoring one. To score a particular $\theta$, you could try a Kolmogorov-Smirnov test.
I don't know how accurate you need it, but with a histogram like this ...
... my prototype code finds what it thinks is the optimal $\theta$ comfortably within 5% of all components of the true $\theta^\ast$. The neater and more obviously bimodal your histogram, the better the results will be.
Would R code be helpful? Are your observations in some convenient format like a CSV file? Would you be willing to upload the data? | How can you resolve mixed normal distributions into their component data sets? | You need to find $\theta^\ast = (\mu_1,\sigma_1,\mu_2,\sigma_2,n_1)$ fitting your data, so you could try guessing various $\theta$ and then picking the top-scoring one. To score a particular $\theta$ | How can you resolve mixed normal distributions into their component data sets?
You need to find $\theta^\ast = (\mu_1,\sigma_1,\mu_2,\sigma_2,n_1)$ fitting your data, so you could try guessing various $\theta$ and then picking the top-scoring one. To score a particular $\theta$, you could try a Kolmogorov-Smirnov test.
I don't know how accurate you need it, but with a histogram like this ...
... my prototype code finds what it thinks is the optimal $\theta$ comfortably within 5% of all components of the true $\theta^\ast$. The neater and more obviously bimodal your histogram, the better the results will be.
Would R code be helpful? Are your observations in some convenient format like a CSV file? Would you be willing to upload the data? | How can you resolve mixed normal distributions into their component data sets?
You need to find $\theta^\ast = (\mu_1,\sigma_1,\mu_2,\sigma_2,n_1)$ fitting your data, so you could try guessing various $\theta$ and then picking the top-scoring one. To score a particular $\theta$ |
49,286 | Difference between principal directions and principal component scores in the context of dimensionality reduction | Most of these things are covered in my answers in the following two threads:
Relationship between SVD and PCA. How to use SVD to perform PCA?
What exactly is called "principal component" in PCA?
Still, here I will try to answer your specific concerns.
Think about it like that. You have, let's say, $1000$ data points in $12$-dimensional space (i.e. your data matrix $X$ is of $1000\times12$ size). PCA finds directions in this space that capture maximal variance. So for example PC1 direction is a certain axis in this $12$-dimensional space, i.e. a vector of length $12$. PC2 direction is another axis, etc. These directions are given by columns of your matrix $V$. All your $1000$ data points can be projected onto each of these directions/axes, yielding coordinates of $1000$ data points along each PC direction; these projections are what is called PC scores, and what I prefer to simply call PCs. They are given by the columns of $US$.
So for each PC you have a $12$-dimensional vector specifying the PC direction or axis and a $1000$-dimensional vector specifying the PC projection on this axis.
"Reducing dimensionality" means that you take several PC projections as your new variables (e.g. if you take $6$ of them, then your new data matrix will be of $1000\times 6$ size) and essentially forget about the PC directions in the original $12$-dimensional space.
Most websites about PCA say that I should choose some principal components, but isn't it more correct to choose principal directions/axes since my objective is to reduce dimensionality?
This is equivalent. One column of $V$ corresponds to one column of $US$. You can say that you choose some columns of $V$ or you can say that you choose some columns of $US$. Doesn't matter. Also, by "principal components" some people mean columns of $V$ and some people mean columns of $US$. Again, most of the time it does not matter.
I have seen that my matrix V consists of 12 column vectors, each with 12 elements. If I choose 6 of these column vectors, each vector still has 12 elements - but how is this possible if I have reduced the dimensionality?
You chose 6 axes in the 12-dimensional space. If you only consider these 6 axes and discard the other 6, then you reduced your dimensionality from 12 to 6. But each of the 6 chosen axes is originally a vector in the 12-dimensional space. No contradiction.
Besides, there are 12 column vectors of US, representing the principal components (scores), but each column vector has an awful lot of elements. What does it mean?
As I said, these are the projections on the principal axes. If your data matrix had 1000 points, then each PC score vector will have 1000 points. Makes sense. | Difference between principal directions and principal component scores in the context of dimensional | Most of these things are covered in my answers in the following two threads:
Relationship between SVD and PCA. How to use SVD to perform PCA?
What exactly is called "principal component" in PCA?
Sti | Difference between principal directions and principal component scores in the context of dimensionality reduction
Most of these things are covered in my answers in the following two threads:
Relationship between SVD and PCA. How to use SVD to perform PCA?
What exactly is called "principal component" in PCA?
Still, here I will try to answer your specific concerns.
Think about it like that. You have, let's say, $1000$ data points in $12$-dimensional space (i.e. your data matrix $X$ is of $1000\times12$ size). PCA finds directions in this space that capture maximal variance. So for example PC1 direction is a certain axis in this $12$-dimensional space, i.e. a vector of length $12$. PC2 direction is another axis, etc. These directions are given by columns of your matrix $V$. All your $1000$ data points can be projected onto each of these directions/axes, yielding coordinates of $1000$ data points along each PC direction; these projections are what is called PC scores, and what I prefer to simply call PCs. They are given by the columns of $US$.
So for each PC you have a $12$-dimensional vector specifying the PC direction or axis and a $1000$-dimensional vector specifying the PC projection on this axis.
"Reducing dimensionality" means that you take several PC projections as your new variables (e.g. if you take $6$ of them, then your new data matrix will be of $1000\times 6$ size) and essentially forget about the PC directions in the original $12$-dimensional space.
Most websites about PCA say that I should choose some principal components, but isn't it more correct to choose principal directions/axes since my objective is to reduce dimensionality?
This is equivalent. One column of $V$ corresponds to one column of $US$. You can say that you choose some columns of $V$ or you can say that you choose some columns of $US$. Doesn't matter. Also, by "principal components" some people mean columns of $V$ and some people mean columns of $US$. Again, most of the time it does not matter.
I have seen that my matrix V consists of 12 column vectors, each with 12 elements. If I choose 6 of these column vectors, each vector still has 12 elements - but how is this possible if I have reduced the dimensionality?
You chose 6 axes in the 12-dimensional space. If you only consider these 6 axes and discard the other 6, then you reduced your dimensionality from 12 to 6. But each of the 6 chosen axes is originally a vector in the 12-dimensional space. No contradiction.
Besides, there are 12 column vectors of US, representing the principal components (scores), but each column vector has an awful lot of elements. What does it mean?
As I said, these are the projections on the principal axes. If your data matrix had 1000 points, then each PC score vector will have 1000 points. Makes sense. | Difference between principal directions and principal component scores in the context of dimensional
Most of these things are covered in my answers in the following two threads:
Relationship between SVD and PCA. How to use SVD to perform PCA?
What exactly is called "principal component" in PCA?
Sti |
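A small R illustration of the bookkeeping described above, using a simulated $1000\times 12$ matrix as a stand-in for the data (column-centering before the SVD is assumed):
set.seed(1)
X  <- matrix(rnorm(1000 * 12), nrow = 1000, ncol = 12)   # stand-in for the 1000 x 12 data
Xc <- scale(X, center = TRUE, scale = FALSE)             # centre each column
s  <- svd(Xc)
V      <- s$v                # 12 x 12: columns are the principal directions/axes
scores <- s$u %*% diag(s$d)  # 1000 x 12: columns are the PC scores (projections)
# dimensionality reduction: keep the first 6 PCs as the new variables
scores6 <- scores[, 1:6]                   # new 1000 x 6 data matrix
all.equal(scores6, Xc %*% V[, 1:6])        # same thing: project the data onto the 6 chosen axes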
49,287 | Estimation of quantile given quantiles of subset | I suppose you deal with a data vector of length $n$ where $n$ is so large that it becomes necessary to spread the computations over many machines and that this vector cannot fit inside the memory of any single one of those machines. This squares neatly with the definition of big data as defined here.
Suppose that the dataset $\{X_i\}_{i=1}^n$ is composed of independent draws from a distribution $F$ and denote $q(\alpha)$ the $\alpha$ quantile of $F$. Throughout, I will assume that $F$ is differentiable at $q(\alpha)$ and that $F^\prime(q(\alpha))>0$.
The sample estimator of $q(\alpha)$ is $\hat{q}_n(\alpha)$ and is defined as the minimizer over $t\in\mathbb{R}$ of:
$$(0)\quad h_n(t)=\sum_{i=1}^n\rho_\alpha(X_i-t).$$
where $\rho_{\alpha}(u)=|u(\alpha-\mathbb{I}_{(u<0)})|$ and
$\mathbb{I}$ is the indicator variable [0].
Assume that the dataset $\{X_i\}_{i=1}^n$ is partitioned into $k$ non-overlapping sub-samples of lengths $\{m_j\}_{j=1}^k$. I will denote $\{\hat{q}_n^{j}(\alpha)\}_{j=1}^k$ the estimators of
$\hat{q}_n(\alpha)$ obtained by solving (0) on the respective sub-samples. Then, if:
$$\min_{j=1}^k\lim_{n\to\infty} \frac{m_j}{n}>0$$ the estimator:
$$\bar{q}_n(\alpha)=\sum_{j=1}^k\frac{m_j}{n}\hat{q}_n^{j}(\alpha)$$
satisfies:
$$\sqrt{n}(\bar{q}_n(\alpha)-\hat{q}_n(\alpha))=o_p(1).$$
(You can find more details of these computations including the asymptotic variances of $\bar{q}_n(\alpha)$ in [1]).
Therefore, letting $m_j=m\;\forall j$, you can use non-overlapping sub-samples of your original data set to get a computationally convenient estimate of $\hat{q}_n(\alpha)$. Given $p$ computing units, this divide-and-conquer strategy will have cost $O(km/p)$ and storage $O(km)$ (versus $O(n)$ and $O(n)$ for computing (0) directly) at an asymptotically negligible cost in terms of precision. The finite sample costs will depend on how small $F^\prime(q(\alpha))$ is, but will typically be acceptable as soon as $m\geq 2^6$ to $2^7$.
Compared to running the parallel version of $\texttt{nth_element}$ I linked to in my comment above, the implementation based on sub-samples will be simpler (and also scale much better as $p$ becomes larger than the number of cores on a single computing machine). Another advantage of the approach based on sub-samples over the one based on the parallel version of $\texttt{nth_element}$ is that the $O(km)$-space complexity can be split across the memory of multiple machines, even if $O(n)$ cannot fit inside the memory of any single one of them.
On the minus side, the solution based on $\bar{q}_n(\alpha)$ will not be permutation invariant (changing the order of the observations will give a different result).
[0]. Koenker, R. Quantile regression.
[1]. Knight, K. and Bassett, J.W. Second order improvements of sample quantiles using subsamples | Estimation of quantile given quantiles of subset | I suppose you deal with a data vector of length $n$ where $n$ is so large that it becomes necessary to spread the computations over many machines and that this vector cannot fit inside the memory of a | Estimation of quantile given quantiles of subset
I suppose you deal with a data vector of length $n$ where $n$ is so large that it becomes necessary to spread the computations over many machines and that this vector cannot fit inside the memory of any single one of those machines. This squares neatly with the definition of big data as defined here.
Suppose that the dataset $\{X_i\}_{i=1}^n$ is composed of independent draws from a distribution $F$ and denote $q(\alpha)$ the $\alpha$ quantile of $F$. Throughout, I will assume that $F$ is differentiable at $q(\alpha)$ and that $F^\prime(q(\alpha))>0$.
The sample estimator of $q(\alpha)$ is $\hat{q}_n(\alpha)$ and is defined as the minimizer over $t\in\mathbb{R}$ of:
$$(0)\quad h_n(t)=\sum_{i=1}^n\rho_\alpha(X_i-t).$$
where $\rho_{\alpha}(u)=|u(\alpha-\mathbb{I}_{(u<0)})|$ and
$\mathbb{I}$ is the indicator variable [0].
Assume that the dataset $\{X_i\}_{i=1}^n$ is partitioned into $k$ non-overlapping sub-samples of lengths $\{m_j\}_{j=1}^k$. I will denote $\{\hat{q}_n^{j}(\alpha)\}_{j=1}^k$ the estimators of
$\hat{q}_n(\alpha)$ obtained by solving (0) on the respective sub-samples. Then, if:
$$\min_{j=1}^k\lim_{n\to\infty} \frac{m_j}{n}>0$$ the estimator:
$$\bar{q}_n(\alpha)=\sum_{j=1}^k\frac{m_j}{n}\hat{q}_n^{j}(\alpha)$$
satisfies:
$$\sqrt{n}(\bar{q}_n(\alpha)-\hat{q}_n(\alpha))=o_p(1).$$
(You can find more details of these computations including the asymptotic variances of $\bar{q}_n(\alpha)$ in [1]).
Therefore, letting $m_j=m\;\forall j$, you can use non-overlapping sub-samples of your original data set to get a computationally convenient estimate of $\hat{q}_n(\alpha)$. Given $p$ computing units, this divide-and-conquer strategy will have cost $O(km/p)$ and storage $O(km)$ (versus $O(n)$ and $O(n)$ for computing (0) directly) at an asymptotically negligible cost in terms of precision. The finite sample costs will depend on how small $F^\prime(q(\alpha))$ is, but will typically be acceptable as soon as $m\geq 2^6$ to $2^7$.
Compared to running the parallel version of $\texttt{nth_element}$ I linked to in my comment above, the implementation based on sub-samples will be simpler (and also scale much better as $p$ becomes larger than the number of cores on a single computing machine). Another advantage of the approach based on sub-samples over the one based on the parallel version of $\texttt{nth_element}$ is that the $O(km)$-space complexity can be split across the memory of multiple machines, even if $O(n)$ cannot fit inside the memory of any single one of them.
On the minus side, the solution based on $\bar{q}_n(\alpha)$ will not be permutation invariant (changing the order of the observations will give a different result).
[0]. Koenker, R. Quantile regression.
[1]. Knight, K. and Bassett, J.W. Second order improvements of sample quantiles using subsamples | Estimation of quantile given quantiles of subset
I suppose you deal with a data vector of length $n$ where $n$ is so large that it becomes necessary to spread the computations over many machines and that this vector cannot fit inside the memory of a |
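A toy R version of the estimator $\bar{q}_n(\alpha)$ described above; the full vector is kept in memory here only so the divide-and-conquer result can be compared with the full-sample quantile (the exponential data and the choice $k=100$ are illustrative):
set.seed(1)
n <- 1e6; alpha <- 0.9; k <- 100
x <- rexp(n)                                  # stand-in for the big data vector
chunk <- sample(rep(1:k, length.out = n))     # k non-overlapping sub-samples
m_j   <- tabulate(chunk, nbins = k)           # sub-sample sizes
q_j   <- tapply(x, chunk, quantile, probs = alpha, names = FALSE)
q_bar <- sum((m_j / n) * q_j)                 # weighted average of sub-sample quantiles
q_hat <- quantile(x, probs = alpha, names = FALSE)  # full-sample estimate, for comparison
c(divide_and_conquer = q_bar, full_sample = q_hat)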
49,288 | How to get an effect size in nlme? | You can report the likelihood ratio test as an effect size measure. I'm not sure what the exact design of your overall model is, but say you're interested in a two-way repeated measures design where you want to assess the main effects of var1, var2, and the var1*var2 interaction.
To get the likelihood ratio, you can take a multilevel approach using nlme whereby you compare models against each other in a nested design.
Here's how you set the null model (where the dependent variable is predicted by its overall mean).
nullModel <- lme(depvar ~ 1, random = ~1 | id/var1/var2, data = data, method = "ML")
The var1 model
var1Model <- lme(depvar ~ var1, random = ~1 | id/var1/var2, data = data,
method = "ML")
The var2 model
var2Model <- lme(depvar ~ var1 + var2, random = ~1 | id/var1/var2, data = data, method = "ML")
And finally, the interaction or 'full' model, which includes the main effects and the interaction
IModel <- lme(depvar ~ var1 * var2, random = ~1 | id/var1/var2, data = data, method = "ML")
Then, you compare the models using the anova() function
anova(nullModel, var1Model, var2Model, IModel)
The L.ratios in your output are ratios of how much more likely the data is under a given model compared to another. You get p-values.
Here's example anova output from a dataset I'm currently working on (note the different model names). Here we can see that the data are only 1.12 times more likely under model 2 (cond_model_dp) than model 1 (baseline_dp), the null model. The interaction model (fullModel_dp) isn't much better than the model with both main effects (em_model_dp). According to the p-values below, none of the models are significantly better fits of the data.
Model df AIC BIC logLik Test L.Ratio p-value
baseline_dp 1 5 1073.524 1086.549 -531.7618
cond_model_dp 2 7 1076.408 1094.644 -531.2038 1 vs 2 1.115851 0.5724
em_model_dp 3 6 1072.636 1088.267 -530.3180 2 vs 3 1.771603 0.1832
fullModel_dp 4 10 1076.956 1103.008 -528.4780 3 vs 4 3.680001 0.4510
While many people (particularly in the biobehavioral sciences) would expect an eta-squared statistic for such an analysis, the likelihood ratio is arguably a "better" statistic as it's easier to interpret the magnitude of the effect. That is, Cohen's guidelines of small, medium, and large effects for eta squared are just rules of thumb, whereas the likelihood ratio is more immediately intuitive as it operates like an odds ratio.
Finally, if you have any missing data, just add "na.action = na.exclude" to your model, like this
var1Model <- lme(depvar ~ var1, random = ~1 | id/var1/var2, data = data,
method = "ML", na.action = na.exclude) | How to get an effect size in nlme? | You can report the likelihood ratio test as an effect size measure. I'm not sure what the exact design of your overall model is, but say you're interested in a two-way repeated measures design where y | How to get an effect size in nlme?
You can report the likelihood ratio test as an effect size measure. I'm not sure what the exact design of your overall model is, but say you're interested in a two-way repeated measures design where you want to assess the main effects of var1, var2, and the var1*var2 interaction.
To get the likelihood ratio, you can take a multilevel approach using nlme whereby you compare models against each other in a nested design.
Here's how you set the null model (where the dependent variable is predicted by its overall mean).
nullModel <- lme(depvar ~ 1, random = ~1 | id/var1/var2, data = data, method = "ML")
The var1 model
var1Model <- lme(depvar ~ var1, random = ~1 | id/var1/var2, data = data,
method = "ML")
The var2 model
var2Model <- lme(depvar ~ var1 + var2, random = ~1 | id/var1/var2, data = data, method = "ML")
And finally, the interaction or 'full' model, which includes the main effects and the interaction
IModel <- lme(depvar ~ var1 * var2, random = ~1 | id/var1/var2, data = data, method = "ML")
Then, you compare the models using the anova() function
anova(nullModel, var1Model, var2Model, IModel)
The L.ratios in your output are ratios of how much more likely the data is under a given model compared to another. You get p-values.
Here's example anova output from a dataset I'm currently working on (note the different model names). Here we can see that the data are only 1.12 times more likely under model 2 (cond_model_dp) than model 1 (baseline_dp), the null model. The interaction model (fullModel_dp) isn't much better than the model with both main effects (em_model_dp). According to the p-values below, none of the models are significantly better fits of the data.
Model df AIC BIC logLik Test L.Ratio p-value
baseline_dp 1 5 1073.524 1086.549 -531.7618
cond_model_dp 2 7 1076.408 1094.644 -531.2038 1 vs 2 1.115851 0.5724
em_model_dp 3 6 1072.636 1088.267 -530.3180 2 vs 3 1.771603 0.1832
fullModel_dp 4 10 1076.956 1103.008 -528.4780 3 vs 4 3.680001 0.4510
While many people (particularly in the biobehavioral sciences) would expect an eta-squared statistic for such an analysis, the likelihood ratio is arguably a "better" statistic as it's easier to interpret the magnitude of the effect. That is, Cohen's guidelines of small, medium, and large effects for eta squared are just rules of thumb, whereas the likelihood ratio is more immediately intuitive as it operates like an odds ratio.
Finally, if you have any missing data, just add "na.action = na.exclude" to your model, like this
var1Model <- lme(depvar ~ var1, random = ~1 | id/var1/var2, data = data,
method = "ML", na.action = na.exclude) | How to get an effect size in nlme?
You can report the likelihood ratio test as an effect size measure. I'm not sure what the exact design of your overall model is, but say you're interested in a two-way repeated measures design where y |
49,289 | Bayesian updating, point for point? | You can indeed update point-by-point or via a batch of observations, so long as your observations are at least exchangeable. Exchangeable random variables are conditionally independent given an appropriate latent variable.
That is, you have
$$
p(X_{1}, \ldots, X_{n} \, | \, \theta) = \prod_{i = 1}^{n}p(X_{i} \, | \, \theta)
$$
Since $p(\theta \, | \, X_{1}, \ldots, X_{n}) \propto \prod_{i}^{n} p(X_{i} \, | \, \theta)p(\theta)$, it doesn't matter in what order you multiply the $p(X_{i} | \theta)$ terms. You can do it one point at a time, in mini batches of $m < n$ points, all-at-once with $n$ points, etc.
Here's an inductive proof that performing $n$ point-by-point updates corresponds to doing a single $n$-point batch update. It suffices to show that
$$
p(\theta \, | \, X_{1}, \ldots, X_{n}) \propto p(X_{n} \, | \, \theta) \,p(\theta \, | \, X_{1}, \ldots, X_{n-1}).
$$
Notice that $p(\theta \, | \, X_{1}) \propto p(X_{1} \, | \, \theta) p(\theta)$ holds by Bayes' theorem. Assume that the result holds for the $n^{th}$ case, and consider the case for $n+1$.
We have:
\begin{align}
p(\theta \, | \, X_{1}, \ldots, X_{n + 1})
& \propto p(X_{1}, \ldots, X_{n+1} \, | \, \theta)\, p(\theta) & \text{Bayes' theorem} \\
& = p(X_{1}, \ldots, X_{n-1} \, | \, \theta) \,p(X_{n} \, | \, \theta) p(X_{n+1} \, | \, \theta) p(\theta) & \text{exchangeability} \\
& \propto p(X_{n+1} \, | \, \theta) \,p(X_{n} \, | \, \theta) \,p(\theta \, | \, X_{1}, \ldots, X_{n-1}) & \text{Bayes' theorem} \\
& = p(X_{n+1} \, | \, \theta) \,p(\theta \, | \, X_{1}, \ldots, X_{n}). \square & \text{inductive hypothesis}
\end{align} | Bayesian updating, point for point? | You can indeed update point-by-point or via a batch of observations, so long as your observations are at least exchangeable. Exchangeable random variables are conditionally independent given an app | Bayesian updating, point for point?
You can indeed update point-by-point or via a batch of observations, so long as your observations are at least exchangeable. Exchangeable random variables are conditionally independent given an appropriate latent variable.
That is, you have
$$
p(X_{1}, \ldots, X_{n} \, | \, \theta) = \prod_{i = 1}^{n}p(X_{i} \, | \, \theta)
$$
Since $p(\theta \, | \, X_{1}, \ldots, X_{n}) \propto \prod_{i}^{n} p(X_{i} \, | \, \theta)p(\theta)$, it doesn't matter in what order you multiply the $p(X_{i} | \theta)$ terms. You can do it one point at a time, in mini batches of $m < n$ points, all-at-once with $n$ points, etc.
Here's an inductive proof that performing $n$ point-by-point updates corresponds to doing a single $n$-point batch update. It suffices to show that
$$
p(\theta \, | \, X_{1}, \ldots, X_{n}) \propto p(X_{n} \, | \, \theta) \,p(\theta \, | \, X_{1}, \ldots, X_{n-1}).
$$
Notice that $p(\theta \, | \, X_{1}) \propto p(X_{1} \, | \, \theta) p(\theta)$ holds by Bayes' theorem. Assume that the result holds for the $n^{th}$ case, and consider the case for $n+1$.
We have:
\begin{align}
p(\theta \, | \, X_{1}, \ldots, X_{n + 1})
& \propto p(X_{1}, \ldots, X_{n+1} \, | \, \theta)\, p(\theta) & \text{Bayes' theorem} \\
& = p(X_{1}, \ldots, X_{n-1} \, | \, \theta) \,p(X_{n} \, | \, \theta) p(X_{n+1} \, | \, \theta) p(\theta) & \text{exchangeability} \\
& \propto p(X_{n+1} \, | \, \theta) \,p(X_{n} \, | \, \theta) \,p(\theta \, | \, X_{1}, \ldots, X_{n-1}) & \text{Bayes' theorem} \\
& = p(X_{n+1} \, | \, \theta) \,p(\theta \, | \, X_{1}, \ldots, X_{n}). \square & \text{inductive hypothesis}
\end{align} | Bayesian updating, point for point?
You can indeed update point-by-point or via a batch of observations, so long as your observations are at least exchangeable. Exchangeable random variables are conditionally independent given an app |
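A quick numerical check of the equivalence, using a conjugate Beta-Bernoulli model (my choice purely for illustration): folding in one observation at a time ends at the same posterior as a single batch update.
set.seed(1)
x <- rbinom(50, size = 1, prob = 0.3)    # exchangeable Bernoulli draws
a0 <- 1; b0 <- 1                         # Beta(1, 1) prior on theta
# batch update: Beta(a0 + number of successes, b0 + number of failures)
a_batch <- a0 + sum(x)
b_batch <- b0 + length(x) - sum(x)
# point-by-point update
a <- a0; b <- b0
for (xi in x) { a <- a + xi; b <- b + (1 - xi) }
c(a, b) == c(a_batch, b_batch)           # TRUE TRUE, as the induction argument implies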
49,290 | GLM diagnostics and Deviance residual | Deviance residuals will not in general have 0 mean; they don't for Gamma models.
However the mean deviance residual tends to be reasonably close to 0.
Here's an example of a residual plot from a simple identity link gamma fit (to simulated data for which the model was appropriate; in this case the shape parameter of the gamma was 3):
The plot on the left is a typical deviance residuals vs fitted type plot. The one on the right splits the fitted values into bins so we can use boxplots to help judge whether the spread is near constant; the 0 line is marked in red.
As you can see from the boxplots, judging from the IQR, the spread is pretty much constant (with some random variation at the right where there are few values), but the medians there are consistently below 0. We can see that (in this case) the deviance residuals appear to be close to symmetric.
The mean deviance residual for this model is -0.1126, (marked in blue) which is very close to where those marked medians are sitting. With such a big sample, this mean is many standard errors from 0, but the mean is still "near" 0 (in the sense that the standard deviation of the residuals is more than 5 times larger than 0.1126).
Based on simulations, it looks like (as long as n is large and the shape parameter is not too small) the average deviance residual for a Gamma will be about $-\frac{1}{3\alpha}$, where $\alpha$ is the common shape parameter for the gamma-distributed response. The relationship comes in fairly well by about $\alpha=2$, but much below that it tends to overestimate.
In summary: the mean deviance residual should be close to constant, with close to constant variance, but the mean of the deviance residuals should be "near" 0 rather than 0. | GLM diagnostics and Deviance residual | Deviance residuals will not in general have 0 mean; they don't for Gamma models.
However the mean deviance residual tends to be reasonably close to 0.
Here's an example of a residual plot from a simpl | GLM diagnostics and Deviance residual
Deviance residuals will not in general have 0 mean; they don't for Gamma models.
However the mean deviance residual tends to be reasonably close to 0.
Here's an example of a residual plot from a simple identity link gamma fit (to simulated data for which the model was appropriate; in this case the shape parameter of the gamma was 3):
The plot on the left is a typical deviance residuals vs fitted type plot. The one on the right splits the fitted values into bins so we can use boxplots to help judge whether the spread is near constant; the 0 line is marked in red.
As you can see from the boxplots, judging from the IQR, the spread is pretty much constant (with some random variation at the right where there are few values), but the medians there are consistently below 0. We can see that (in this case) the deviance residuals appear to be close to symmetric.
The mean deviance residual for this model is -0.1126, (marked in blue) which is very close to where those marked medians are sitting. With such a big sample, this mean is many standard errors from 0, but the mean is still "near" 0 (in the sense that the standard deviation of the residuals is more than 5 times larger than 0.1126).
Based on simulations, it looks like (as long as n is large and the shape parameter is not too small) the average deviance residual for a Gamma will be about $-\frac{1}{3\alpha}$, where $\alpha$ is the common shape parameter for the gamma-distributed response. The relationship comes in fairly well by about $\alpha=2$, but much below that it tends to overestimate.
In summary: the mean deviance residual should be close to constant, with close to constant variance, but the mean of the deviance residuals should be "near" 0 rather than 0. | GLM diagnostics and Deviance residual
Deviance residuals will not in general have 0 mean; they don't for Gamma models.
However the mean deviance residual tends to be reasonably close to 0.
Here's an example of a residual plot from a simpl |
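A small simulation along the lines of the answer above (identity-link Gamma fit, shape parameter 3), checking the mean deviance residual against the $-\frac{1}{3\alpha}$ approximation. The data-generating details and starting values are my own stand-ins, not the original simulation:
set.seed(1)
n <- 1e5; shape <- 3
x  <- runif(n, 1, 10)
mu <- 5 + 2 * x                                    # identity-link mean
y  <- rgamma(n, shape = shape, rate = shape / mu)  # Gamma response with mean mu, shape 3
# start at the true coefficients to keep the identity-link fit away from negative means
fit <- glm(y ~ x, family = Gamma(link = "identity"), start = c(5, 2))
mean(residuals(fit, type = "deviance"))            # roughly -0.11
-1 / (3 * shape)                                   # the -1/(3 alpha) approximation: -0.111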
49,291 | GLM diagnostics and Deviance residual | I am going to preface this statement with I am no statistician (I can understand and apply statistical concepts) and I an no GLM expert. From my understanding, GLMs follow the same assumptions of linear models. If the residuals deviate from the fitted values in an uniform way it would indicate that the model is either biased (or unbiased) and heteroscedastic(or homoscedastic). Therefore Yes, these should apply to all GLM link functions (I think). | GLM diagnostics and Deviance residual | I am going to preface this statement with I am no statistician (I can understand and apply statistical concepts) and I an no GLM expert. From my understanding, GLMs follow the same assumptions of line | GLM diagnostics and Deviance residual
I am going to preface this statement with I am no statistician (I can understand and apply statistical concepts) and I am no GLM expert. From my understanding, GLMs follow the same assumptions as linear models. If the residuals deviate from the fitted values in a uniform way it would indicate that the model is either biased (or unbiased) and heteroscedastic (or homoscedastic). Therefore, yes, these should apply to all GLM link functions (I think). | GLM diagnostics and Deviance residual
I am going to preface this statement with I am no statistician (I can understand and apply statistical concepts) and I an no GLM expert. From my understanding, GLMs follow the same assumptions of line |
49,292 | Why is coxph() so fast for survival analysis on big data? | Believe it or not, it's just Newton-Raphson. It's right here. The weighted mean and covariance matrices mentioned in the vignette passage are Equations (3.4) through (3.6). | Why is coxph() so fast for survival analysis on big data? | Believe it or not, it's just Newton-Raphson. It's right here. The weighted mean and covariance matrices mentioned in the vignette passage are Equations (3.4) through (3.6). | Why is coxph() so fast for survival analysis on big data?
Believe it or not, it's just Newton-Raphson. It's right here. The weighted mean and covariance matrices mentioned in the vignette passage are Equations (3.4) through (3.6). | Why is coxph() so fast for survival analysis on big data?
Believe it or not, it's just Newton-Raphson. It's right here. The weighted mean and covariance matrices mentioned in the vignette passage are Equations (3.4) through (3.6). |
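For concreteness, the Newton-Raphson step being referred to has the textbook form (written here for untied data; this is the standard presentation, not a quote from the vignette):
$$U(\beta)=\sum_{i:\,\delta_i=1}\Big(x_i-\bar{x}(\beta,t_i)\Big),\qquad \bar{x}(\beta,t_i)=\frac{\sum_{j\in R(t_i)} x_j\, e^{x_j^\top\beta}}{\sum_{j\in R(t_i)} e^{x_j^\top\beta}},\qquad \beta^{(k+1)}=\beta^{(k)}+\mathcal{I}\big(\beta^{(k)}\big)^{-1}U\big(\beta^{(k)}\big),$$
where $R(t_i)$ is the risk set at event time $t_i$, $\bar{x}(\beta,t_i)$ is the risk-set weighted mean, and $\mathcal{I}(\beta)$, minus the Hessian of the partial log-likelihood, is a sum of the corresponding risk-set weighted covariance matrices. Each iteration only needs these accumulated sums, and a handful of iterations typically suffices, which is why the fit stays fast even for large $n$.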
49,293 | Moving Average (MA) process: numerical intuition | Essentially, I agree with @IrishStat, but I would like to "rephrase" the answer a little.
If you assume that $Y_t$ follows an MA(2) process, then you have
$$Y_t = \varepsilon_t + \theta_1 \varepsilon_{t-1} + \theta_2 \varepsilon_{t-2}$$
(I assume no intercept for simplicity.) Note that this is not what you have in your equation.
Now if you are going to forecast $Y_t$ using the information available up to time $t-1$, $I_{t-1}$, the point forecast of $Y_t$ will be
$$
\begin{multline}
\begin{split}
\operatorname{E}(Y_t|I_{t-1})
&= \operatorname{E}(\varepsilon_t + \theta_1 \varepsilon_{t-1} + \theta_2 \varepsilon_{t-2}|I_{t-1}) \\
&= \operatorname{E}(\varepsilon_t|I_{t-1}) + \theta_1 \operatorname{E}(\varepsilon_{t-1}|I_{t-1}) + \theta_2 \operatorname{E}(\varepsilon_{t-2}|I_{t-1}) \\
&= 0 + \theta_1 \varepsilon_{t-1} + \theta_2 \varepsilon_{t-2} \\
&= \theta_1 \varepsilon_{t-1} + \theta_2 \varepsilon_{t-2}
\end{split}
\end{multline}
$$
Example:
if
$\varepsilon_1 = 5$,
$\varepsilon_2 = 6$,
$\theta_1 = 0.5$,
$\theta_2 = -0.25$,
then the point forecast of $Y_3$ given the information at time 2 is
$$
\begin{multline}
\begin{split}
\operatorname{E}(Y_3|I_2)
&= \theta_1 \varepsilon_{3-1} + \theta_2 \varepsilon_{3-2} \\
&= \theta_1 \varepsilon_2 + \theta_2 \varepsilon_1 \\
&= 0.5 \cdot 6 - 0.25 \cdot 5 \\
&= 1.75
\end{split}
\end{multline}
$$ | Moving Average (MA) process: numerical intuition | Essentially, I agree with @IrishStat, but I would like to "rephrase" the answer a little.
If you assume that $Y_t$ follows an MA(2) process, then you have
$$Y_t = \varepsilon_t + \theta_1 \varepsilon | Moving Average (MA) process: numerical intuition
Essentially, I agree with @IrishStat, but I would like to "rephrase" the answer a little.
If you assume that $Y_t$ follows an MA(2) process, then you have
$$Y_t = \varepsilon_t + \theta_1 \varepsilon_{t-1} + \theta_2 \varepsilon_{t-2}$$
(I assume no intercept for simplicity.) Note that this is not what you have in your equation.
Now if you are going to forecast $Y_t$ using the information available up to time $t-1$, $I_{t-1}$, the point forecast of $Y_t$ will be
$$
\begin{multline}
\begin{split}
\operatorname{E}(Y_t|I_{t-1})
&= \operatorname{E}(\varepsilon_t + \theta_1 \varepsilon_{t-1} + \theta_2 \varepsilon_{t-2}|I_{t-1}) \\
&= \operatorname{E}(\varepsilon_t|I_{t-1}) + \theta_1 \operatorname{E}(\varepsilon_{t-1}|I_{t-1}) + \theta_2 \operatorname{E}(\varepsilon_{t-2}|I_{t-1}) \\
&= 0 + \theta_1 \varepsilon_{t-1} + \theta_2 \varepsilon_{t-2} \\
&= \theta_1 \varepsilon_{t-1} + \theta_2 \varepsilon_{t-2}
\end{split}
\end{multline}
$$
Example:
if
$\varepsilon_1 = 5$,
$\varepsilon_2 = 6$,
$\theta_1 = 0.5$,
$\theta_2 = -0.25$,
then the point forecast of $Y_3$ given the information at time 2 is
$$
\begin{multline}
\begin{split}
\operatorname{E}(Y_3|I_2)
&= \theta_1 \varepsilon_{3-1} + \theta_2 \varepsilon_{3-2} \\
&= \theta_1 \varepsilon_2 + \theta_2 \varepsilon_1 \\
&= 0.5 \cdot 6 - 0.25 \cdot 5 \\
&= 1.75
\end{split}
\end{multline}
$$ | Moving Average (MA) process: numerical intuition
Essentially, I agree with @IrishStat, but I would like to "rephrase" the answer a little.
If you assume that $Y_t$ follows an MA(2) process, then you have
$$Y_t = \varepsilon_t + \theta_1 \varepsilon |
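The same arithmetic as a two-line R check (numbers taken from the example above):
theta1 <- 0.5; theta2 <- -0.25
e1 <- 5; e2 <- 6
theta1 * e2 + theta2 * e1   # 1.75 = point forecast of Y_3 given information at time 2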
49,294 | Moving Average (MA) process: numerical intuition | The current error $e_t$ is never known until after the $Y_t$ is observed thus it is set to 0.0 . The MA(2) process is $Y_t= + .5 * e_{t-1}+ .5* e_{t-2} + e_t$ where $e_t= 0.0$. No forecast is possible until period 3. | Moving Average (MA) process: numerical intuition | The current error $e_t$ is never known until after the $Y_t$ is observed thus it is set to 0.0 . The MA(2) process is $Y_t= + .5 * e_{t-1}+ .5* e_{t-2} + e_t$ where $e_t= 0.0$. No forecast is possible | Moving Average (MA) process: numerical intuition
The current error $e_t$ is never known until after the $Y_t$ is observed thus it is set to 0.0 . The MA(2) process is $Y_t= + .5 * e_{t-1}+ .5* e_{t-2} + e_t$ where $e_t= 0.0$. No forecast is possible until period 3. | Moving Average (MA) process: numerical intuition
The current error $e_t$ is never known until after the $Y_t$ is observed thus it is set to 0.0 . The MA(2) process is $Y_t= + .5 * e_{t-1}+ .5* e_{t-2} + e_t$ where $e_t= 0.0$. No forecast is possible |
49,295 | Margin of Error of Sample Variance | If you reparameterize in terms of:
$$\sqrt{n} \left( \left[\begin{array}{c} \bar{X} \\ S_n^2 \end{array}\right] - \left[\begin{array}{c} \mu \\ \sigma^2 \end{array}\right] \right) \rightarrow_d \mathcal{N} \left( \left[ \begin{array}{c} 0 \\ 0 \end{array} \right] , \left[ \begin{array}{cc} \sigma^2 & 0 \\ 0 & 2\sigma^4 \end{array} \right] \right)$$
you would get an asymptotic distribution that's more efficient... since this gives CIs that do approach 0. That's just not possible :)
For $X$ distributed as you say, $r_i^2 = (X_i - \mu)^2 \sim \chi^2_1$ and $\sum_{i=1}^n r_i^2 \sim \chi^2_{n}$ with exact $1-\alpha$ (e.g. 95%) bounds given by the quantiles $F^{-1}_{\chi^2_{N}}(\alpha/2)$ and $F^{-1}_{\chi^2_{N}}(1-\alpha/2)$. Now, I haven't handled the issue of the plug-in $\bar{X}$ estimator, but we can get some intuition by ignoring it for now. We should at least verify at this point that the upper bound quantile is less than $\mathcal{O}(N)$.
Chernoff bounds have been derived in terms of the quantile function, and a similar method may derive the percentile function... but I haven't done so. | Margin of Error of Sample Variance | If you reparameterize in terms of:
$$\sqrt{n} \left( \left[\begin{array}{c} \bar{X} \\ S_n^2 \end{array}\right] - \left[\begin{array}{c} \mu \\ \sigma^2 \end{array}\right] \right) \rightarrow_d \mathc | Margin of Error of Sample Variance
If you reparameterize in terms of:
$$\sqrt{n} \left( \left[\begin{array}{c} \bar{X} \\ S_n^2 \end{array}\right] - \left[\begin{array}{c} \mu \\ \sigma^2 \end{array}\right] \right) \rightarrow_d \mathcal{N} \left( \left[ \begin{array}{c} 0 \\ 0 \end{array} \right] , \left[ \begin{array}{cc} \sigma^2 & 0 \\ 0 & 2\sigma^4 \end{array} \right] \right)$$
you would get an asymptotic distribution that's more efficient... since this gives CIs that do approach 0. That's just not possible :)
For $X$ distributed as you say, $r_i^2 = (X_i - \mu)^2 \sim \chi^2_1$ and $\sum_{i=1}^n r_i^2 \sim \chi^2_{n}$ with exact $1-\alpha$ (e.g. 95%) bounds given by the quantiles $F^{-1}_{\chi^2_{N}}(\alpha/2)$ and $F^{-1}_{\chi^2_{N}}(1-\alpha/2)$. Now, I haven't handled the issue of the plug-in $\bar{X}$ estimator, but we can get some intuition by ignoring it for now. We should at least verify at this point that the upper bound quantile is less than $\mathcal{O}(N)$.
Chernoff bounds have been derived in terms of the quantile function, and a similar method may derive the percentile function... but I haven't done so. | Margin of Error of Sample Variance
If you reparameterize in terms of:
$$\sqrt{n} \left( \left[\begin{array}{c} \bar{X} \\ S_n^2 \end{array}\right] - \left[\begin{array}{c} \mu \\ \sigma^2 \end{array}\right] \right) \rightarrow_d \mathc |
49,296 | Margin of Error of Sample Variance | You can frame this problem more simply by looking at the inverse gamma distribution:
$$W_n \equiv \frac{n}{\chi_n^2} \sim \text{Inverse-Gamma}(\tfrac{n}{2},\tfrac{n}{2}).$$
This distribution has mean $\mathbb{E}(W_n) = n/(n-2)$ and variance $\mathbb{V}(W_n) = 2 n^2/(n-2)^2 (n-4)$ so you have $\mathbb{E}(W_n) \rightarrow 1$ and $\mathbb{V}(W_n) \rightarrow 0$, which means that $W_n$ converges in probability to unity (i.e., the distribution becomes tighter and tighter around one as $n \rightarrow \infty$). Using standard notation for the critical points of this distribution you can write your confidence interval for the variance as:
$$\text{CI}(1-\alpha) = \Big[ W_{\alpha/2, n-1} \cdot S^2, W_{1-\alpha/2, n-1} \cdot S^2 \Big].$$
Thus, you can write the length of the interval as:
$$|\text{CI}(1-\alpha)| = S^2 \cdot L_\alpha(n)
\quad \quad \quad
L_\alpha(n) \equiv W_{1-\alpha/2, n-1}-W_{\alpha/2, n-1}.$$
The function $L_\alpha$ is the object you want to examine if you are interested in looking at the width of the confidence interval analytically. This function is related directly to the quantile function for the inverse gamma distribution. For any fixed $0 <\alpha < 1$ you can examine how this function changes with $n$. It should be possible to show that this function is a decreasing function (i.e., the confidence interval gets more accurate as $n$ increases) and to show that $L_\alpha(n) \rightarrow 0$. I imagine it would be possible to prove this by establishing appropriate bounds on the quantile function and then using the squeeze theorem. | Margin of Error of Sample Variance | You can frame this problem more simply by looking at the inverse gamma distribution:
$$W_n \equiv \frac{n}{\chi_n^2} \sim \text{Inverse-Gamma}(\tfrac{n}{2},\tfrac{n}{2}).$$
This distribution has mean | Margin of Error of Sample Variance
You can frame this problem more simply by looking at the inverse gamma distribution:
$$W_n \equiv \frac{n}{\chi_n^2} \sim \text{Inverse-Gamma}(\tfrac{n}{2},\tfrac{n}{2}).$$
This distribution has mean $\mathbb{E}(W_n) = n/(n-2)$ and variance $\mathbb{V}(W_n) = 2 n^2/(n-2)^2 (n-4)$ so you have $\mathbb{E}(W_n) \rightarrow 1$ and $\mathbb{V}(W_n) \rightarrow 0$, which means that $W_n$ converges in probability to unity (i.e., the distribution becomes tighter and tighter around one as $n \rightarrow \infty$). Using standard notation for the critical points of this distribution you can write your confidence interval for the variance as:
$$\text{CI}(1-\alpha) = \Big[ W_{\alpha/2, n-1} \cdot S^2, W_{1-\alpha/2, n-1} \cdot S^2 \Big].$$
Thus, you can write the length of the interval as:
$$|\text{CI}(1-\alpha)| = S^2 \cdot L_\alpha(n)
\quad \quad \quad
L_\alpha(n) \equiv W_{1-\alpha/2, n-1}-W_{\alpha/2, n-1}.$$
The function $L_\alpha$ is the object you want to examine if you are interested in looking at the width of the confidence interval analytically. This function is related directly to the quantile function for the inverse gamma distribution. For any fixed $0 <\alpha < 1$ you can examine how this function changes with $n$. It should be possible to show that this function is a decreasing function (i.e., the confidence interval gets more accurate as $n$ increases) and to show that $L_\alpha(n) \rightarrow 0$. I imagine it would be possible to prove this by establishing appropriate bounds on the quantile function and then using the squeeze theorem. | Margin of Error of Sample Variance
You can frame this problem more simply by looking at the inverse gamma distribution:
$$W_n \equiv \frac{n}{\chi_n^2} \sim \text{Inverse-Gamma}(\tfrac{n}{2},\tfrac{n}{2}).$$
This distribution has mean |
49,297 | Multiple comparisons after Kruskal Wallis using the FDR approach. How to compute P values (Dunn or Mann-Whitney)? | From what I understood of the OP question:
1) He ran an omnibus Kruskal-Wallis with significant results
2) He wants to run a pairwise test on all groups and he is in doubt whether to use Mann-Whitney or Dunn's test
3) He wants to run his own multiple comparison adjustment procedure, so he needs the uncorrected p-values of each pairwise comparison.
The source of confusion is that Dunn test implemented in GraphPad seems to already include a multiple comparison adjustment (which looks like a Bonferroni adjustment - see http://www.graphpad.com/guides/prism/6/statistics/index.htm?stat_nonparametric_multiple_compari.htm).
Answering:
2) You should use Dunn's test. Both the CV answer by @Alexis for Post-hoc tests after Kruskal-Wallis: Dunn's test or Bonferroni corrected Mann-Whitney tests? and this site from XLSTAT http://www.xlstat.com/en/products-solutions/feature/kruskal-wallis-test.html agree that Dunn (or Conover-Iman or Steel-Dwass-Critchlow-Fligner) are the appropriate post-hoc tests after a KW (disclosure - I did not know that until today - I have been using Mann-Whitney as post-hoc to KW until today).
3) I did not understand the GraphPad page, but let me point you to the dunn.test package in R, which does what the OP wants. In particular it distinguishes the Dunn test and multiple comparison adjustments, and one can set the adjustment method to "none", which will return the unadjusted p-values.
Also notice that among the adjustment procedures there are the Benjamini-Hochberg (1995) and the Benjamini-Yekutieli (2001) adjustments, which are FDR procedures (maybe one of them is the one the OP is thinking of using).
Let me stress what many of the commentators have been saying - there is no good reason to use the unadjusted p-values EXCEPT to implement your own adjustment procedure - no decision should be made based on the unadjusted p-values. | Multiple comparisons after Kruskal Wallis using the FDR approach. How to compute P values (Dunn or M | From what I understood of the OP question:
1) He ran a omnibus Kruskal-Wallis with significant results
2) He want to run a pairwise test on all groups and he is in doubt whether to use Mann-Whitney or | Multiple comparisons after Kruskal Wallis using the FDR approach. How to compute P values (Dunn or Mann-Whitney)?
From what I understood of the OP question:
1) He ran an omnibus Kruskal-Wallis with significant results
2) He wants to run a pairwise test on all groups and he is in doubt whether to use Mann-Whitney or Dunn's test
3) He wants to run his own multiple comparison adjustment procedure, so he needs the uncorrected p-values of each pairwise comparison.
The source of confusion is that Dunn test implemented in GraphPad seems to already include a multiple comparison adjustment (which looks like a Bonferroni adjustment - see http://www.graphpad.com/guides/prism/6/statistics/index.htm?stat_nonparametric_multiple_compari.htm).
Answering:
2) You should use Dunn's test. Both the CV answer by @Alexis for Post-hoc tests after Kruskal-Wallis: Dunn's test or Bonferroni corrected Mann-Whitney tests? and this site from XLSTAT http://www.xlstat.com/en/products-solutions/feature/kruskal-wallis-test.html agree that Dunn (or Conover-Iman or Steel-Dwass-Critchlow-Fligner) are the appropriate post-hoc tests after a KW (disclosure - I did not know that until today - I have been using Mann-Whitney as post-hoc to KW until today).
3) I did not understand the GraphPad page, but let me point you to the dunn.test package in R, which does what the OP wants. In particular it distinguishes the Dunn test and multiple comparison adjustments, and one can set the adjustment method to "none", which will return the unadjusted p-values.
Also notice that among the adjustment procedures there are the Benjamini-Hochberg (1995) and the Benjamini-Yekutieli (2001) adjustments, which are FDR procedures (maybe one of them is the one the OP is thinking of using).
Let me stress of many of the commentators have been saying - there is no good reason to use the unadjusted p-values EXCEPT to implement your own adjustment procedure - no decision should be made based on the unadjusted p-values. | Multiple comparisons after Kruskal Wallis using the FDR approach. How to compute P values (Dunn or M
From what I understood of the OP question:
1) He ran a omnibus Kruskal-Wallis with significant results
2) He want to run a pairwise test on all groups and he is in doubt whether to use Mann-Whitney or |
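A sketch of that workflow in R with the dunn.test package; the toy data frame is made up, and the method codes "none" and "bh" are the ones I recall the package using for unadjusted and Benjamini-Hochberg p-values (check ?dunn.test):
library(dunn.test)
set.seed(1)
df <- data.frame(y = c(rnorm(20, 0), rnorm(20, 1), rnorm(20, 2)),
                 group = factor(rep(c("a", "b", "c"), each = 20)))
kruskal.test(y ~ group, data = df)          # omnibus Kruskal-Wallis
dunn.test(df$y, df$group, method = "none")  # Dunn's test, unadjusted p-values
dunn.test(df$y, df$group, method = "bh")    # Dunn's test with Benjamini-Hochberg (FDR)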
49,298 | Rule of thumb for using logarithmic scale | As a rule of thumb, try to make the data fit a (standard) normal distribution, a uniform distribution or any other distribution where the values are more or less “evenly” distributed.
As a measurement, one thing that you could aim for is to maximize the distribution’s entropy for a fixed variance.
So, if your data is approximately log-normally distributed, taking its logarithm would probably be a good idea since afterwards it would be approximately normally distributed.
Another way to determine how to preprocess the data would be to transform it to a distribution in which an additive perturbation of a certain size would be equally significant no matter what the value that was being perturbed was. For example, if a 5 % raise in salary can be said to be equally significant no matter how much money you earn, you should probably logarithmize the data since that would make an additive perturbation equally significant for all values. | Rule of thumb for using logarithmic scale | As a rule of thumb, try to make the data fit a (standard) normal distribution, a uniform distribution or any other distribution where the values are more or less “evenly” distributed.
As a measurement | Rule of thumb for using logarithmic scale
As a rule of thumb, try to make the data fit a (standard) normal distribution, a uniform distribution or any other distribution where the values are more or less “evenly” distributed.
As a measurement, one thing that you could aim for is to maximize the distribution’s entropy for a fixed variance.
So, if your data is approximately log-normally distributed, taking its logarithm would probably be a good idea since afterwards it would be approximately normally distributed.
Another way to determine how to preprocess the data would be to transform it to a distribution in which an additive perturbation of a certain size would be equally significant no matter what the value that was being perturbed was. For example, if a 5 % raise in salary can be said to be equally significant no matter how much money you earn, you should probably logarithmize the data since that would make an additive perturbation equally significant for all values. | Rule of thumb for using logarithmic scale
As a rule of thumb, try to make the data fit a (standard) normal distribution, a uniform distribution or any other distribution where the values are more or less “evenly” distributed.
As a measurement |
49,299 | Rule of thumb for using logarithmic scale | (So to be kosher and not mix the question with an answer.)
Right now I am using the scale which minimizes the following ratio:
$$\frac{\sqrt[4]{\langle (x - \bar{x})^4 \rangle}}{\sqrt{\langle (x - \bar{x})^2 \rangle}}$$
That is, after normalizing a variable (i.e. mean 0 and variance 1) I am looking to have the 4th moment as low as possible (so as to penalize too long-tailed, or otherwise dispersed, distributions).
For me it works (but I am not sure if it's only my using it; and if there are any easy pitfalls). | Rule of thumb for using logarithmic scale | (So to be kosher and not mix the question with an answer.)
Right now I am using scale which minimized the following ratio:
$$\frac{\sqrt[4]{\langle (x - \bar{x})^4 \rangle}}{\sqrt{\langle (x - \bar{x} | Rule of thumb for using logarithmic scale
(So to be kosher and not mix the question with an answer.)
Right now I am using the scale which minimizes the following ratio:
$$\frac{\sqrt[4]{\langle (x - \bar{x})^4 \rangle}}{\sqrt{\langle (x - \bar{x})^2 \rangle}}$$
That is, after normalizing a variable (i.e. mean 0 and variance 1) I am looking to have the 4th moment as low as possible (so as to penalize too long-tailed, or otherwise dispersed, distributions).
For me it works (but I am not sure if it's only my using it; and if there are any easy pitfalls). | Rule of thumb for using logarithmic scale
(So to be kosher and not mix the question with an answer.)
Right now I am using scale which minimized the following ratio:
$$\frac{\sqrt[4]{\langle (x - \bar{x})^4 \rangle}}{\sqrt{\langle (x - \bar{x} |
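As a concrete version of that criterion, a short R function computing the ratio for a variable before and after taking logs (the skewed toy data are my own choice):
tail_ratio <- function(x) {
  z <- x - mean(x)
  mean(z^4)^(1/4) / sqrt(mean(z^2))   # 4th root of the 4th central moment over the SD
}
set.seed(1)
x <- rlnorm(1e4, meanlog = 0, sdlog = 1)              # heavy right tail
c(raw = tail_ratio(x), logged = tail_ratio(log(x)))   # the log scale gives the smaller ratio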
49,300 | Evaluate glmtree model | The strategy you describe looks very reasonable. For evaluation you can use the usual kinds of measures that you employ for other binary classifiers (or trees in particular): misclassification rate (or conversely classification accuracy), log-likelihood, ROC, AUC, etc. Personally, I often use the ROCR package but the pROC package you used appears to offer useful tools for this as well.
For improving the model, you might consider extending the model part from an intercept (fD ~ 1) to something with a regressor. I would recommend doing so based on subject-matter knowledge, which I presume you have for this analysis. If, for example, you suspect that the Qualification effect or the Age effect depends on interactions with the remaining variables, then you could use fD ~ Age + Qualification | fGender + fOccupation + SizeWorkplc or something like this. The choice of the model certainly depends on what you could interpret or which interactions you would want to assess. | Evaluate glmtree model | The strategy you describe looks very reasonable. For evaluation you can use the usual kinds of measures that you employ for other binary classiers (or | Evaluate glmtree model
The strategy you describe looks very reasonable. For evaluation you can use the usual kinds of measures that you employ for other binary classifiers (or trees in particular): misclassification rate (or conversely classification accuracy), log-likelihood, ROC, AUC, etc. Personally, I often use the ROCR package but the pROC package you used appears to offer useful tools for this as well.
For improving the model, you might consider extending the model part from an intercept (fD ~ 1) to something with a regressor. I would recommend doing so based on subject-matter knowledge, which I presume you have for this analysis. If, for example, you suspect that the Qualification effect or the Age effect depends on interactions with the remaining variables, then you could use fD ~ Age + Qualification | fGender + fOccupation + SizeWorkplc or something like this. The choice of the model certainly depends on what you could interpret or which interactions you would want to assess. | Evaluate glmtree model
The strategy you describe looks very reasonable. For evaluation you can use the usual kinds of measures that you employ for other binary classiers (or trees in particular): misclassification rate (or |
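A sketch of the evaluation step in R, assuming a fitted partykit::glmtree object called tr, a held-out data frame test containing the binary factor fD, and that predict() with type = "response" returns fitted probabilities for glmtree (worth verifying against ?partykit::predict.modelparty):
library(pROC)
# tr   : fitted glmtree (hypothetical object name)
# test : hold-out data frame with the same variables, including the factor fD
p_hat <- predict(tr, newdata = test, type = "response")   # predicted P(fD = second level)
pred  <- factor(ifelse(p_hat > 0.5, levels(test$fD)[2], levels(test$fD)[1]),
                levels = levels(test$fD))
mean(pred != test$fD)        # misclassification rate on the hold-out set
auc(roc(test$fD, p_hat))     # AUC, as in the pROC-based evaluation discussed above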