49,001
Doubt regarding mixed modeling format
But I'm wondering if I can also add information about the food and weather, which are not part of my fixed effects, into my model (without including interactions). Yes, you can. By not including them as fixed effects, but including them as random slopes, you are saying that the overall mean slope is zero, but each individual subject (baby, and weather, in your model) will have its own slope. Whether that makes sense in your modelling context is another matter altogether. Note that in your 2nd model you include weather as a grouping variable for random intercepts. You said that there are only 2 levels of weather, sunny or cloudy, so in this case that would not make sense, because the software will try to estimate a variance for a normally distributed variable based on only 2 observations. So in this case you would specify weather as a fixed effect. Also note that in your 2nd model the || syntax means, at least in the lme4 package, that the software will not estimate a correlation between the random slopes and the random intercepts.
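A rough sketch of the two specifications being contrasted, using lme4 in R. All variable names (growth, food, weather, baby) and the simulated data are invented for illustration, not taken from the question; food is numeric so that the double-bar syntax behaves as intended.

library(lme4)

# toy data, purely illustrative
set.seed(1)
df <- data.frame(
  baby    = factor(rep(1:20, each = 10)),
  food    = rnorm(200),
  weather = factor(sample(c("sunny", "cloudy"), 200, replace = TRUE))
)
df$growth <- 1 + 0.5 * df$food + 0.3 * (df$weather == "sunny") +
  rnorm(20, sd = 0.4)[df$baby] + rnorm(200, sd = 0.2)

# weather as a fixed effect (only two levels, so not a sensible grouping variable);
# food as a random slope over babies, correlated with the random intercept:
m1 <- lmer(growth ~ food + weather + (1 + food | baby), data = df)

# same model with the double-bar syntax, which in lme4 fixes the correlation
# between the random intercept and the random slope at zero:
m2 <- lmer(growth ~ food + weather + (1 + food || baby), data = df)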
49,002
What are some non-toy applications of autoencoders?
One statistical application of denoising autoencoders is multiple imputation: the autoencoder tries to compress the data to a low-dimensional signal (that isn't missing) plus noise (that's sometimes missing). Compared to either Bayesian data augmentation or the popular 'mice' algorithms, the autoencoders seem to scale better to large numbers of variables, and may potentially handle nonlinearity and interaction better. (This is still a research area, but it's a serious application.) Andrew Gelman writes about an early attempt here, and the current version of that specific project is here
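A minimal denoising-autoencoder sketch in R, using the keras package (assumed to be installed with a TensorFlow backend). This only illustrates the compress-and-reconstruct idea, not the multiple-imputation procedure referenced above; all sizes and names are made up for the example.

library(keras)

# fake "complete" data and a corrupted copy used as the input
set.seed(1)
x_clean <- matrix(rnorm(1000 * 20), ncol = 20)
x_noisy <- x_clean + matrix(rnorm(1000 * 20, sd = 0.3), ncol = 20)

# a small bottleneck forces the network to keep only the low-dimensional signal
model <- keras_model_sequential() %>%
  layer_dense(units = 5, activation = "relu", input_shape = 20) %>%
  layer_dense(units = 20)

model %>% compile(optimizer = "adam", loss = "mse")
model %>% fit(x_noisy, x_clean, epochs = 30, batch_size = 64, verbose = 0)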
49,003
What are some non-toy applications of autoencoders?
From the Autoencoder Wikipedia article: One milestone paper on the subject was that of Geoffrey Hinton with his publication in Science Magazine in 2006 [Reducing the Dimensionality of Data with Neural Networks, by G. E. Hinton et al.]: in that study, he pretrained a multi-layer autoencoder with a stack of RBMs and then used their weights to initialize a deep autoencoder with gradually smaller hidden layers until a bottleneck of 30 neurons. The resulting 30 dimensions of the code yielded a smaller reconstruction error compared to the first 30 principal components of a PCA, and learned a representation that was qualitatively easier to interpret, clearly separating clusters in the original data.
49,004
What are some non-toy applications of autoencoders?
One increasingly popular biological area of application for autoencoders is single-cell transcriptomics, which typically generates large, sparse data matrices. Here autoencoders have been applied both for de-noising and for rapid dimensionality reduction.
49,005
What are some non-toy applications of autoencoders?
One application of autoencoders that I am exploring is building a content-based image search engine:
Train an autoencoder network on product catalogue data (images).
Extract the encoder layer from the trained model and encode the images (based on the latent dimensions).
Index the encoded features of the images.
At query time, encode the query image's features and search the index to find "similar" images.
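A rough sketch of the indexing and query steps in plain R. The trained encoder is left abstract here: encode() is a stand-in faked with a random projection, and all names and sizes are assumptions of the example.

set.seed(1)
proj   <- matrix(rnorm(1024 * 32), nrow = 1024)    # placeholder "encoder" weights
encode <- function(features) features %*% proj     # stand-in for the extracted encoder

catalogue <- matrix(rnorm(500 * 1024), nrow = 500) # raw features of 500 catalogue images
index     <- encode(catalogue)                     # encoded features kept as the index

query  <- catalogue[42, , drop = FALSE] + rnorm(1024, sd = 0.1)  # a query image
q_code <- encode(query)

# cosine similarity between the query code and every indexed code
sims <- as.vector(index %*% t(q_code)) /
  (sqrt(rowSums(index^2)) * sqrt(sum(q_code^2)))
head(order(sims, decreasing = TRUE))               # most "similar" catalogue items first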
49,006
When computing parameters, why is the dimension of the hidden-output state of an LSTM cell assumed to be the same as the number of LSTM cells?
I came across this link https://stackoverflow.com/questions/38080035/how-to-calculate-the-number-of-parameters-of-an-lstm-network, and it seems to suggest that hidden output state dimension = number of lstm cells in the layer. Why is that?

Each cell's hidden state is 1 float. As an example, the reason you'd have output dimension 256 is because you have 256 units; each unit produces 1 output dimension. For example, see this documentation page for Pytorch https://www.pytorch.org/docs/stable/nn.html. If we look at the output entry for an LSTM, the hidden state has shape (num_layers * num_directions, batch, hidden_size). So for a model with 1 layer, 1 direction (i.e. not bidirectional), and batch size 1, we have hidden_size floats in total. You can also see this if you keep track of the dimensions used in the LSTM computation. At each timestep (element of the input sequence), an LSTM layer carries out these operations, which are just compositions of matrix-vector products and activation functions. $$ \begin{aligned} i_t &= \sigma(W_{ii} x_t + b_{ii} + W_{hi} h_{t-1} + b_{hi}) \\ f_t &= \sigma(W_{if} x_t + b_{if} + W_{hf} h_{t-1} + b_{hf}) \\ g_t &= \tanh(W_{ig} x_t + b_{ig} + W_{hg} h_{t-1} + b_{hg}) \\ o_t &= \sigma(W_{io} x_t + b_{io} + W_{ho} h_{t-1} + b_{ho}) \\ c_t &= f_t \odot c_{t-1} + i_t \odot g_t \\ h_t &= o_t \odot \tanh(c_t) \\ \end{aligned} $$ We're focused on the hidden state, $h_t$, so look at the operations involving $h_{t-1}$, because this is the hidden state at the previous time step. The hidden-to-hidden weight matrices must have size hidden_size by hidden_size because they appear in a matrix-vector product in which the vector has size hidden_size, and the two must be conformable. The input-to-hidden weight matrices must have size hidden_size by input_size because there the vector has size input_size. Importantly, your distinction between hidden size and number of units never makes an appearance. If hidden size and number of units were different, then this matrix-vector arithmetic would, somewhere, fail to be conformable because the dimensions would not be compatible. As for counting the number of parameters in an LSTM model, see How can calculate number of weights in LSTM. I believe the confusion arises because OP has confused the hidden output state, which is an output of the model, with the weights of the hidden state. I think this is the case because you insist that the hidden state has shape (n,n). It's not; the hidden weights, however, are square matrices. LSTM cells have memory, which is returned as part of the output. This is used together with the model weights and biases to yield the prediction for the next time step. The difference between the hidden state output and the hidden weights is that the model weights are the same for all time steps, while the hidden state can vary. This "memory" component is where LSTMs get their name.
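To make the parameter counting concrete, here is the arithmetic in R, following the gate equations above (which, like PyTorch, carry two bias vectors per gate); the sizes are arbitrary examples.

# one unidirectional LSTM layer: four gates, each with an input-to-hidden matrix
# (hidden_size x input_size), a hidden-to-hidden matrix (hidden_size x hidden_size),
# and two bias vectors of length hidden_size
lstm_params <- function(input_size, hidden_size) {
  4 * (hidden_size * input_size + hidden_size * hidden_size + 2 * hidden_size)
}
lstm_params(input_size = 128, hidden_size = 256)
# 395264; implementations with a single bias vector per gate drop one of the 2 * hidden_size terms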
49,007
When computing parameters, why is the dimension of the hidden-output state of an LSTM cell assumed to be the same as the number of LSTM cells?
Regarding the question "why is the dimension of the hidden state related to the number of cells in an LSTM layer?": from what I understand, a layer of 4 cells would be represented as in the picture I attached. The picture makes it clear that the state $H$ has dimension 4, which is directly related to the number of cells (hidden states) of the layer. I hope that clarifies the original question, and please correct me if I'm wrong.
49,008
Expected Value of Maximum of Uniform Random Variables
The issue is that you aren't considering the full support of the cdf of $Y=\max\{X_1,X_2,X_3\}$. The full support is $(0, \infty)$. Take a look here: https://en.wikipedia.org/wiki/Uniform_distribution_(continuous) at the definition of $F(x)$. For a single $X_i$ with $a=200$ and $b=600$, $F(y)=\frac{y-200}{400}$ on $[200,600]$, so for the maximum of three you have $F(y)=\left(\frac{y-200}{400}\right)^3$ there. Then $1-F(y) = 1$ if $y < 200$, $1-F(y)=0$ if $y>600$, and $1-\left(\frac{y-200}{400}\right)^3$ when $y \in [200, 600]$. So the part you are missing in your calculations is $$\int_0^{200}1\,dy=200,$$ which is exactly the amount by which you are undershooting. The portion of the integral above $600$ is all $0$, so it can safely be omitted from the calculation. If you wanted to be complete, you'd write: $$ \mathbb{E}(Y_{(3)}) = \int_0^{200}(1-F(y))dy + \int_{200}^{600}(1-F(y))dy + \int_{600}^{\infty}(1-F(y))dy, $$ which is $$ \int_0^{200}1\,dy + \int_{200}^{600}\left(1-\left(\frac{y-200}{400}\right)^3\right)dy + \int_{600}^{\infty}0\,dy, $$ which simplifies to $$ 200 + 300 + 0 = 500. $$
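A quick Monte Carlo check of the result in plain R (illustrative only):

set.seed(1)
mean(replicate(1e5, max(runif(3, min = 200, max = 600))))
# roughly 500, matching 200 + 300 above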
49,009
Expected Value of Maximum of Uniform Random Variables
I think if $X_i \sim \mathrm{Uniform}(a=200, b=600)$ and $n=3$, then $$\mathbb{E}[\max(X_1,X_2,X_3)] = \int_a^b m\, p(m)\, dm = \frac{n}{n+1}(b-a) + a = \frac{3}{4}\cdot 400 + 200 = 500,$$ where $p(m)$, the density of the maximum, is obtained by requiring one of the three variables to take the value $m$ while the other two fall below it: $$ p(m)= \sum^{n=3}_{i=1} \frac{1}{b-a}\left(\frac{m-a}{b-a}\right)^{n-1}=\frac{n}{b-a}\left(\frac{m-a}{b-a}\right)^{n-1}. $$
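As a sketch of the step from the integral to $\frac{n}{n+1}(b-a)+a$, substitute $u=(m-a)/(b-a)$, so that $dm=(b-a)\,du$: $$\int_a^b m\,\frac{n}{b-a}\left(\frac{m-a}{b-a}\right)^{n-1} dm=\int_0^1 \big(a+(b-a)u\big)\, n u^{n-1}\, du = a + \frac{n}{n+1}(b-a) = 500.$$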
49,010
Three-Way Anova: What does a significant three way interaction tell you, conceptually?
Your technical interpretation is quite correct. So, let's say that Gatorade was associated with faster mile times than water, and this association was larger in males than in females. The three-way interaction with age group may then tell you, for example, that this association disappeared entirely in the older age group but was still evident in the younger age group, or that it was more pronounced in the older age group; if the drink-by-sex pattern were the same in both age groups, there would be no three-way interaction.
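As a small, fully made-up illustration in R of how such a three-way interaction is fitted and tested (all names, effect sizes, and data are invented for this sketch):

set.seed(1)
d <- expand.grid(drink = c("water", "gatorade"),
                 sex   = c("F", "M"),
                 age   = c("young", "old"),
                 rep   = 1:30)
# gatorade advantage: larger for males, and present only in the young age group
adv <- with(d, (drink == "gatorade") * (0.3 + 0.3 * (sex == "M")) * (age == "young"))
d$mile_time <- 8 - adv + rnorm(nrow(d), sd = 0.3)

fit <- lm(mile_time ~ drink * sex * age, data = d)
anova(fit)   # the drink:sex:age row tests whether the drink-by-sex effect differs across age groups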
49,011
Kernel Mean Embedding relationship to regular kernel functions
To simplify matters, I'll assume the kernel $k$ is bounded. Otherwise, for technical reasons (basically to guarantee that the expectation in the definition of the kernel mean map exists), we need to restrict attention to probability distributions satisfying $$\mathbb{E}_{X\sim P} \sqrt{k(X,X)} <\infty.$$ Let $\mathrm{Prob}(\mathcal{X})$ denote the set of probability measures on $\mathcal{X}$. You can think of $\mathcal{X}$ as essentially a subset of $\mathrm{Prob}(\mathcal{X})$, by identifying each point with the measure that assigns probability $1$ to that point. The main result here is that for a bounded kernel, the map $\phi: \mathcal{X}\rightarrow\mathcal{H}$ can always be extended to a map $\tilde{\phi}: \mathrm{Prob}(\mathcal{X})\rightarrow\mathcal{H}$ which maps probability distributions to vectors in $\mathcal{H}$. Similarly, a bounded kernel on $\mathcal{X}$ can always be extended to a kernel on $\mathrm{Prob}(\mathcal{X})$.

To answer the second question: since the map $\phi$ is often called an embedding (even if it isn't injective), it is common to call $\tilde{\phi}$ the kernel mean embedding. Note that it is $\tilde{\phi}$ that is called an embedding, and not $\mu_X = \tilde{\phi}(P)$. There is no need to work with an RKHS instead of an explicit Hilbert space; however, it is sometimes simpler to do so, and it isn't significantly less general. To study a map $\phi:\mathcal{X}\rightarrow \mathcal{H}$, we don't need to think about the entire space $\mathcal{H}$; it suffices to work with the smallest closed subspace containing the image of $\phi$. Since it follows from the proof of the Moore–Aronszajn theorem that this subspace is isometrically isomorphic to the RKHS with kernel $k(x,y)=\langle \phi(x),\phi(y)\rangle$, we may as well work with an RKHS instead of a general Hilbert space.

There are two natural ways of constructing $\mu_X = \tilde{\phi}(P)$ for a random variable $X\sim P$. The first is to consider $\mathbb{E}\phi(X)$, as in your post. This runs into the issue that we are taking the expectation of a Hilbert-space-valued variable, which is a bit more technical to define than for real-valued variables. However, in the case of an RKHS, the elements of $\mathcal{H}$ are just functions, and it turns out you get the right result by taking expectations pointwise. In other words, $\mu_X$ is the function given by $$\mu_X(t) = \mathbb{E}\phi(X)(t).$$ This expression involves only real-valued expectations, so it is somewhat simpler.

There is an alternate (more technical) approach, which is similar to how the kernel associated to an RKHS $\mathcal{H}$ is usually constructed. For $x\in\mathcal{X}$, define the evaluation functional $ev_x:\mathcal{H}\rightarrow \mathbb{R}$ by $ev_x(f)=f(x)$. Part of the definition of an RKHS is that this functional is bounded, so we can apply the Riesz representation theorem to get some $k_x\in\mathcal{H}$ such that for every $f\in\mathcal{H}$ $$f(x) = \langle k_x, f \rangle.$$ This property is called the reproducing property. The map $\phi$ given by $\phi(x)=k_x$ is the canonical embedding into the RKHS, and the kernel is then constructed as $k(x,y)=\langle k_x,k_y\rangle$. You can mimic this for the expectation functional $f\mapsto \mathbb{E}_{X\sim P} f(X)$. A simple argument involving Cauchy–Schwarz and the condition $\mathbb{E}_{X\sim P} \sqrt{k(X,X)} <\infty$ shows that this functional is bounded, so we can apply the Riesz representation theorem to get some function $\mu_X$ such that for every $f$ $$\mathbb{E}f(X) = \langle \mu_X, f\rangle.$$ We can see explicitly that this gives the same answer as the other construction: $$\mu_X(t) = \langle \mu_X, k_t\rangle = \mathbb{E} k_t(X) = \mathbb{E} \langle k_X, k_t\rangle = \mathbb{E} k_X(t)= \mathbb{E} \phi(X)(t).$$

The distribution of a random variable $X$ is entirely determined by the expectations of functions of $X$. This is still true if you restrict to a suitably large class of functions; many reproducing kernel Hilbert spaces work. You can think of $\mu_X$ as a representation of the distribution of $X$, since for any $f$ in the RKHS it determines $$\langle f, \mu_X\rangle=\mathbb{E}f(X).$$ I think the similarity to kernel density estimation is coincidental. To define the kernel mean embedding, the kernel does not need to have integral equal to $1$ or to be centered near $x=y$. In fact, we can define the kernel mean embedding on more general spaces than $\mathbb{R}^n$ (e.g. strings of text), including some where notions of integrals and pdfs aren't really defined. On the other hand, the kernels in kernel density estimation don't need to be positive semidefinite.
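As a concrete, purely illustrative sketch: the empirical version of the kernel mean embedding replaces the expectation by a sample average, $\hat\mu_P = \frac{1}{n}\sum_i k(x_i,\cdot)$, and inner products between such embeddings reduce to averages of kernel evaluations. In R, with a Gaussian kernel, this gives a plug-in estimate of the squared RKHS distance between two distributions (the squared MMD). All names, samples, and the bandwidth are assumptions of this example.

rbf <- function(a, b, sigma = 1) exp(-outer(a, b, function(u, v) (u - v)^2) / (2 * sigma^2))

set.seed(1)
x <- rnorm(200, mean = 0)    # sample from P
y <- rnorm(200, mean = 0.5)  # sample from Q

# <mu_P, mu_Q> is estimated by the average of k(x_i, y_j); likewise for the other two terms
mmd2 <- mean(rbf(x, x)) - 2 * mean(rbf(x, y)) + mean(rbf(y, y))
mmd2   # larger values indicate embeddings (and hence distributions) that are further apart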
49,012
Are two coin flips conditionally independent if we know that the coin is biased towards heads?
The quoted section is implicitly assuming that the event $C = \{ \theta > 0.5 \}$ is sufficient to fully describe the parameter, and so it attains conditional independence of the observable coin flips (e.g., there may be an assumption that there is only one allowable value of $\theta$ in the biased range). Contrarily, your own analysis is saying that even if $C$ is true, there is still uncertainty about the parameter value, so the coin flips still give information about the underlying parameter $\theta$, and so they remain dependent. Your analysis here is more realistic, and I agree with your assertion that there would still be dependence even once you condition on $C$. This issue has been discussed in detail in O'Neill (2009), which looks at conditional independence and marginal dependence in exchangeable sequences of random variables. You can also find some associated theorems for statistical dependence in coin-flipping in a series of papers on binomial prediction (see O'Neill and Puza 2005; O'Neill 2012; O'Neill 2015). These latter papers discuss the "gambler's fallacy" and show that, under broad conditions, one obtains a predictive advantage by betting on whichever outcome of the coin flip has come up the most in the observed data (to take advantage of information about possible bias).
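A small simulation in R makes the point concrete. Take a continuous prior for $\theta$ (here Uniform(0,1), purely as an assumption of the example), condition on $C = \{\theta > 0.5\}$, and compare $P(H_2 \mid C)$ with $P(H_2 \mid H_1, C)$:

set.seed(1)
n     <- 1e6
theta <- runif(n)                 # prior draws of the bias
keep  <- theta > 0.5              # condition on C
f1    <- rbinom(n, 1, theta)      # first flip
f2    <- rbinom(n, 1, theta)      # second flip

mean(f2[keep])                    # P(H2 | C), about 0.75
mean(f2[keep & f1 == 1])          # P(H2 | H1, C), about 0.78, so the flips remain dependent given C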
49,013
Distribution of gradients across dimensions for neural networks
The exact answer is going to depend greatly on the type of network, the inputs, how it's trained.... For a simple way to see this: If we're at a (local) optimum, the full gradient (across the entire training dataset) will be zero. In the interpolating regime common to modern neural networks, the individual gradient for each training point may even be exactly zero; depending on the loss function and how much you've trained / etc, it might instead be mean zero but approximately normal, etc. At initialization, when we're very far from a solution, the gradient may be extremely similar for different datapoints (and very far from zero). In some particular limits (network becoming infinitely wide, initialized near zero, trained via SGD, no batch normalization, ...), the "neural tangent kernel" regime has an answer here: activations are distributed according to a particular Gaussian process, and their derivatives are too. (When using square loss, this means the final result corresponds to kernel ridge regression / Gaussian process regression with a particular kernel.) See e.g. Appendix 2 of Jacot et al. (2018), Neural Tangent Kernel: Convergence and Generalization in Neural Networks; Lee et al. (2019), Wide Neural Networks of Any Depth Evolve as Linear Models Under Gradient Descent; Appendix D of Arora et al. (2019), On Exact Computation with an Infinitely Wide Neural Net.
49,014
Why do my (coefficients, standard errors & CIs, p-values & significance) change when I add a term to my regression model?
Coefficient change

Let there be some data distributed according to a quadratic curve: $$y \sim \mathcal{N}(\mu = a+bx+cx^2, \sigma^2 = 10^{-3})$$ for instance with $x \sim \mathcal{U}(0,1)$ and $a=0.2$, $b=0$ and $c=1$. Then a linear fit and a quadratic fit will have very different coefficients for the linear term.

set.seed(1)
x <- runif(100, 0, 1)
y <- rnorm(100, mean = 0.2 + 0*x + 1*x^2, sd = 10^-1.5)
plot(x, y, ylim = c(0, 1.5), pch = 21, col = 1, bg = 1, cex = 0.7)
mod1 <- lm(y ~ x)
mod2 <- lm(y ~ poly(x, 2, raw = TRUE))
xs <- seq(0, 10, 0.01)
lines(xs, predict(mod1, newdata = list(x = xs)), lty = 2)
lines(xs, predict(mod2, newdata = list(x = xs)), lty = 1)
legend(0, 1.5, c("y = 0.009 + 1.023 x", "y = 0.193 + 0.016 x + 0.994 x^2"), lty = c(2, 1))

Correlation

The reason is that the regressors $x$ and $x^2$ correlate. The coefficient estimates computed with a linear regression are not a simple correlation (a perpendicular projection onto each regressor separately): $$\hat{\beta} \neq \alpha = \mathbf{X^t} y$$ (this would give the coefficients $\alpha_1$ and $\alpha_2$ in the image below, and those coordinates/coefficients/correlations do not change when you add or remove other regressors). Using the correlation/projection $\mathbf{X^t}y$ is wrong because, if there is correlation between the vectors in $\mathbf{X}$, some vectors overlap. The overlapping part is redundant and would be counted twice, so the predicted value $\hat{y} = \alpha \mathbf{X}$ would be too large. For this reason there is a correction with the term $(\mathbf{X^t}\mathbf{X})^{-1}$ that accounts for the overlap/correlation between the regressors. This may be clearer in the image below, which stems from this question: Intuition behind $(X^TX)^{-1}$ in closed form of w in Linear Regression

Intuitive view

So the regressors $x$ and $x^2$ both correlate with the data $y$, and each on its own is able to express the variation in the dependent data. But when we use them together, we do not add them according to their separate individual effects (their correlations with $y$), because that would be too much. If we use both $x$ and $x^2$ in the regression, then the coefficient for the linear term $x$ should be very small, matching the true relationship. However, when we do not include the quadratic term $x^2$ in the regression (or otherwise bias its coefficient), the coefficient for $x$, which correlates somewhat with $x^2$, will partly take over that role, and the estimate of the linear coefficient changes.

Standard error change (and confidence intervals and p-values)

The estimation errors of the coefficients may be correlated, leading to very large standard errors for some coefficients when the corresponding regressors correlate strongly with others. The matrix $(X^TX)^{-1}$ describes this correlation.

Error in the regression line

The image below shows intuitively how this changes when other regressors are added. The intercept is the point where the regression line crosses $x=0$. On the left, the error of the intercept is the error of the mean of the population. On the right, the error of the intercept is the error of the regression line's intercept.

Confidence regions for correlated parameters

The next image displays the confidence regions (contrasting with confidence intervals) of the above regression in a 2-D plot. Here the correlation between the parameters is taken into account. The ellipse shows the confidence region, which is based on a multivariate distribution of the slope and intercept that may be related via a correlation matrix. For illustration an alternative type of region is also shown: the box, which is based on two univariate distributions assuming independence (with confidence level $\sqrt{0.95}$ for each single variable). By changing the model from $y = a + bx$ to the shifted model $y = a + b(x-35.5)$, we see that the correlation between the slope and intercept changes. Now the error of the "intercept" coincides with the error of the regression line around the point $x=35.5$, which, as the image above shows, is smaller.

#used model and data
set.seed(1)
xt <- seq(0, 40, 0.1)
x <- c(1:10) + 30
y <- 10 + 0.5*x + rnorm(10, 0, 3)

See also:
why does the same variable have a different slope when incorporated into a linear model with multiple x variables
regression with multiple independent variables vs multiple regressions with one independent variable
Why is the intercept in multiple regression changing when including/excluding regressors?
Why and how does adding an interaction term affects the confidence interval of a main effect?
Why is the intercept changing in a logistic regression when all predictors are standardized?
Intuition behind $(X^TX)^{-1}$ in closed form of w in Linear Regression
Does adding more variables into a multivariable regression change coefficients of existing variables?
Estimating $b_1 x_1+b_2 x_2$ instead of $b_1 x_1+b_2 x_2+b_3x_3$
Why do regression coefficients change when excluding variables?
Does the order of explanatory variables matter when calculating their regression coefficients?
Intercept changing after adding an interaction
49,015
Why do my (coefficients, standard errors & CIs, p-values & significance) change when I add a term to my regression model?
The ordinary least squares solution is simply given by: $$\beta = (X'X)^{-1}X'y$$ Let's imagine we augment $X_{n\times p}$ with one or more variables $\tilde X_{n\times \tilde p}$, appending their values as columns, and call the resulting matrix ${X^*}_{n\times p^*}$, $p^* = p + \tilde p$. Now, given enough degrees of freedom, the coefficients will be given by: $$\beta^* = ({X^*}'{X^*})^{-1}{X^*}'y$$ If $\left\{A\right\}_{k,:}$ represents the $k$-th row of a matrix $A$, the $k$-th coefficient in each vector corresponds to: $$\beta_k = \left\{(X'X)^{-1}\right\}_{k,:}X'y$$ $$\beta^*_k = \left\{({X^*}'{X^*})^{-1}\right\}_{k,:}{X^*}'y$$ Since $X^* \neq X$, there is no reason for $\beta_k$ and $\beta^*_k$ to be equal. Notice, however, that they can still be equal in a specific case: if the added variables are orthogonal to the original independent variables, then $\beta_k = \beta^*_k$ for $k\leq p$. This result can be derived as follows: $${X^*}'{X^*}=\left[ \matrix{ {X}'{X} & X' \tilde X \\ \tilde X' X & \tilde X' \tilde X } \right]$$ Rows of the inverse of this matrix left-multiply ${X^*}'y$. If $X' \tilde X = \tilde X' X = 0$, the matrix becomes block diagonal and can thus be inverted blockwise: $$({X^*}'{X^*})^{-1}=\left[ \matrix{ {X}'{X} & \color{red}{0} \\ \color{red}{0} & \tilde X' \tilde X } \right]^{-1}= \left[ \matrix{ ({X}'{X})^{-1} & 0 \\ 0 & (\tilde X' \tilde X) ^{-1} } \right]$$ The full solution becomes: $$ \begin{align} \beta^* &= ({X^*}'{X^*})^{-1}{X^*}'y=\\ &= \left[ \matrix{ ({X}'{X})^{-1} & 0 \\ 0 & (\tilde X' \tilde X) ^{-1} } \right] \left[\matrix{X'y \\ {\tilde X'y}}\right]=\\ &=\left[\matrix{({X}'{X})^{-1}X'y \\ {(\tilde X' \tilde X) ^{-1}\tilde X'y}}\right]=\left[\matrix{\beta \\ \tilde \beta}\right] \end{align} $$ thus preserving the identity between the two results, since the entries pertaining to $\tilde X$ do not affect the coefficients pertaining to the original $X$. Since the coefficients change in general, so do CIs and p-values. A more in-depth look at how to express things in terms of the hat matrix leads to the same conclusions.
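A quick numerical illustration of the orthogonality result in R. All data are simulated for this sketch; the orthogonal column is obtained by residualizing, so it is orthogonal to both the intercept and $x_1$.

set.seed(1)
n  <- 100
x1 <- rnorm(n)
y  <- 1 + 2 * x1 + rnorm(n)

x2      <- 0.8 * x1 + rnorm(n)     # correlated with x1
x2_orth <- resid(lm(x2 ~ x1))      # orthogonal to the intercept and to x1

coef(lm(y ~ x1))                   # original coefficients
coef(lm(y ~ x1 + x2))              # the x1 coefficient changes
coef(lm(y ~ x1 + x2_orth))         # intercept and x1 coefficient are unchanged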
49,016
Where is the measure theoretic probability theory actually applied?
Two examples:

All of functional analysis, which I guess you will know underlies a lot of machine learning, relies on measure theory. There is no "undergraduate" probability measure that describes a distribution over function spaces, as far as I'm aware.

Statistical analysis of nonlinear dynamical systems: there is generically at least one starting condition for a dynamical system such that the infinite-time evolution of the system lies on a set of measure zero in state space. Statistical analysis of such systems can go badly wrong without rigorous measure theory.
49,017
Relationship between completeness and sufficiency
A complete sufficient statistic is a minimal sufficient statistic whenever a minimal sufficient statistic exists. Suppose that for a family of distributions parameterized by $\theta$ there exists a minimal sufficient statistic $S(X)$ and a complete sufficient statistic $T(X)$ based on the data $X$. We show that $T$ is also minimal sufficient. As $S$ is minimal sufficient and $T$ is sufficient, by definition of minimal sufficiency there exists a measurable function $h$ such that $S=h(T)$. Consider the function $g(T)=T-E_{\theta}[T\mid S]=T-E[T\mid S]$; note that because $S$ is sufficient, the conditional expectation $E_{\theta}[T\mid S]$ does not depend on $\theta$, and because $S=h(T)$ it is itself a function of $T$, so $g(T)$ is well defined and $E_{\theta}[g(T)]=0$ for every $\theta$. As $T$ is complete, this implies $g(T)=0$ almost everywhere. That is, $$T=E[T\mid S]\quad,\text{a.e.}$$ So $T$ is a function of $S$. And as $S$ is a function of any other sufficient statistic, so is $T$. Therefore $T$ is minimal sufficient and equivalent to $S$.
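A standard concrete instance (not in the original answer): for $X_1,\ldots,X_n \overset{iid}{\sim} N(\theta,1)$, the statistic $T(X)=\bar X$ is sufficient and complete (it is the natural sufficient statistic of a full-rank exponential family), so by the argument above it is automatically minimal sufficient.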
49,018
Number of Causal Assumptions in an Overview by Pearl
None of the below causal arrows appear in Fig. 2(a). I am assuming time flows from top left to bottom right (i.e. so that $Y \to X$ cannot be a causal assumption because causes must precede effects.). $U_{Z} \to U_{X}$ $U_{Z} \to U_{Y}$ $U_{Z} \to X$ $U_{Z} \to Y$ $U_{X} \to U_{Y}$ $U_{X} \to Y$ $Z \to Y$ This means that the causal world in Fig. 2(a) assumes there are none of the above seven direct causal effects. By contrast, each of the arrows actually appearing in the graph (e.g., $U_{Z} \to Z$, etc.) are assumptions of direct causal effects. EDIT: Based on correspondence with Judea Pearl. [Judea's quote is edited for the grammar/typos common in a brief email exchange.] I had in mind the following $U_{Z} \longleftrightarrow U_{X}$ $U_{Z} \longleftrightarrow U_{Y}$ $U_{X} \longleftrightarrow U_{Y}$ $Z \to Y$ $X \to Z$ $Y \to Z$ $Y \to X$ The missing arrows you listed e.g., $U_{X} \to Y$ are implied by the above, because $U_{Y}$ is defined as everything that affects $Y$ when $X$ is held constant.
49,019
Number of Causal Assumptions in an Overview by Pearl
An exchange of comments with @Alexis (and their correspondence with Pearl himself) cleared things up for me. I can summarize as follows: For the exogenous variables $U_X, U_Y, U_Z$ we only allow/count double arrows (just... because?). For these variables we have three missing (double) arrows, which are $U_X \leftrightarrow U_Y, U_Z \leftrightarrow U_Y$ and $U_X \leftrightarrow U_Z$. For the endogenous variables $X,Y,Z$, we count only directed arrows (again, just because) and we have four missing such arrows, which are $X\to Z, Y\to Z, Y \to X$ and $Z\to Y$. We do not count arrows such as $U_X \to Z$ since $U_Z$ is defined as everything that affects $Z$ outside of the other endogenous variables ($X,Y$, in this case), so no other influence is allowed, specifically not $U_X$. This count gives us seven missing arrows total, as the text suggests.
49,020
What proportion of missing data can be considered acceptable for inference with a mixed-effects model
Does this level of missingness catastrophically reduce the value of the inferences you would make from a longitudinal mixed effects model? Not necessarily. A great deal depends on the reasons for dropout. If the data are missing at random (MAR), then a suitable multiple imputation approach can result in unbiased, or at least much less biased, estimates. Here, by "suitable" I mean a multiple imputation scheme that handles the clustering of data within individuals. If the data are missing completely at random (MCAR), then unbiased point estimates may be obtained from a mixed model using complete case analysis, but standard errors will be biased upwards. Again, a suitable multiple imputation approach will reduce this. On the other hand, if the data are missing not at random (MNAR), then you may have a very difficult time ahead. A good review of approaches to this problem can be found here: Huque, M.H., Carlin, J.B., Simpson, J.A. and Lee, K.J., 2018. A comparison of multiple imputation methods for missing data in longitudinal studies. BMC medical research methodology, 18(1), p.168. https://www.ncbi.nlm.nih.gov/pubmed/30541455
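As a minimal sketch (not part of the original answer) of what such a workflow can look like in R, assuming a long-format data frame dat with hypothetical columns score, time and id; note that the default imputation methods in mice ignore the clustering, so in practice a multilevel-aware method such as those compared in the cited review would be preferable:

library(mice)
library(lme4)
library(broom.mixed)   # needed so that pool() can tidy lmer fits

imp  <- mice(dat, m = 20, seed = 123, printFlag = FALSE)   # 20 imputed data sets (default methods)
fits <- with(imp, lmer(score ~ time + (1 | id)))           # fit the mixed model to each
summary(pool(fits))                                        # combine estimates with Rubin's rules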
49,021
Why do output coefficients not resemble true coefficients in a linear model?
Day three

The elders of the statistics guild have discovered a problem in the divine parameters. There is no single solution possible because the system is over-determined. We can scale the values of the different groups and the results will remain true. For instance, when we divide the 'constant' coefficient by two and at the same time multiply the 'home' coefficients by two, the result remains unchanged:

Harvest = (Constant/2) * (2*Home) * Sex * Rank * Noise = Constant * Home * Sex * Rank * Noise

The only divine values that matter are the ratios of coefficients. We can see this in the divine R-code, which only gives us $k-1$ coefficients for each characteristic variable with $k$ levels.

> model <- lm(log(Y)~1+Home+Gender+Rank, data=dune)
>
> c <- exp(coef(model))
> c
  (Intercept) HomeHorekonen     HomeOrdos       GenderM    RankJunior
    1.4199111     1.6218721     2.2666767     1.2001998     0.7001083
  RankVeteran
    0.7786371
>
> #comparing gender
> c['GenderM'] #model
GenderM
 1.2002
> 1.1/0.9 #divine
[1] 1.222222
>
> #comparing homes
> c['HomeHorekonen'] #model
HomeHorekonen
     1.621872
> 1/0.6 #divine
[1] 1.666667
> c['HomeOrdos'] #model
HomeOrdos
 2.266677
> 1.4/0.6 #divine
[1] 2.333333
>
> #comparing rank
> c['RankJunior'] #model
RankJunior
 0.7001083
> 0.9/1.3 #divine
[1] 0.6923077
> c['RankVeteran'] #model
RankVeteran
  0.7786371
> 1/1.3 #divine
[1] 0.7692308

Comparing multiplicative versus linear model

Note that there are different ways to make a 'multiplicative model/relation' (the deterministic part can be multiplicative, the error term, or both). Sometimes people just compute a regular linear model for the logarithm of the response variable, but this is not the only way. When we compare with a regular linear model (a deterministic linear part $X\beta$ and a homogeneous error term $\epsilon \sim N(0,\sigma^2)$): $$Y = X\beta + \epsilon$$ then modelling the logarithm of the response variable as such a linear model would look like this: $$log(Y) = X\beta + \epsilon$$ which can be transformed into: $$Y = e^{X\beta + \epsilon}$$ and this is different from $$Y = e^{X\beta} + \epsilon$$ So it matters whether the error term is inside or outside the logarithmic transformation. Basically you can make the following four different models, based on whether you use a logarithmic/multiplicative model and whether you assume the error term to be heterogeneous (included in the log transformation) or homogeneous (not included in the log transformation).
In R you can compute these four combinations by using:

# copy from your data
dune <- structure(list(
  Y = c(2.82, 1.02, 4.15, 2.78, 2.07, 3.2, 2.16, 2.25, 2.48, 1.1, 1.21, 1.61, 1.07, 1.06, 1.74, 1.14, 3.41, 1.41, 2.59, 1.98, 2.01, 2.98, 4.18, 1.04, 2.77, 1.88, 2.11, 1.47, 1.15, 1.69, 1.47, 2.15, 1.28, 1.91, 2.23, 2.5, 1.75, 2.22, 2.88, 1.62, 1.67, 2.43, 0.92, 2.01, 1.09, 2.12, 3.29, 2.17, 3.17, 2.83, 1.81, 3.2, 1.91, 0.92, 2.32, 1.6, 1.52, 2.4, 1.47, 1.51, 2.58, 1.25, 2.22, 1.22, 1.2, 1.3, 2.5, 2.23, 3.98, 2.26, 3.16, 1.25, 2.2, 3.81, 1.24, 1.66, 2.28, 2.84, 1.01, 1.23),
  Home = structure(c(3L, 1L, 3L, 2L, 3L, 3L, 2L, 3L, 3L, 1L, 1L, 2L, 1L, 1L, 1L, 1L, 3L, 1L, 3L, 2L, 2L, 3L, 3L, 1L, 2L, 2L, 2L, 1L, 1L, 1L, 2L, 2L, 1L, 2L, 3L, 2L, 2L, 2L, 3L, 1L, 2L, 3L, 1L, 2L, 1L, 3L, 3L, 2L, 3L, 3L, 1L, 3L, 2L, 1L, 2L, 1L, 1L, 2L, 1L, 2L, 3L, 1L, 2L, 1L, 1L, 1L, 3L, 3L, 3L, 2L, 3L, 1L, 3L, 3L, 1L, 2L, 3L, 3L, 1L, 1L), .Label = c("Atreides", "Horekonen", "Ordos"), class = "factor"),
  Gender = structure(c(2L, 1L, 2L, 2L, 1L, 2L, 2L, 1L, 1L, 1L, 2L, 1L, 1L, 1L, 2L, 1L, 2L, 2L, 1L, 1L, 2L, 2L, 2L, 1L, 2L, 2L, 2L, 1L, 2L, 2L, 1L, 2L, 2L, 1L, 1L, 2L, 1L, 1L, 2L, 2L, 1L, 1L, 1L, 2L, 1L, 1L, 2L, 2L, 1L, 2L, 2L, 1L, 2L, 1L, 1L, 2L, 1L, 1L, 1L, 1L, 2L, 2L, 2L, 2L, 1L, 2L, 1L, 1L, 2L, 1L, 1L, 2L, 1L, 2L, 2L, 1L, 1L, 2L, 1L, 2L), .Label = c("F", "M"), class = "factor"),
  Rank = structure(c(3L, 2L, 1L, 1L, 2L, 3L, 3L, 2L, 3L, 3L, 2L, 2L, 3L, 2L, 1L, 3L, 1L, 3L, 3L, 3L, 2L, 3L, 1L, 3L, 1L, 2L, 2L, 1L, 2L, 1L, 2L, 3L, 3L, 3L, 2L, 1L, 3L, 1L, 2L, 1L, 2L, 3L, 2L, 3L, 3L, 2L, 3L, 3L, 1L, 2L, 1L, 1L, 2L, 2L, 1L, 1L, 1L, 1L, 1L, 2L, 2L, 3L, 3L, 2L, 3L, 3L, 2L, 2L, 1L, 1L, 1L, 2L, 2L, 1L, 3L, 2L, 2L, 3L, 2L, 2L), .Label = c("Elite", "Junior", "Veteran"), class = "factor")),
  class = "data.frame", row.names = c(NA, -80L))

# stuff to compute the models in various ways
#
# logarithmic deterministic model
# homogeneous error terms
modelglm <- glm(Y ~ 1+Home+Gender+Rank, family=gaussian(link="log"), data=dune)
modelnls1 <- nls(Y ~ a * c(1,b1,b2)[Home] * c(1,c1)[Gender] * c(1,d1,d2)[Rank], start = c(a=1,b1=1,b2=1,c1=1,d1=1,d2=1), data=dune)
modelnls2 <- nls(Y ~ exp(a + c(0,b1,b2)[Home] + c(0,c1)[Gender] + c(0,d1,d2)[Rank]), start = c(a=1,b1=1,b2=1,c1=1,d1=1,d2=1), data=dune)

# logarithmic deterministic model
# heterogeneous error terms
modellm <- lm(log(Y)~1+Home+Gender+Rank, data=dune)

# linear deterministic model
# heterogeneous error terms
modelquasi <- glm(Y~1+Home+Gender+Rank, family=quasi(link="identity", variance="mu"), data=dune)

# linear deterministic model
# homogeneous error terms
modelind <- lm(Y~1+Home+Gender+Rank, data=dune)

#### stuff to create the plots below
plot(exp(predict(modellm)), dune$Y,
     ylab = "observed values", xlab = "estimated mean",
     cex=0.7, pch=1, col=rgb(0,0,0,0.5), bg=rgb(0,0,0,0.5))
lines(c(0,10), c(0,10))
title(expression(Y == exp(X * beta + epsilon)))

plot((predict(modelind)), dune$Y,
     ylab = "observed values", xlab = "estimated mean",
     cex=0.7, pch=1, col=rgb(0,0,0,0.5), bg=rgb(0,0,0,0.5), log="")
lines(seq(0.1,10,0.1), seq(0.1,10,0.1))
title(expression(Y == X * beta + epsilon))

The choice between homogeneous and heterogeneous error terms does not matter much for the predicted values themselves, but the error estimates for those predictions will differ (and given that the residuals increase with larger values of $Y$, the use of heterogeneous error terms would not be so bad).
More important is the difference between using a multiplicative model and an additive model: you can see that the linear model (on the right) is biased, with low and high values underestimated and middle values overestimated. You can also note that the curve in the right image looks like a nice smooth function, which might make you wonder whether some function could be used to adapt the fit and make it better, for instance $Y = f(X\beta) + \epsilon$. And indeed, what you call a 'multiplicative model' is just an additive model wrapped in an exponential function: $$Y = e^{c_0 + c_{Home} + c_{Sex} + c_{Rank}} + \epsilon = d_0 \, d_{Home} \, d_{Sex} \, d_{Rank} + \epsilon$$ with the relationship between the coefficients $d$ and $c$ being $d = e^{c}$. See the correspondence of the coefficients for the three different implementations of the model of the first category:

> coefficients(modelnls1)
        a        b1        b2        c1        d1        d2
1.4118260 1.6168696 2.2726165 1.2082349 0.7037927 0.7817258
> exp(coefficients(modelnls2))
        a        b1        b2        c1        d1        d2
1.4118251 1.6168700 2.2726173 1.2082353 0.7037931 0.7817260
> exp(coefficients(modelglm))
  (Intercept) HomeHorekonen     HomeOrdos       GenderM    RankJunior
    1.4118257     1.6168696     2.2726169     1.2082349     0.7037929
  RankVeteran
    0.7817259
49,022
Bootstrap confidence interval on heavy tailed distribution
a) Distributions with heavy tails may have infinite variance, or mean (Ex: Cauchy distribution)

True.

b) Heavy tailed means that there are a few outliers that are very different from the most of the samples. And these outliers have non-negligible impact on the future statistic procedures.

Partly true. This might be how it looks in a realization drawn from the distribution, but the outliers are not part of a discrete/separable component (the distribution is typically unimodal, meaning the tails decay gradually - just really slowly).

c) Log-normal or exponential distributions have heavy tails

Partly true. (Updated with info from @glen_b's comment.) These distributions are both heavier-tailed than the Gaussian distribution, but the exponential is not heavy-tailed enough to cause difficulties. The log-normal has a finite variance, so is theoretically OK, but can cause problems. Pareto and Cauchy (and other extreme $t$ distributions, e.g. Student $t$ with 2 df) are in the "highly problematic" category.
49,023
Bootstrap confidence interval on heavy tailed distribution
Here's what I understand: a) Distributions with heavy tails may have infinite variance, or mean (Ex: cauchy distribution) b) Heavy tailed means that there are a few outliers that are very different from the most of the samples. And these outliers have non-negligible impact on the future statistic procedures. c) Log-normal or exponential distributions have heavy tails Here's the question: Are my understandings (a, b, c) right? Can Bootstrapping be used to estimate confidence interval of mean or variance of lognormal, or exponential population? Why does Bootstrapping fail in case of heavy tail?

1a) They can have an undefined mean or variance. Under some specifications, that is represented as infinite.

1b) It depends upon the procedure being used. If you have a Cauchy distribution and you are using the sample mean to estimate the center of location then the answer is yes. If you have a Cauchy distribution and a large enough sample and are using a Bayesian method or Rothenberg's estimator then the answer is 'no.' It could but it need not.

1c) Most definitions of heavy tails are those greater than the exponential distribution, so the exponential is not a heavy-tailed distribution; see Bryson, M. (1974). Heavy Tailed Distributions: Properties and Tests. Technometrics 16(1):61-68 (February 1974).

2) Yes, bootstrapping can be used for either. However, it somewhat begs the question as to why you would use it for either. If you really believed those were the distributions, then there are good parametric tools for both.

3) Yes, sort of. An example of where resampling is used would be Theil's regression across two Cauchy variables. There is a special limited sampling case for Theil's regression that is basically a bootstrap. The issue is that you are not seeking a mean as none exists. You would be seeking the median of the joint set. There is nothing intrinsic to bootstrapping that prohibits its use with heavy tails but you cannot use it to find something that does not exist, such as a variance or a mean. As with any problem where you have fewer good properties, the usefulness of bootstrap will be greatly reduced.
49,024
Bootstrap confidence interval on heavy tailed distribution
Bootstrapping the sampling distribution of a sample mean will work (in the sense of being consistent as $n$ diverges) only if a Central Limit Theorem applies, so existence of the variance is practically required. See Mammen, E. (1992), When Does Bootstrap Work? Asymptotic Results and Simulations, Springer. The intuition is indeed that otherwise individual observations remain influential. In most counterexamples, the bootstrap distribution fails to converge anywhere.
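As a small illustrative simulation (not part of the original answer), compare the bootstrap percentile interval for the mean of log-normal data, where a CLT applies, with that for Cauchy data, where no mean exists; rerunning the Cauchy case gives wildly different intervals:

set.seed(42)
x_lnorm  <- rlnorm(500)      # finite variance, CLT applies
x_cauchy <- rcauchy(500)     # no finite mean or variance

boot_means <- function(x, B = 2000) replicate(B, mean(sample(x, replace = TRUE)))

quantile(boot_means(x_lnorm),  c(0.025, 0.975))   # reasonably stable percentile interval
quantile(boot_means(x_cauchy), c(0.025, 0.975))   # erratic; changes drastically between runs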
49,025
Confusion on how skip gram implementation is formulated
1 - The architecture in the CS224n course lecture notes is correct. The likelihood is given by the product of the probabilities $ \Pi_{w\in \rm{Text}} \Pi_{c \in C(w)} P(c | w)$ (where $C(w)$ is the context of the target word). Note that I have added the product over all the words in the corpus (see details https://arxiv.org/abs/1402.3722). Taking the log of the likelihood to define the loss function, you end up with a sum over all the target words. Your confusion with the first source is related to your comment "Based on the first link I posted, each training example has only one context word". What they mean by having "training examples" of the form (target, context), such as (The, quick), is that the likelihood decomposes into these $P(c | w)$ terms.

2 - As commented previously, the loss function is a sum over all the target words (and for each target word a sum over its context), so this should resolve your confusion regarding "Unlike a standard neural network, in which we form the cost function by taking the average of loss function over all training examples...".

3 - I agree with you; can you give more details?
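To make the first point concrete, here is a small R sketch (not part of the original answer; the sentence and window size are made up) that enumerates the (target, context) pairs whose conditional probabilities $P(c \mid w)$ are multiplied together in the skip-gram likelihood:

sentence <- c("the", "quick", "brown", "fox", "jumps")
window   <- 2
pairs <- do.call(rbind, lapply(seq_along(sentence), function(i) {
  ctx <- setdiff(max(1, i - window):min(length(sentence), i + window), i)
  data.frame(target = sentence[i], context = sentence[ctx])
}))
pairs  # each row contributes one factor P(context | target) to the likelihood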
49,026
Confusion on how skip gram implementation is formulated
I also had similar confusion in the past so I made a skip-gram model demo in this GitHub repo in javascript. Hopefully, it can help people to understand how the skip-gram model works by visualizing it.
49,027
How does Fisher calculate his $p$-value?
Fisher's approach, in a fully parametric framework, was to reduce the data $X$ to a (one-dimensional) statistic sufficient, or conditionally sufficient, for the parameter of interest $\theta$, & to base inference on its distribution under the null hypothesis $\theta=\theta_0$. Typically he used the (or a) maximum-likelihood estimate $\hat\theta(X)$ (in any case the MLE, when unique, will be a one-to-one function of any one-dimensional sufficient statistic when there is one); though I don't recall any explicit discussion, viewing the maximum-likelihood estimate $\hat\theta(X_1)$ as more extreme than $\hat\theta(X_2)$ because it's further away from $\theta_0$ in the same direction follows naturally enough. See Fisher (1934), Proc. Royal Soc. Lond. A, 144 ,"Two New Properties of Mathematical Likelihood", § 2.6, for his emphasis on the connection between (maximum-likelihood) estimation & significance testing. He doesn't seem to have given a great deal of thought to the calculation of p-values for two-tailed tests (at least for test statistics having discrete distributions). Yates (1984), JRSS A, 147, "Tests of Significance for 2x2 Contingency Tables", p. 444, quotes Fisher's (1946) reply to a letter from D.J Finney asking about two-tailed p-values for Fisher's Exact Test: I believe I can defend the simple solution of doubling the total probability, not because it corresponds to any discrete subdivision of cases of the other tail, but because it corresponds with halving the probability, supposedly chosen in advance, with which the one observed is to be compared. [...] How does this strike you? On the face of it this argument belongs more to the Neyman – Pearson approach. Fisher (1973), Statistical Methods & Scientific Inference, pp 49 – 50, draws a distinction between testing a "general hypothesis"—a model—as a whole, & testing for a particular value of one of its parameters. In the latter case he reiterates the approach above; in the former his advice is this: In choosing the grounds upon which a general hypothesis should be rejected, personal judgement may & should properly be exercised. The experimenter will rightly consider all points on which, in the light of current knowledge, the hypothesis may be imperfectly accurate, & will select tests, so far as possible, sensitive to these possible faults, rather than to others. Which doesn't seem poles apart from the approach of stipulating an alternative hypothesis precisely & basing your choice of test statistic on considerations of power.
49,028
Can anyone suggest a distribution for this histogram
I collected $2^{20}$ values from a unit normal process, did the FFT and binned the magnitudes. Then overplotted with a Rayleigh distribution: I did no scaling on anything, because I was working fast, but I will go back and do it.
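For anyone who wants to reproduce this quickly, here is a minimal R sketch (not part of the original answer, and using $2^{16}$ points rather than $2^{20}$ for speed) that generates white Gaussian noise, takes the FFT magnitudes, and overlays the Rayleigh density; the scaling by $\sqrt{n/2}$ reflects that the real and imaginary parts of an unnormalized FFT of unit-variance noise each have variance $n/2$:

set.seed(7)
n <- 2^16
z <- rnorm(n)                                  # unit normal process
m <- Mod(fft(z))[2:(n/2)]                      # magnitudes, dropping the DC and Nyquist bins
hist(m / sqrt(n/2), breaks = 100, freq = FALSE,
     main = "scaled FFT magnitudes", xlab = "magnitude")
curve(x * exp(-x^2 / 2), add = TRUE, col = "red", lwd = 2)  # Rayleigh(sigma = 1) density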
49,029
Can anyone suggest a distribution for this histogram
So I have understood my problem. This is basically a consequence of taking the FFT of time-transient data and taking the absolute value of it. The FFT spectrum analyser device actually spits out the absolute value -- so the phase and sign information of the original transient is LOST. You can prove this simply by generating a random list of normally distributed numbers and taking the FFT of it. Then take the absolute value and plot it as a histogram. You get exactly the same distribution as I have shown in my question. It would still be nice to know what the actual distribution of this data is -- as in the shape of it. But I can basically reconstruct my original noise distribution and verify that it is indeed Gaussian distributed.
49,030
Coefficient of determination relationship?
$$R^2 = 1-\frac{SSRES}{SSTOT}$$ When there is no residual variation ($SSRES=0$), the regression line fits the data perfectly and $R^2$ is 1, whereas when there is large residual variation, $\frac{SSRES}{SSTOT}$ approaches 1 and so $R^2$ approaches 0.

what do we say the relationship between X and Y is in this case given the coefficient of determination?

In this case (simple linear regression) $R$ is the Pearson correlation coefficient between the two variables, and also the standardized regression coefficient, whereas $R^2$ measures the proportion of variability explained by the model. Since your regression equation has a coefficient for $X$ of 0.006 and $R^2$ is 0.3, it is obvious that the two variables are measured on different scales. If you standardized the two variables, $R$ and the coefficient for $X$ would be the same. See here for further details. So the interpretation is that there is a positive relationship between X and Y, that for every increase of 1 unit in $X$ the model predicts a 0.006 unit increase in $Y$, and that the model explains 30% of the variability in $Y$. It would be a good idea to plot your data and consider the assumptions of the model (especially the assumption of linearity), and if you intend to use the model for inference then further checks should be made.
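A quick numerical check (not part of the original answer, with made-up data) that in simple linear regression the standardized slope equals the Pearson correlation $R$, and that $R^2$ is its square:

set.seed(3)
x <- rnorm(200)
y <- 2 + 0.5 * x + rnorm(200)
fit <- lm(y ~ x)
cor(x, y)                            # Pearson correlation R
coef(lm(scale(y) ~ scale(x)))[2]     # standardized slope: the same value
summary(fit)$r.squared               # equals cor(x, y)^2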
49,031
Coefficient of determination relationship?
I think it's important to consider what regression is doing. Then the coefficient of determination makes sense. Let's say that we collect some data on the heights of people. From our data, we find a mean of 5'2" with a middle range (Q1 to Q3) of 4'2" to 6'2". Given a new person, what height do you guess? Depending on your application, the mean may or may not be what you're looking for, but let's say it is, since OLS regression predicts conditional means. You could guess 5'2" with a range of 4'2" to 6'2", but that's such a wide range! You kind of have no idea how tall this random subject will be. However, in the absence of any other knowledge, you know that the mean value gets the right answer on average, so you guess the mean. You get the answer wrong--by a lot. However, you could have collected other information about the people whose heights you measured. I would expect height differences between men and women. Certainly I expect height differences at different ages. Now you know that the person whose height you have to guess is a 40-year-old man. You go look at your subjects with known height and don't have any 40-year-old men, but you have men who are 39, 41, and 42, who average 5'9" with a range of 5'8" to 5'10". You have much more confidence in your answer of 5'9". In fact, the subject was 5'8". By doing regression on age and gender, you have decreased your error from 6" to 1". The goal of regression is to decrease that variability by using other information. That other information forms the predictive variables in your model. Regression then predicts the mean of a conditional distribution, conditioned on that other information (e.g. male and 40 years old). Let's get back to the formula for the coefficient of determination. The total sum of squares is by how much you miss the correct value by guessing the average of all observations. The sum of squared errors is by how much you miss the right answer by predicting the mean of the conditional distribution. The regression sum of squares is by how much you decrease your error by accounting for the additional information. Instead of naively guessing the overall mean, you use what you know about the subject and tighten up your guess. When we say that the coefficient of determination is SSReg/SSTotal, we are saying what percentage of the variability in the observations is accounted for by the additional information (predictive variables). If we have a conditional distribution with little variability, then we can make a very tight guess about what a new observation would be.
49,032
What is the relation between a surrogate function and an acquisition function?
I think of an acquisition function as describing the utility of the point to be evaluated next in the Bayesian optimization framework. To give more details, let's think about the general concept of Bayesian optimization and the setting in which it is usually applied.

Consider a black-box function $f$ which is expensive to evaluate, and suppose we want to find the optimal point of $f$ over a search space $X$ with a minimum number of function evaluations. Since $f$ is a black box, we model $f$ with a Gaussian process (GP) based on some assumptions (captured in the type of kernel: RBF, periodic, etc.) and update the GP iteratively based on new evaluations of $f$. Therefore, the GP acts as a cheap surrogate for $f$. Here is pseudocode for the complete process:

Initialize a GP model $M$
For the maximum number of iterations allowed:
    Find the next point to evaluate ($x^*$)
    Update the GP based on $x^*$
Return the best evaluated point as the optimal point

All parts other than "Find the next point to evaluate ($x^*$)" are straightforward here. Now, how should we pick this point at each iteration? One idea is to pick it randomly from the search space, but that doesn't seem very insightful. Remember, our GP learns more and more about the function as we evaluate more points. Therefore, we should somehow use the information contained in the GP to pick the next point. Here comes the acquisition function! An acquisition function uses the GP's information to score the utility of a point to be evaluated next in the above process. Intuitively, we can think of an acquisition function as trying to figure out the value of points in $X$ as "potential optimal points", based on the information contained in the GP.

Let's try to come up with an acquisition function ourselves. GPs allow us to compute the predictive mean $\mu(x)$ of a point $x$ in the search space $X$. Should we go ahead and pick the point with maximum $\mu$, then? No, this would place too much confidence in the current state of our GP. Remember, our GP is learning more about the function as we evaluate more points. We should account for the fact that the current $\mu$ might be a bad approximation of the value at some points. This is captured by the predictive standard deviation $\sigma(x)$ given by the GP. Therefore, we should also allow our acquisition function to explore a bit. Let's make a trade-off between $\mu$ and $\sigma$ with a parameter $\beta$. And there we have it, an acquisition function given by $\mu(x)+ \beta \sigma(x)$. This is one of the most popular acquisition functions in BO, known as GP-UCB [1].

Some of the arguments might be handwavy here, but they were mostly meant for intuition rather than technical rigor.

[1] Srinivas et al., Gaussian Process Optimization in the Bandit Setting: No Regret and Experimental Design (https://arxiv.org/pdf/0912.3995.pdf)
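A minimal sketch of the GP-UCB loop using scikit-learn (my own illustration, not from the original answer; the toy function, kernel, and value of beta are arbitrary choices):

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

f = lambda x: -(x - 2.0)**2 + 1.0            # toy black-box function (to maximize)
X_grid = np.linspace(0, 5, 200).reshape(-1, 1)

# a few initial evaluations
X_obs = np.array([[0.5], [4.0]])
y_obs = f(X_obs).ravel()

beta = 2.0                                   # exploration/exploitation trade-off
for _ in range(10):
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0)).fit(X_obs, y_obs)
    mu, sigma = gp.predict(X_grid, return_std=True)
    ucb = mu + beta * sigma                  # the acquisition function
    x_next = X_grid[np.argmax(ucb)]          # next point to evaluate
    X_obs = np.vstack([X_obs, [x_next]])
    y_obs = np.append(y_obs, f(x_next))

print(X_obs[np.argmax(y_obs)])               # best point found so far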
49,033
One-to-one correspondence between penalty parameters of equivalent formulations of penalised regression methods
According to the Karush–Kuhn–Tucker conditions and this post, the first problem is equivalent to the second problem, with $t = ||\hat\beta||^2$ and $\hat\beta = (X^TX+\lambda I)^{-1}X^TY$, so $t=Y^TX(X^TX+\lambda I)^{-2}X^TY$. Then we only need to prove that $t$ is a one-to-one function of $\lambda$. Suppose $T_1=X^TX+\lambda_1 I$ and $T_2=X^TX+\lambda_2 I=T_1+\lambda_0I$, where $\lambda_0 = \lambda_2-\lambda_1>0$; then $t(\lambda_2)-t(\lambda_1)=Y^TX(T_2^{-2}-T_1^{-2})X^TY$. Note that $T_1$ and $T_2$ are positive definite and commute (both are polynomials in $X^TX$). $T_2^{-2}-T_1^{-2}=T_2^{-2}(I-(T_1+\lambda_0I)^2T_1^{-2})=-T_2^{-2}(\lambda_0^2T_1^{-2}+2\lambda_0T_1^{-1})<0$. Thus $t(\lambda_2)<t(\lambda_1)$. Indeed, $t(\lambda)$ is monotone decreasing, as you indicated.
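A quick numerical check of this monotonicity (my own sketch with arbitrary simulated data, not part of the original answer):

import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 5))
y = rng.normal(size=50)

def t_of_lambda(lam):
    # squared norm of the ridge solution for a given penalty
    beta = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
    return beta @ beta

ts = [t_of_lambda(lam) for lam in [0.1, 1.0, 10.0, 100.0]]
print(ts)                          # strictly decreasing in lambda
print(np.all(np.diff(ts) < 0))     # True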
49,034
One-to-one correspondence between penalty parameters of equivalent formulations of penalised regression methods
Assume that the solution of your problem $(1)$ is $\beta_\lambda^*$, where the index $\lambda$ indicates dependence on a particular value of $\lambda$. The second problem is solved using Lagrange multipliers ($\mu$) and the KKT conditions, one of which is that $\mu(\Vert \beta\Vert^2 -t) =0$.

Set $t$ in the KKT condition above to the value attained by the solution of problem $(1)$, that is, $t = \Vert \beta_\lambda^*\Vert^2 $. Then $\mu=\lambda$ and $\beta = \beta_\lambda^*$ satisfy the KKT conditions for $(2)$, that is, the problems share the same solution. Once again, the correspondence between $\lambda$ and $t$ is $t = \Vert \beta_\lambda^*\Vert^2 $.

I'm providing only a condensed conclusion from the (great) answers with proofs and detailed explanations, which can be found here: https://math.stackexchange.com/questions/335306/why-are-additional-constraint-and-penalty-term-equivalent-in-ridge-regression/336618#336618

To answer the question about the correspondence between $\lambda$ and $t$, one has to solve $t = \Vert \beta_\lambda^*\Vert^2 $. To do that, use the solution to problem $(1)$:
$$ \beta_\lambda^* = (X^TX+\lambda I)^{-1}X^Ty. $$
In other words, for a given $t$, one needs to find a $\lambda$ such that
$$ [(X^TX+\lambda I)^{-1}X^Ty]^T (X^TX+\lambda I)^{-1}X^Ty = t, $$
which establishes the desired correspondence. Note that $t$ needs to be less than $1$; see here: How to find regression coefficients $\beta$ in ridge regression? and here: Ridge regression formulation as constrained versus penalized: How are they equivalent?
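A hedged sketch of solving this equation numerically for $\lambda$ given $t$ (my own illustration; it relies on the monotonicity discussed in the other answer and assumes $t$ is smaller than the squared norm of the unpenalized solution, otherwise no positive $\lambda$ exists):

import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(2)
X = rng.normal(size=(50, 5))
y = rng.normal(size=50)

def beta_norm_sq(lam):
    beta = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
    return beta @ beta

t_target = 0.5 * beta_norm_sq(1e-8)    # pick a feasible t below the (near-)OLS norm

# ||beta(lambda)||^2 - t is decreasing in lambda, so the root can be bracketed
lam = brentq(lambda l: beta_norm_sq(l) - t_target, 1e-8, 1e6)
print(lam, beta_norm_sq(lam))          # beta_norm_sq(lam) matches t_target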
49,035
Why is RMSEA typically reported with a 90% confidence interval, and not 95%?
Curran et al. (2003) write that:

It is common to report 90 percent confidence intervals for the RMSEA, primarily because of the resulting direct link to hypothesis testing based on the model test statistic.

Three hypothesis tests sometimes reported in the SEM literature are:

The test of exact fit, $H_{0}: \epsilon = 0$;
The test of close fit, $H_{0}: \epsilon \leq .05$; and
The test of not-close fit, $H_{0}: \epsilon \geq .05$.

Thus the ultimate rationale for using a 90% CI is that, if you do, you can infer the results of those (one-sided, $\alpha = .05$) hypothesis tests directly from the CI. The relationship is illustrated in a table in MacCallum et al. (1996), p. 137.

Curran, P. J., Bollen, K. A., Chen, F., Paxton, P., & Kirby, J. B. (2003). Finite sampling properties of the point estimates and confidence intervals of the RMSEA. Sociological Methods & Research, 32, 208-252.
MacCallum, R. C., Browne, M. W., & Sugawara, H. M. (1996). Power analysis and determination of sample size for covariance structure modeling. Psychological Methods, 1, 130.
49,036
Thin Plate Regression Splines mgcv?
The motivation for performing an eigendecomposition of the design matrix is indeed, as you mentioned, to reduce the computational cost of the algorithm. Fitting thin plate splines, particularly in the case where $d > 1$, is a very computationally intensive task - in the paper you cite, Wood mentions that all of the algorithms for $d > 1$ are of $O(n^3)$ complexity. Performing a truncated eigendecomposition and retaining only the $k$ largest eigenvalues (and their eigenvectors) not only decreases the computational cost of the model fit from $O(n^3)$ to $O(k^3)$, but also decreases the memory overhead, since we don't have to keep as many elements of the design matrix in memory. This is especially valuable when working with larger datasets.
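As a rough illustration of the rank-reduction idea only (this is not mgcv's actual algorithm, which uses a more efficient truncated decomposition; the basis matrix below is a made-up stand-in):

import numpy as np

rng = np.random.default_rng(3)
x = np.sort(rng.uniform(size=200))

# a dense n x n radial-type basis matrix built from the data, for illustration only
E = np.abs(x[:, None] - x[None, :])**3

k = 10
eigval, eigvec = np.linalg.eigh(E)            # E is symmetric
idx = np.argsort(np.abs(eigval))[::-1][:k]    # keep the k largest-magnitude eigenvalues
E_k = eigvec[:, idx] @ np.diag(eigval[idx]) @ eigvec[:, idx].T   # rank-k approximation

print(np.linalg.norm(E - E_k) / np.linalg.norm(E))   # relative error of the truncation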
49,037
Model Deployment: export Scikit Learn Pipeline or Model only?
You have to export the fitted pipeline, which includes the list of transformers as well as the final estimator. To give a simple example:

from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LinearRegression
from sklearn.datasets import load_boston
from sklearn.pipeline import Pipeline

X, y = load_boston(return_X_y=True)

pipe = Pipeline(steps=[('scaler', StandardScaler()),
                       ('linreg', LinearRegression())])
pipe.fit(X, y)

Now you can save your model, for example via pickle, for use in production:

import pickle
s = pickle.dumps(pipe)
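To round this out (my own addition, not part of the original answer): in production you restore the pipeline and call it directly, so the same preprocessing is applied to new data automatically; joblib is a common alternative to pickle for scikit-learn objects.

pipe_restored = pickle.loads(s)
print(pipe_restored.predict(X[:5]))    # scaling + regression applied in one call

# alternatively, persist to disk with joblib
import joblib
joblib.dump(pipe, 'pipeline.joblib')
pipe_restored = joblib.load('pipeline.joblib')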
49,038
Stationary processes for AR, MA, ARMA
Short answers:

We restrict ourselves to the stationary region because in the non-stationary one ARMA processes become explosive (that is, they go to infinity).
It is possible to fit a non-stationary model to a time series, but that won't be an ARMA model (though it may belong to the family of ARMA models).
Non-stationary time series need to be at least locally stationary to be modelled. If they are not, we won't have enough observations at each time point to be able to make reasonable estimates. However, if we have a good "skeleton" (see e.g. Tong, H., Non-Linear Time Series) for the series, we might be able to extract the non-stationary/nonlinear dynamics from the data and leave a stationary process behind to play with.
49,039
Stationary processes for AR, MA, ARMA
Intuition

For AR it depends on what you're going to use the model for; see details below. It doesn't make sense to estimate the MA part of the ARMA. Remember, if a series follows a unit root, every shock persists forever. Said another way, an error from today or a hundred years ago has the same impact on the series. Since you can't really estimate an MA($\infty$), it's best to leave out the MA.

Details

Let's focus on an AR(1) model to gain intuition, and assume the data are I(1) (i.e. non-stationary). What happens if you estimate an AR(1)? Will the model be any good? To answer these questions you have to know what you want to use the model for. Generally, in time series, you use a model for forecasting or inference.

Forecasting: Yes, we can use the model. We still have consistent coefficient estimates, i.e. the coefficient will be roughly 1. So long as the coefficients are good, our forecasts are good. A word of warning: the process is explosive. This can be seen best by estimating prediction intervals. The PIs will not have the usual sideways parabola shape; rather, they will have a sideways absolute-value shape.

Inference: Generally no, you can't use the model. Deriving the variance of the coefficient estimates is tricky in the presence of a unit root. Intuitively, however, a non-stationary process has no tendency to revert to its mean, implying an infinite variance. Said another way, the variance increases to infinity as the number of observations increases, a bad asymptotic result. Further, with an infinite variance, we simply can't reject any null hypotheses.

What about MA? Let's look at the MA representation of a unit root. We know that a simple AR(1), $y_t = \beta y_{t-1} + u_t$, can be written as an MA($\infty$), $y_t = \sum_{j=0}^\infty \beta^ju_{t-j}$. When $\beta <1$, we're OK: the impact of a shock will eventually die off. Even if $\beta = .9$, $.9^{30}$ is small. However, when $\beta = 1$, there is a big problem! Specifically, shocks never die off. Said another way, a shock today or a hundred years ago has the same impact on the series, e.g. $1^{30} = 1$. This makes an MA representation of a unit root process intractable.
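A small simulation of that last point (my own sketch, not from the original answer): the MA($\infty$) weight of a shock from $j$ periods ago is $\beta^j$, which decays for $\beta<1$ but stays at 1 forever for a unit root.

import numpy as np

horizons = np.arange(0, 101, 10)
for beta in (0.9, 1.0):
    weights = beta ** horizons           # impact of a shock j periods ago
    print(beta, np.round(weights, 4))

# equivalently, simulate both processes and note how early shocks persist only when beta = 1
rng = np.random.default_rng(4)
u = rng.normal(size=300)
for beta in (0.9, 1.0):
    y = np.zeros(300)
    for t in range(1, 300):
        y[t] = beta * y[t-1] + u[t]
    print(beta, y[-1])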
49,040
Stationary processes for AR, MA, ARMA
I think it would be difficult even to estimate an ARIMA model under non-stationarity, because the tools used to identify it, such as the PACF and ACF, look different when non-stationarity is present. Knowing the right orders for the AR and MA parts would be difficult, at least given the classical presentations used to identify them.
49,041
Incremental solution for matrix inverse using Sherman-Morrison in $O(n^2)$
Within the context of linear regression, the estimated $\beta$ parameters are given by $(X^TX)^{-1}X^Ty$. The main computational burden in this formula is the inversion of the matrix $A = X^TX$. In the use-case of incremental learning, we already have an estimate of $A$ as well as of $A^{-1} = (X^TX)^{-1}$ from our previous iteration. When we get another sample $x$, the change to $A$ is additive and of rank one, $A_{new} = A + uv^T$ with $u = v = x$ (a whole batch of new rows can be handled analogously with the Woodbury identity). Notice that the extra sample extends $X$ by a row but leaves $A$ with the same dimensions as before. Thus we have $A_{new}^{-1}= (A+uv^T)^{-1} = A^{-1} - \frac{A^{-1}uv^TA^{-1}}{1 + v^TA^{-1}u}$, which involves only matrix-vector products and can therefore be computed in $O(n^2)$ instead of the $O(n^3)$ cost of a fresh inversion.
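A small numpy sketch of this update (my own illustration), checking the Sherman-Morrison result against a direct inversion:

import numpy as np

rng = np.random.default_rng(5)
X = rng.normal(size=(100, 8))
A = X.T @ X
A_inv = np.linalg.inv(A)

x_new = rng.normal(size=8)             # one new observation (new row of the design matrix)

# Sherman-Morrison rank-one update: O(p^2)
Ax = A_inv @ x_new
A_inv_new = A_inv - np.outer(Ax, Ax) / (1.0 + x_new @ Ax)

# compare with recomputing the inverse from scratch: O(p^3)
A_inv_direct = np.linalg.inv(A + np.outer(x_new, x_new))
print(np.allclose(A_inv_new, A_inv_direct))    # True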
49,042
warning: Some predictor variables are on very different scales: consider rescaling
A reasonable approach is to do a "summary" of the dataset and look at the means and medians of the numerical variables, and also possibly their variances. If you see any that are orders of magnitude different from the others, that will give you a clue. You could also simply standardise all the numeric variables first and see if the model converges. If it does, then return them to their original scale one at a time to see which one(s) are causing the problem(s).
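For instance, a quick way to spot scale problems and standardise (a generic sketch with a hypothetical data frame, not tied to any particular modelling package):

import pandas as pd

df = pd.DataFrame({'income': [25000, 54000, 87000],
                   'age':    [23, 41, 56],
                   'score':  [0.12, 0.48, 0.91]})       # hypothetical data

print(df.describe().loc[['mean', 'std']])               # look for orders-of-magnitude gaps

num_cols = df.select_dtypes('number').columns
df_std = (df[num_cols] - df[num_cols].mean()) / df[num_cols].std()   # z-score all numerics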
49,043
Is MLE intrinsically connected to logs?
The answer to your main question can be yes or no, depending on one's perspective.

First, the maximum likelihood principle can be motivated without any logs. In contrast to your approach, one needs to start with the probability of a sample of size $n$ instead of the $n$ probabilities of $n$ samples. There is a reason that it is not called the log-likelihood principle: from a certain point of view, the likelihood is more fundamental than its log, and taking the log of a product is simply a mathematical convenience. Or one could say that the iid assumption leads to products, and products lead to logs.

But then there are several theories that generalize the ML principle, and in which the ML principle corresponds to the log (or some variant like the KL divergence). Most notably there are different classes of other divergences (Rényi, Bregman, ...) that can be used for inference and that lead to consistent estimators, and there is also information geometry. I don't know whether there is a divergence that corresponds to your proposed additive variant, though.

One point that singles out the likelihood and the KL divergence is the Neyman-Pearson lemma. Another is the derivation of the ML principle without logs mentioned at the beginning of my answer.
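A tiny numerical illustration of the "logs are just a convenience" point (my own sketch): over a grid of candidate parameters, the product of densities and the sum of log densities are maximized at exactly the same place, because the log is monotone.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(6)
x = rng.normal(loc=3.0, scale=1.0, size=30)     # sample with unknown mean

mu_grid = np.linspace(0, 6, 601)
lik     = np.array([np.prod(norm.pdf(x, mu, 1.0)) for mu in mu_grid])
loglik  = np.array([np.sum(norm.logpdf(x, mu, 1.0)) for mu in mu_grid])

print(mu_grid[np.argmax(lik)], mu_grid[np.argmax(loglik)])   # identical argmax
print(x.mean())                                              # and close to the sample mean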
49,044
How to Fine Tune a pre-trained network
I would like to share my understanding here. Here is a thesis whose related-work section explains transfer learning and fine-tuning. Also, the survey on transfer learning is a good read to understand these concepts in detail.

Unsupervised pre-training is a good strategy for training deep neural networks for supervised and unsupervised tasks. Fine-tuning can be seen as an extension of the above approach, where the learned layers are allowed to retrain, or fine-tune, on the domain-specific task. Transfer learning, on the other hand, requires two different tasks, where what is learned on one distribution is transferred to another. [These points are taken from the related work of this thesis.]

Now, I think your understanding of transfer learning and fine-tuning is correct. Freezing the weights is a choice that you get: if you don't freeze them, we say that the network is fine-tuned on the domain-specific data, and yes, this should usually provide better generalization. Whether you freeze the weights depends on the problem and the type of network you have. For example, ImageNet-pretrained layers are widely used to classify images and are often frozen because (1) retraining them is computationally expensive, (2) ImageNet data covers a large distribution of images, and (3) the last layer is usually enough to capture the small variations present in a domain-specific dataset. This works because of the strong representational capacity of ImageNet-pretrained features, and it may not be true for every model. Hence, depending on the case, one should answer this question empirically.
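A minimal sketch of the "freeze the backbone, retrain the last layer" option in PyTorch (my own illustration, not from the original answer; the number of classes is a placeholder, and newer torchvision versions use the weights= argument instead of pretrained=True):

import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(pretrained=True)        # pre-trained backbone

# Option 1: freeze everything, then replace and train only the new head
for param in model.parameters():
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 5)   # new task with 5 classes (hypothetical)

# only the new head's parameters are passed to the optimizer
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)

# Option 2 (fine-tuning): skip the freezing loop and optimize all parameters,
# typically with a smaller learning rate, e.g.
# optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)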
49,045
Predict when a user logs in next
Without having seen the data, I don't think machine learning is the appropriate choice here, because your data are not IID. It is reasonable to believe that users may have different login habits, and we should allow for that by modelling the time between logins hierarchically.

Posit that each user's time between logins has some distribution (maybe it is exponential), parameterized by $\lambda_i$. Then, maybe you say that the $\lambda_i$ come from some distribution. So your model posits the following: $$ (\Delta_{t})_{i} \sim \operatorname{Exponential}(\lambda _i) \quad i = 1 \dots N $$ $$ \lambda_i \sim P(\lambda)$$ Here $P$ is the distribution for the $\lambda_i$. Maybe the $\lambda_i$ are gamma distributed or something. I'm not saying that this is the model, but I think an approach like this is reasonable. If you have a lot of data, you could even do an Empirical Bayes approach and construct "priors" from the data.
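As a concrete sketch of one way to fit such a model (my own illustration, using the conjugate Gamma prior for the exponential rate; the gaps and prior values are made up): each user's posterior-mean rate shrinks their own average gap toward what the prior implies for the population.

import numpy as np

# hypothetical gaps (days between logins) for three users
gaps = {
    'user_a': np.array([1.0, 2.0, 1.5, 0.5]),
    'user_b': np.array([10.0, 14.0]),
    'user_c': np.array([3.0]),
}

# Gamma(alpha, beta) prior on each user's rate lambda_i (rate parametrization);
# with exponential gaps this is conjugate, so the posterior is also Gamma
alpha, beta = 2.0, 4.0    # could instead be set by empirical Bayes from all users

for user, g in gaps.items():
    post_alpha = alpha + len(g)
    post_beta = beta + g.sum()
    rate = post_alpha / post_beta                 # posterior mean of lambda_i
    print(user, 'expected gap ~', round(1.0 / rate, 2), 'days')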
49,046
Does every loss function correspond to MLE/MAP
Two years later, I'll give a partial answer. It covers the first three examples (log loss, weighted log loss, L1 regression) and many more pointwise losses.

Let $L : \mathcal{Y} \times \mathcal{Y} \to \mathbb{R}$ be a pointwise loss (i.e. it scores a prediction $\hat y$ against an observation $y$ as $L(\hat y,y)$). The essence is to define the likelihood as
$$ p(y\mid x) = \frac{1}{C} \exp (- L(\hat y(x), y)), $$
where $\hat y(x)$ is the model's prediction at $x$, for a suitable normalizing constant $C$, and assuming that the above equation is well defined. This is essentially what the authors of [1] did in Proposition 2.9. But this hides most of the complexity in whether that equation is well defined. So the answer to the initial question is no: not all (pointwise) losses give rise to a well-defined density, but many do.

[1] Gressmann, Frithjof, et al. "Probabilistic supervised learning." arXiv preprint arXiv:1801.00753 (2018).
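Two familiar cases of this construction (my own numerical check, not part of the original answer): with squared loss the normalizer $C$ is the Gaussian one, and with absolute loss it is the Laplace one; both integrals are finite, so the induced densities are well defined.

import numpy as np
from scipy.integrate import quad

yhat = 0.0   # condition on some prediction

# squared loss -> Gaussian-shaped density
C_sq, _ = quad(lambda y: np.exp(-0.5 * (y - yhat)**2), -np.inf, np.inf)
print(C_sq, np.sqrt(2 * np.pi))        # both ~2.5066

# absolute loss -> Laplace-shaped density
C_abs, _ = quad(lambda y: np.exp(-abs(y - yhat)), -np.inf, np.inf)
print(C_abs)                           # ~2.0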
49,047
Which estimation technique minimizes the MAPE?
If the probability density of your future distribution is positively skewed, then typically (though not always; von Hippel, 2005) its median will be lower than its mean. So a technique that aims at the median as a point forecast will be biased low. Since the MAPE usually rewards a low-biased prediction, such a technique will usually perform better in terms of the MAPE in such a situation. Note that there are a couple of caveats in this description: the chain of reasoning is not perfect, because you can find pathological counter-examples for at least two of the steps. Nevertheless, it should work in most practical cases.

You may be better off using a custom optimization routine that directly attempts to minimize the MAPE. The problem being, of course, that the MAPE is not differentiable at perfect forecasts. Alternatively, you could try to estimate full predictive densities and then output the (-1)-median of this density as a point forecast, which is the functional that minimizes the MAPE in expectation (Gneiting, 2011, p. 752 with $\beta=-1$).

You may be interested in: What are the shortcomings of the Mean Absolute Percentage Error (MAPE)?
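A small simulation of the "MAPE rewards low forecasts" point (my own sketch, using an arbitrary positively skewed distribution): the point forecast minimizing the MAPE sits below both the median and the mean.

import numpy as np

rng = np.random.default_rng(7)
y = rng.lognormal(mean=0.0, sigma=1.0, size=100_000)   # positively skewed "future" values

candidates = np.linspace(0.1, 3.0, 300)
mape = [np.mean(np.abs(y - f) / y) for f in candidates]
best = candidates[np.argmin(mape)]

print(best, np.median(y), y.mean())    # MAPE-optimal forecast < median < mean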
49,048
Value iteration does not converge when using Q learning
OK, so I've slightly modified the initial example, and the code below gives me a working policy.

import random
import numpy as np

# DIRECTIONS, choose_an_action(), perform_action() and game come from the
# original setup in the question

states_space_size = 16                 # 4x4 size of the board
actions_space_size = len(DIRECTIONS)

QSA = np.zeros(shape=(states_space_size, actions_space_size))
max_iterations = 80
gamma = 1      # discount factor
alpha = 0.9    # learning rate
eps = 0.99     # exploration rate: probability of taking a random action
s = 0          # initial state

for i in range(max_iterations):
    # explore the world?
    a = choose_an_action(actions_space_size)
    # or not? (which criterion for decreasing epsilon?)
    if random.random() > eps:
        a = np.argmax(QSA[s])
    r, s_ = perform_action(s, a, game)
    qsa = QSA[s][a]
    qsa_ = np.max(QSA[s_])             # max (not argmax) over actions in the next state
    QSA[s][a] = qsa + alpha*(r + gamma*qsa_ - qsa)
    # change state
    s = s_
    # a convergence criterion could replace the fixed number of iterations

print(QSA)

I have introduced a learning-rate variable (how quickly to forget older results) and an exploration/exploitation rate (choose random actions vs. follow the existing policy), and the resulting policy gives the desired path.
49,049
How to determine block size for a block bootstrap and its variants?
Not sure if you still need this, but the classic texts on the subject are Hall and Horowitz, "On Blocking Rules for the Bootstrap with Dependent Data", and Lahiri, "Theoretical Comparisons of Block Bootstrap Methods", in addition to the Lahiri book you mentioned. I found Hongyi Li and Maddala, "Bootstrapping Time Series Models", useful as well. I'm writing my Master's dissertation on the subject, so if I can be of any other help please let me know.
49,050
Significance testing when the treatment group was only partially treated (an unknown set of individuals did not receive the treatment)
I disagree with the other answer by @mkt that no inference is possible. Yes, inference, like hypothesis testing, might be difficult, but there are some possibilities.

In the treatment group, where only about 60% of the subjects/items actually received treatment, we might model the distribution of the outcome as a mixture distribution of the form
$$ 0.6 \cdot \text{distribution when treatment given}+0.4 \cdot\text{distribution when treatment not given}. $$
Using regression with such a model is called mixture regression (search this site, many posts!) and there are for instance some R packages, like flexmix and fpc. But our situation is simpler, since the mixing probabilities are given and known, not to be estimated. So we can just write down a model, find the log-likelihood function, and use MLE (maximum likelihood estimation).

Since little context is given, we will use a very simple model as an illustration. We assume independence and normal distributions. In the control group (and for the non-treated in the treatment group) we have $Y_i \sim \mathcal{N}(\mu,\sigma^2)$, while among the treated we have $Y_i\sim\mathcal{N}(\mu+\Delta,\sigma^2)$. So in the "treatment group" we have $Y_i \sim w\cdot\mathcal{N}(\mu+\Delta,\sigma^2)+(1-w)\cdot\mathcal{N}(\mu,\sigma^2)$. For your data you told us $w=0.6$. The same principles could be used with other error distributions, covariates, etc. The parameter of interest (the focus parameter) is $\Delta$, so we will ultimately use the profile likelihood for $\Delta$ to construct a confidence interval. With simulated data I construct this model in R; the resulting confidence interval based on the profile likelihood is shown below.

Note that in this proposed model we have assumed constant variance. This might be a critical assumption (as it always is, but maybe more so here). This is because, in the "treatment" group, a higher empirical variance could be due either to a large $\Delta$ or to the variance actually being larger than in the control group (maybe because the treatment somehow also increases variance). So this model should be taken as tentative; it would be wise to investigate it further before use. I tried to search for papers on this issue, but could not find any.

The R code used:

n <- 100  # "treatment" sample size
m <- 100  # control sample size
mu <- 10; sigma <- 3; delta <- 0.8
w <- 0.6  # fraction of treatment data really treated

# Simulating some data:
set.seed(7*11*13)  # My public seed
Y <- rnorm(n+m, c(rep(mu, m), rep(mu+delta, w*n), rep(mu, (1-w)*n)), sigma)
G <- c(rep(0L, m), rep(1L, n))  # 1 is treatment group
mydf <- data.frame(Y=Y, G=G); rm(Y, G)

# loglikelihood function:
loglik0 <- function(mu, delta, sigma) {
    loglik <- sum(ifelse(mydf$G==0L,
                         dnorm(mydf$Y, mu, sigma, log=TRUE),
                         log(w*dnorm(mydf$Y, mu+delta, sigma) +
                             (1-w)*dnorm(mydf$Y, mu, sigma))))
    -loglik  # we use bbmle::mle2, which requires the negative loglik
}

library(bbmle)  # on CRAN
mod <- bbmle::mle2(loglik0, start=list(mu=8, delta=0, sigma=2))

mod.prof <- bbmle::profile(mod, which=2)
confint(mod.prof)
     2.5 %    97.5 %
 -0.571204  1.704091

plot(mod.prof)
49,051
Significance testing when the treatment group was only partially treated (an unknown set of individuals did not receive the treatment)
EDIT: kjetil b halvorsen's new answer has persuaded me that my answer below is incorrect.

+1, this is an interesting question.

To summarize: the goal is to compare people who did receive treatment ($X$) with controls ($C$). However, $X$ is a subset of the population supposed to receive treatment ($Z$), and we only know that $X$ is $w$% (60% here) of $Z$, with the remainder (100 - $w$)% being called $Y$.

Unfortunately, I do not think there is any legitimate way to make a strong statistical claim from this data. Therefore, I think estimating statistical significance is out of the question. However, it might be possible to do a useful exploratory analysis with this dataset instead.

I would start by examining $Z$, the people supposed to have received treatment. If you run a clustering algorithm on this, you might find that the data fall into two natural groups with a ~60/40 split, as expected. These would correspond to the subgroups that did receive treatment ($X$) and those that were supposed to but did not ($Y$). You could then calculate the means of the 3 groups - $X$, $Y$ and $C$ - and, with caveats, make some weak statements about possible differences between them. You could also exclude $Y$ from further consideration, or add those points to $C$, but I think keeping them as a separate category is less likely to mislead.

Assuming the above steps work, could you then do a hypothesis test on them? No - because the results are going to be biased to an unknown degree by the uncertainties in the clustering step. Points belonging to $X$ but far from its mean/median/centroid (depending on the clustering approach) may be grouped with $Y$, and vice versa. This will have the effect of reducing the within-group variances and inflating the differences between the groups. Despite these limitations, I think that this approach could provide useful information about the differences between groups.

A few related thoughts:

A clean clustering result like the one I described might seem very unlikely - the clusters could come out in the wrong proportions (i.e. not corresponding well to $w$% and (1 - $w$)%), or more than two clusters might be identified in the data. If either of these happens, I might try a different clustering approach, but my (already low) confidence in this procedure would be reduced substantially. But if the treatment has a strong effect, which may be expected based on prior knowledge and experimental design, two distinct clusters of approximately the correct proportions are an entirely plausible outcome.

Knowing $w$ is crucial for getting some idea of how well the clustering step has worked. $w$ being quite different from 50% helps in knowing which cluster is $X$ and which is $Y$. However, even if $w$ were exactly 50% and the two groups therefore identical in size, comparing the two clusters against $C$ would help in distinguishing $X$ from $Y$. If one cluster's outcomes are much more similar to those of $C$, it is likely to be $Y$.

I'm setting aside discussion of what type of clustering approach to use because there is a huge range of possibilities, and which one works best depends strongly on details that we do not know.
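A minimal sketch of the clustering step described above (my own illustration with made-up data; in practice you would cluster on the outcome and any available covariates):

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(8)
# hypothetical outcomes for the group assigned to treatment (Z): 60% treated, 40% not
Z = np.concatenate([rng.normal(12, 2, 60),     # X: actually treated
                    rng.normal(10, 2, 40)])    # Y: assigned but untreated
C = rng.normal(10, 2, 100)                     # controls

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(Z.reshape(-1, 1))
sizes = np.bincount(km.labels_)
means = [Z[km.labels_ == k].mean() for k in range(2)]

print(sizes)                  # hope for roughly a 60/40 split
print(means, C.mean())        # the cluster closer to C's mean is probably Y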
49,052
Why is it important that estimators are unbiased and consistent?
From a frequentist perspective, unbiasedness is important mainly with experimental data, where the experiment can be repeated and we control the regressor matrix. Then we can actually obtain many estimates of the unknown parameters, and we do want their arithmetic average to be really close to the true value, which is what unbiasedness guarantees. But it is a property that requires very strong conditions, and even a little non-linearity in the estimator expression may destroy it. Consistency is important mainly with observational data where there is no possibility of repetition. Here, at least we want to know that if the sample is large, the single estimate we will obtain will be really close to the true value with high probability, and it is consistency that guarantees that. As larger and larger data sets become available in practice, methods like bootstrapping have blurred the distinction a bit. Note that we can have unbiasedness together with inconsistency only in rather freak setups, while we may easily have bias together with consistency. The variance may superficially look like a "secondary" property (because supposedly we primarily hunt for location), but go tell that to any veteran statistician and don't say I didn't warn you: in practice, business and policy decisions tend to be based on intervals rather than points, and it is the variance that determines the length of the interval in which the unknown parameter is estimated to lie. This is why Mean Squared Error is often considered a better evaluation criterion and comparison tool for estimators, since it comprehensively includes both variance and bias, and it favors biased estimators that have considerably lower variance than unbiased ones.
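A small simulation can make the bias/consistency/MSE point concrete (my own illustration, not part of the original answer): it compares the unbiased sample variance with the biased maximum-likelihood variance estimator.

## Monte Carlo MSE of the unbiased sample variance (divide by n - 1)
## versus the biased MLE of the variance (divide by n).
set.seed(42)
sigma2 <- 4

mse_compare <- function(n, reps = 20000) {
  est <- replicate(reps, {
    x <- rnorm(n, mean = 0, sd = sqrt(sigma2))
    c(unbiased = var(x), mle = var(x) * (n - 1) / n)
  })
  rowMeans((est - sigma2)^2)   # Monte Carlo MSE of each estimator
}

mse_compare(10)     # small n: the biased MLE typically has the lower MSE
mse_compare(1000)   # large n: both are consistent, so the difference vanishes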
49,053
Limits of integration of a density function
Both densities involve indicators$$f_X(x)=\mathbb{I}_{(a,b)}(x)\big/(b-a)\quad f_{Y|X}(y|x)=\mathbb{I}_{(a,x)}(y)\big/(x-a)$$and$$f_{X,Y}(x,y)=\mathbb{I}_{(a,b)}(x)\,\mathbb{I}_{(a,x)}(y)\big/(b-a)(x-a)$$This implies $$\mathbb{I}_{(a,b)}(x)\,\mathbb{I}_{(a,x)}(y)=\mathbb{I}_{(a,b)}(y)\,\mathbb{I}_{(y,b)}(x)$$hence \begin{align}f_{X|Y}(x|y)&=\mathbb{I}_{(y,b)}(x)\big/(x-a)\,\left\{\int_y^b (x-a)^{-1}\,\text{d}x\right\}^{-1}\\&=\mathbb{I}_{(y,b)}(x)\big/(x-a)\,\{\log(b-a)-\log(y-a)\}^{-1}\end{align}and $$f_Y(y)=\mathbb{I}_{(a,b)}(y)\,\{\log(b-a)-\log(y-a)\}\big/(b-a)$$
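A quick Monte Carlo check of the marginal $f_Y$ (my own addition, with illustrative values $a=1$ and $b=3$):

set.seed(123)
a <- 1; b <- 3; n <- 1e5
x <- runif(n, a, b)        # X ~ Uniform(a, b)
y <- runif(n, a, x)        # Y | X = x ~ Uniform(a, x)

hist(y, breaks = 100, freq = FALSE, main = "Marginal density of Y")
curve((log(b - a) - log(t - a)) / (b - a), from = a + 1e-6, to = b,
      add = TRUE, col = "red", lwd = 2, xname = "t")   # derived f_Y overlaid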
49,054
Find the UMVUE of $\frac{\mu^2}{\sigma}$ where $X_i\sim\mathsf N(\mu,\sigma^2)$
I have skipped some details in the following calculations and would ask you to verify them. As usual, we have the statistics $$\overline X=\frac{1}{4}\sum_{i=1}^4 X_i\qquad,\qquad S^2=\frac{1}{3}\sum_{i=1}^4(X_i-\overline X)^2$$ Assuming both $\mu$ and $\sigma$ are unknown, we know that $(\overline X,S^2)$ is a complete sufficient statistic for $(\mu,\sigma^2)$. We also know that $\overline X$ and $S$ are independently distributed. As you say, \begin{align} E\left(\overline X^2\right)&=\operatorname{Var}(\overline X)+\left(E(\overline X)\right)^2 \\&=\frac{\sigma^2}{4}+\mu^2 \end{align} Since we are estimating $\mu^2/\sigma$, it is reasonable to assume that a part of our UMVUE is of the form $\overline X^2/S$. And for evaluating $E\left(\frac{\overline X^2}{S}\right)=E(\overline X^2)E\left(\frac{1}{S}\right)$, we have \begin{align} E\left(\frac{1}{S}\right)&=\frac{\sqrt{3}}{\sigma}\, E\left(\sqrt\frac{\sigma^2}{3\,S^2}\right) \\\\&=\frac{\sqrt{3}}{\sigma}\, E\left(\frac{1}{\sqrt Z}\right)\qquad\qquad,\,\text{ where }Z\sim\chi^2_{3} \\\\&=\frac{\sqrt{3}}{\sigma}\int_0^\infty \frac{1}{\sqrt z}\,\frac{e^{-z/2}z^{3/2-1}}{2^{3/2}\,\Gamma(3/2)}\,dz \\\\&=\frac{1}{\sigma}\sqrt\frac{3}{2\pi}\int_0^\infty e^{-z/2}\,dz \\\\&=\frac{1}{\sigma}\sqrt\frac{6}{\pi} \end{align} Again, for an unbiased estimator of $\sigma$, $$E\left(\frac{1}{2}\sqrt\frac{3\pi}{2}S\right)=\sigma$$ So, \begin{align} E\left(\frac{\overline X^2}{S}\right)&=E\left(\overline X^2\right)E\left(\frac{1}{S}\right) \\&=\left(\mu^2+\frac{\sigma^2}{4}\right)\frac{1}{\sigma}\sqrt\frac{6}{\pi} \\&=\sqrt\frac{6}{\pi}\left(\frac{\mu^2}{\sigma}+\frac{\sigma}{4}\right) \end{align} Or, $$E\left(\sqrt{\frac{\pi}{6}}\,\frac{\overline X^2}{S}-\frac{\frac{1}{2}\sqrt\frac{3\pi}{2}S}{4}\right)=\frac{\mu^2}{\sigma}$$ Hence our unbiased estimator based on the complete sufficient statistic $(\overline X,S^2)$ is \begin{align} T(X_1,X_2,X_3,X_4)&=\sqrt{\frac{\pi}{6}}\,\frac{\overline X^2}{S}-\frac{1}{8}\sqrt\frac{3\pi}{2}S \end{align} By Lehmann-Scheffe, $T$ is the UMVUE of $\mu^2/\sigma$.
49,055
Find the UMVUE of $\frac{\mu^2}{\sigma}$ where $X_i\sim\mathsf N(\mu,\sigma^2)$
An R simulation to verify StubbornAtom's well-explained answer: In the case of $\mu=3$ and $\sigma=7$ we have $$\frac{\mu^2}{\sigma}=\frac{9}{7}=1.285714$$ The simulation with $10^7$ trials gives $\widehat{\theta}=1.286482$

y <- 0
for (i in 1:10^7) {
  x <- rnorm(4, 3, 7)
  y <- y + sqrt(pi/6) * mean(x)^2 / sd(x) - (1/8) * sqrt(3*pi/2) * sd(x)
}
y / 10^7
# 1.286482

This, however, took a while to run, as for loops aren't very fast in R, so if anyone has an alternative method to simulate this in R, I would be very interested.
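One possible vectorized alternative (my own sketch, not from the original answer): draw all the normal variates at once and work row-wise; n_sim can be pushed towards $10^7$ if memory allows.

set.seed(1)
n_sim <- 1e6                                        # increase if memory allows
x <- matrix(rnorm(4 * n_sim, mean = 3, sd = 7), ncol = 4)
m <- rowMeans(x)
s <- sqrt(rowSums((x - m)^2) / 3)                   # sample SD of each row of 4 values
mean(sqrt(pi/6) * m^2 / s - (1/8) * sqrt(3*pi/2) * s)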
49,056
GAMM with Zero-Inflated Negative Binomial - Looking for a package in R
This model is possible with the brms R package, which is an interface to Stan. A slight modification of one of the examples from https://cran.r-project.org/web/packages/brms/vignettes/brms_distreg.html shows essentially what is involved in setting up and fitting the model

## load package
library('brms')

## load data
zinb <- read.csv("http://stats.idre.ucla.edu/stat/data/fish.csv")

## fit a model with constant zero inflation
fit_zinb1 <- brm(count ~ s(persons, k = 4) + s(child, k = 4) + camper,
                 data = zinb, family = zero_inflated_negbinomial(),
                 chains = 4, cores = 4,
                 control = list(adapt_delta = 0.999))

## plot the marginal effects
plot(marginal_effects(fit_zinb1))

## model summary
summary(fit_zinb1, WAIC = FALSE)

## fit a model where the zero-inflation part is a constant plus a smooth
## function of the number of children
fit_zinb2 <- brm(bf(count ~ s(persons, k = 4) + s(child, k = 4) + camper,
                    zi ~ s(child, k = 4)),
                 data = zinb, family = zero_inflated_negbinomial(),
                 chains = 4, cores = 4,
                 control = list(adapt_delta = 0.999))

## plot the marginal effects
plot(marginal_effects(fit_zinb2))

## model summary
summary(fit_zinb2, WAIC = FALSE)

There's more to this than shown above (you need to do some amount of model checking, look at posterior predictive checks, etc.) but it gives you an idea of what's involved in fitting a GAM --- you'll need to look at the help for brms to find the syntax for random effects. The other alternative is to use the gamlss package, which has the ZINBI family for zero-inflated NegBin, and as with brms you can model all of the parameters of the distribution with linear predictors.
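For the gamlss route, a rough and untested sketch of what the specification might look like (my own, not from the original answer; pb() is gamlss's penalised B-spline smoother, the nu formula models the zero-inflation probability, and the exact syntax should be checked against the gamlss documentation):

library(gamlss)

fit_zinbi <- gamlss(count ~ pb(persons) + pb(child) + camper,
                    nu.formula = ~ pb(child),
                    family = ZINBI, data = zinb)
summary(fit_zinbi)

## For a GAMM-style random intercept over a grouping factor g, a term such as
## random(g) can be added to the mu formula; g is hypothetical here, as the
## fish data contain no grouping variable.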
49,057
Multi-agent actor-critic MADDPG algorithm confusion
(1) how subsampling would resolve the non-stationarity problem The idea about sampling a variety of sub-policies for other agents to execute during training is that this introduces more variety in the behaviour of competing agents, rather than always only training against the single most recent "version" of opponents (which can result in "overfitting" against those agents). If there is variety in the behaviour of opponents, your agent will be forced to try learning a robust policy in the sense that it will try learning a policy that can handle all opponents. Without that variety, if you would only always select the most recent versions of opponents, your agent would instead be incentivized to only learn a policy that is strong against those most recent versions of opponents. Consider, for example, the game of Rock-Paper-Scissors. Let $P_1$ and $P_2$ denote two agents that are simultaneously learning. Suppose that they would only ever train against each other (rather than having more varied training partners through sampling). Suppose $P_1$ is randomly initialized to mostly just play Rock, and $P_2$ is randomly initialized to mostly just play Paper. $P_2$ will initially win most of its games, and $P_1$ will then learn to just play Scissors very often. Once $P_1$ has learned that, $P_2$ will start learning to play Rock very often. Once that is done, $P_1$ will start learning to play Paper very often. Both agents will just keep going in circles like that, always learning only to counter the most recent behaviour of the other player. If we instead introduce more variety in training partners by sampling from an ensemble of multiple learned policies, we will be more likely to converge to the optimal strategy of selecting actions uniformly at random; that's the only strategy that is likely to perform well against an ensemble of various policies. (2) why would the individual agents have more than one possible (sub) policy - shouldn't there be a single optimum policy for each agent? Ultimately we'll often want to converge to a single, optimal policy for every agent, yes. But normally we don't have that yet... that's why we're doing Reinforcement Learning in the first place! We don't know what an optimal (or even just a good) policy looks like, we have to learn that first. During that learning process, if we want to (which we do, based on the reasoning in my answer to your previous question above), we can easily just learn an ensemble of different policies, rather than learning a single policy. This can, for example, be done simply by training each sub-policy on a different subset of the experience that we collect.
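To make the Rock-Paper-Scissors intuition concrete, here is a toy R simulation (my own illustration, not from the MADDPG paper). Best-responding only to the opponent's latest behaviour cycles forever, whereas best-responding to the empirical mix of all past behaviour (a crude stand-in for sampling from an ensemble of old policies) drifts towards the uniform optimum.

## A[i, j] is the row player's payoff (1 = Rock, 2 = Paper, 3 = Scissors)
A <- matrix(c(0, -1,  1,
              1,  0, -1,
             -1,  1,  0), nrow = 3, byrow = TRUE)
best_response <- function(q) which.max(A %*% q)

## Countering only the opponent's most recent action: the agents chase
## each other around the cycle and never settle.
a1 <- 1; a2 <- 2
play_log <- matrix(NA, 12, 2)
for (t in 1:12) {
  new_a1 <- best_response(tabulate(a2, 3))
  new_a2 <- best_response(tabulate(a1, 3))
  a1 <- new_a1; a2 <- new_a2
  play_log[t, ] <- c(a1, a2)
}
play_log   # periodic sequence of actions

## Countering the empirical mix of ALL past opponent actions instead:
## the empirical frequencies approach the uniform 1/3, 1/3, 1/3 strategy.
hist1 <- 1; hist2 <- 2
for (t in 1:500) {
  hist1 <- c(hist1, best_response(tabulate(hist2, 3) / length(hist2)))
  hist2 <- c(hist2, best_response(tabulate(hist1, 3) / length(hist1)))
}
round(tabulate(hist1, 3) / length(hist1), 2)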
49,058
State-of-the-art algorithms for the training of neural networks with GRU or LSTM units
This article is a good place to start. "Fundamentals of Recurrent Neural Network (RNN) and Long Short-Term Memory (LSTM) Network" by Alex Sherstinsky Because of their effectiveness in broad practical applications, LSTM networks have received a wealth of coverage in scientific journals, technical blogs, and implementation guides. However, in most articles, the inference formulas for the LSTM network and its parent, RNN, are stated axiomatically, while the training formulas are omitted altogether. In addition, the technique of "unrolling" an RNN is routinely presented without justification throughout the literature. The goal of this paper is to explain the essential RNN and LSTM fundamentals in a single document. Drawing from concepts in signal processing, we formally derive the canonical RNN formulation from differential equations. We then propose and prove a precise statement, which yields the RNN unrolling technique. We also review the difficulties with training the standard RNN and address them by transforming the RNN into the "Vanilla LSTM" network through a series of logical arguments. We provide all equations pertaining to the LSTM system together with detailed descriptions of its constituent entities. Albeit unconventional, our choice of notation and the method for presenting the LSTM system emphasizes ease of understanding. As part of the analysis, we identify new opportunities to enrich the LSTM system and incorporate these extensions into the Vanilla LSTM network, producing the most general LSTM variant to date. The target reader has already been exposed to RNNs and LSTM networks through numerous available resources and is open to an alternative pedagogical approach. A Machine Learning practitioner seeking guidance for implementing our new augmented LSTM model in software for experimentation and research will find the insights and derivations in this tutorial valuable as well. This is a dense document with all of the equations your heart might desire. It would be difficult to reproduce all of the relevant materials here. Another presentation can be found in "A Gentle Tutorial of Recurrent Neural Network with Error Backpropagation" by Gang Chen. We describe recurrent neural networks (RNNs), which have attracted great attention on sequential tasks, such as handwriting recognition, speech recognition and image to text. However, compared to general feedforward neural networks, RNNs have feedback loops, which makes it a little hard to understand the backpropagation step. Thus, we focus on basics, especially the error backpropagation to compute gradients with respect to model parameters. Further, we go into detail on how error backpropagation algorithm is applied on long short-term memory (LSTM) by unfolding the memory unit. Also, if you're unfamiliar with backpropagation, we have a number of threads on the topic. Regarding GRUs, I'm not aware of a similar paper. The promise of GRUs was supposedly that GRUs would provide comparable performance to LSTMs with a lower parameter count and fewer computations; results are mixed. For a comparison of LSTMs and GRUs, see Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. "Empirical Evaluation of Gated Recurrent Neural Networks on Sequence Modeling."
49,059
Difference between independence and stationarity tests in time series
As a preliminary matter, it is worth noting that "independence" is a very vague condition unless it comes with a clear specification of what is independent from what. Conceptually, independence is a much broader concept, whereas stationarity of a time-series is a particular condition on the series that can be framed as a particular kind of independence. To answer your question I will show you how you can frame the condition of strict stationarity as an independence condition, and then I will discuss the intuition behind this condition. Stationarity of a time-series can be framed as a type of independence: You have stated the definition of (strong) stationarity in your question, but I will reiterate it in my own notation: Let $\boldsymbol{X} = \{ X_t | t \in \mathbb{Z} \}$ be a stochastic process. This process is said to be strongly stationary if for all time indices $t_1,...,t_k \in \mathbb{Z} \text{ }$ and all series values $x_{t_1}, ..., x_{t_k}$ we have: $$\mathbb{P}(X_{t_1} \leqslant x_{t_1}, ..., X_{t_k} \leqslant x_{t_k}) = \mathbb{P}(X_{t_1+s} \leqslant x_{t_1}, ..., X_{t_k+s} \leqslant x_{t_k}) \quad \quad \text{for all }s \in \mathbb{Z}.$$ For any integer random variable $S$ we can define the shifted process $\boldsymbol{X}^S = \{ X_{t-S} | t \in \mathbb{Z} \}$, which shifts the stochastic process $\boldsymbol{X}$ forwards by $S$ time units. Using this randomly shifted time-series, we can now frame the requirement of strong stationarity in terms of an independence condition. Theorem: The process $\boldsymbol{X}$ is strongly stationary if and only if, for all integer random variables $S \text{ } \bot \text{ } \boldsymbol{X}$ we have $S \text{ } \bot \text{ } \boldsymbol{X}^S$. Proof: To show equivalence of the conditions we will first show that strong stationarity implies the independence condition ($\implies$) and then we will show that strong stationarity is implied by the independence condition ($\impliedby$). ($\implies$) Assume that strong stationarity holds and let $S \text{ } \bot \text{ } \boldsymbol{X}$ be an arbitrary integer random variable that is independent of the original process. Then for all time indices $t_1,...,t_k \in \mathbb{Z} \text{ }$ and all series values $x_{t_1},...,x_{t_k}$ we have: $$\begin{equation} \begin{aligned} \mathbb{P}(X^S_{t_1} \leqslant x_{t_1}, ..., X^S_{t_k} \leqslant x_{t_k} | S=s) &= \mathbb{P}(X_{t_1-s} \leqslant x_{t_1}, ..., X_{t_k-s} \leqslant x_{t_k} | S=s) \\[6pt] &= \mathbb{P}(X_{t_1-s} \leqslant x_{t_1}, ..., X_{t_k-s} \leqslant x_{t_k}) \\[6pt] &= \mathbb{P}(X_{t_1} \leqslant x_{t_1}, ..., X_{t_k} \leqslant x_{t_k}). \\[6pt] \end{aligned} \end{equation}$$ Since the right-hand-side of this equation does not depend on $s$, we have $S \text{ } \bot \text{ } \boldsymbol{X}^S$. ($\impliedby$) Let $S \text{ } \bot \text{ } \boldsymbol{X}$ be an integer random variable that is independent of the original process and has support on the whole set of integers (i.e., $\mathbb{P}(S=s)>0$ for all $s \in \mathbb{Z}$). Assume that the independence condition $S \text{ } \bot \text{ } \boldsymbol{X}^S$ holds.
Then for all time indices $t_1,...,t_k \in \mathbb{Z} \text{ }$ and all series values $x_{t_1},...,x_{t_k}$ we have: $$\begin{equation} \begin{aligned} \mathbb{P}(X_{t_1} \leqslant x_{t_1}, ..., X_{t_k} \leqslant x_{t_k}) &= \mathbb{P}(X^S_{t_1+S} \leqslant x_{t_1}, ..., X^S_{t_k+S} \leqslant x_{t_k}) \\[6pt] &= \mathbb{P}(X^S_{t_1+S} \leqslant x_{t_1}, ..., X^S_{t_k+S} \leqslant x_{t_k} | S=s) \\[6pt] &= \mathbb{P}(X^S_{t_1+s} \leqslant x_{t_1}, ..., X^S_{t_k+s} \leqslant x_{t_k} | S=s) \\[6pt] &= \mathbb{P}(X^S_{t_1+s} \leqslant x_{t_1}, ..., X^S_{t_k+s} \leqslant x_{t_k} | S=0) \\[6pt] &= \mathbb{P}(X_{t_1+s} \leqslant x_{t_1}, ..., X_{t_k+s} \leqslant x_{t_k}). \\[6pt] \end{aligned} \end{equation}$$ (The step from the first line to the second, and the final step, use the independence $S \text{ } \bot \text{ } \boldsymbol{X}$, since the events involved depend only on $\boldsymbol{X}$; the step from the third line to the fourth uses the assumed independence $S \text{ } \bot \text{ } \boldsymbol{X}^S$, since once the indices are fixed at $t_i+s$ the event depends only on $\boldsymbol{X}^S$.) Since this equation holds for all $s \in \mathbb{Z}$ we have established the strong stationarity of the original process. $\blacksquare$ From this theorem (which I just made up, but I imagine there is probably something like it in books somewhere) we can see that the condition of strict stationarity can be framed as an independence condition. If you have a stochastic process $\boldsymbol{X}$ and a random time-shift $S$ that is independent of the process, then stationarity occurs when the shifted process is independent of the time-shift variable itself. In other words, the joint distribution of the values in the stochastic process is not affected by knowledge of how much the process was shifted. Your specific questions: Should I test for stationarity or independence? Independence of what? Stationarity is a type of independence, but if you have some other type of independence in mind, you will need to specify what that is. Whether you should test for particular types of independence depends on your interests in the problem, but it is generally useful to test for stationarity, since stationary time-series models have a number of well-known model forms that might be useful to you. What is the difference between these two concepts in this context? As you can see from the above, stationarity is a type of independence. The general concept of independence is broader than this, and encompasses any possible assertion of independence of two or more random variables. In the context of time-series analysis, it is common to deal with models that are stationary, or models that have time-based trends (either trend or drift terms) with an underlying stationary error series. In this case it is common to test whether trend or drift terms are present in the model. What tests should I use in this situation? All you have told us about your data is that it has a time variable, and you want to know if this variable is important. This isn't much to go on, but it sounds like you might want to formulate a time-series model for your data, and test to see if there is any trend or drift term that would lead to a general systematic change in values over time. This is effectively a test of stationarity against an alternative with a trend or drift in the model. To give more information we would need to know more about your data. (Perhaps in a new question?)
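As a practical illustration of that last point, a few standard options in R (my own suggestion, not part of the original answer; x stands for your series):

library(tseries)

adf.test(x)                    # augmented Dickey-Fuller: H0 = unit root (non-stationary)
kpss.test(x, null = "Level")   # KPSS: H0 = level-stationary
kpss.test(x, null = "Trend")   # KPSS: H0 = stationary around a deterministic trend

## Or test for a deterministic time trend with a simple regression, keeping in
## mind that autocorrelated errors invalidate the usual p-values.
t_idx <- seq_along(x)
summary(lm(x ~ t_idx))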
49,060
Difference between independence and stationarity tests in time series
To the best of my understanding, the concepts mean the following: Testing for independence would be useful to see whether there is any relationship between time and your variable of choice at all. Looking at the definition of stationarity, it does not say anything about independence. The most important takeaway is that this definition results in a number of properties (mean, variance) that stay the same when the series is shifted over time. These properties do not imply that your variable is independent of time. Given your description, I would say you should be looking into testing for independence. Not sure which test is the best fit.
49,061
Is there an alternative to categorical cross-entropy with a notion of "class distance"?
The earth mover's distance (EMD) provides a way to do this. When computed between probability distributions, the EMD is equivalent to the 1st Wasserstein distance. Intuitively, each distribution can be imagined as a pile of dirt, consisting of a certain amount of dirt at each location. A pile can be transformed by moving the dirt from location to location. Work is measured as the amount of dirt moved times the distance moved. The EMD is defined as the minimum amount of work needed to transform one pile to match the other. In your problem, there are multiple classes, each corresponding to one of the 'discretized levels'. Distances between classes are the distances between the corresponding levels. The classifier outputs a predicted probability that the input is a member of each class. In the dirt pile analogy, each class corresponds to a location, and the predicted probability defines the amount of dirt. For each point in the training set, you have a target class. This corresponds to a probability distribution that takes the value one for the target class and zero for all others, i.e. all dirt is piled up at a single location. In general, computing the EMD requires solving an optimization problem, where we search over possible ways of transforming the dirt piles. But, the EMD has a convenient, closed form expression in your case, because there's only one transformation that makes sense: directly move all the dirt from wherever it was originally to a single target location. Suppose there are $l$ classes (represented as integers from 1 to $l$), and let $D_{ij}$ denote the distance between classes $i$ and $j$. For a given data point with target class $c$, let $p_i$ denote the classifier's predicted probability that the class is $i$. The EMD is: $$\text{EMD}(p, c) = \sum_{i \ne c} p_i D_{ic}$$ Related references: Levina and Bickel (2001). The Earth Mover's Distance is the Mallows Distance: Some Insights from Statistics. Frogner et al. (2015). Learning with a Wasserstein Loss. Hou et al. (2017). Squared earth movers distance loss for training deep neural networks on ordered classes.
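To make the closed form concrete, here is a small R illustration (my own; the level values and probability vectors are made up). Two predictions with the same probability on the target class have the same cross-entropy, but very different EMD depending on where the remaining mass sits:

emd_loss <- function(probs, target, class_levels) {
  D <- abs(outer(class_levels, class_levels, "-"))  # pairwise distances between classes
  sum(probs * D[, target])                          # sum over i of p_i * D_{i, target}
}

class_levels <- c(0, 1, 2, 5, 10)       # hypothetical discretized levels
p_near <- c(0.25, 0.5, 0.25, 0, 0)      # remaining mass on neighbouring levels
p_far  <- c(0,    0.5, 0,    0, 0.5)    # remaining mass on a distant level

emd_loss(p_near, target = 2, class_levels)   # 0.5
emd_loss(p_far,  target = 2, class_levels)   # 4.5, penalised for being far away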
49,062
Geometric interpretation of mathematical expectation of a random variable
The mathematical expectation is the x-coordinate of the centre of gravity of the probability mass: for a continuous variable, the region under the density curve; for a discrete variable, the system of point masses placed at the possible values. The picture above is borrowed from Wikipedia.
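A tiny numerical illustration (my own) for the discrete case: the expectation is the balance point of the point masses, in the sense that the probability-weighted deviations about it cancel.

x <- c(1, 2, 3, 10)
p <- c(0.4, 0.3, 0.2, 0.1)
ex <- sum(x * p)        # expectation = x-coordinate of the centre of gravity
ex                      # 2.6
sum(p * (x - ex))       # 0: the "torques" about the balance point cancel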
49,063
Does Intention-to-treat apply to the cases that should have been excluded but not able to do so during recruitment?
I guess yours is a randomized trial. If it is not, then the whole intention-to-treat (ITT) vs as-treated (AT) vs per-protocol (PP) Mexican standoff is meaningless (e.g. McCoy, 2017). Accordingly, if they were not randomized, you could exclude them and still consider the corresponding analysis as an ITT one. Otherwise, if they were unluckily randomized, then you should include them in the ITT analysis. You can always skip them in the PP analysis. The bottom line is indeed that if you exclude them and call the corresponding analysis ITT, you will most likely find someone in a journal or an organization who will recommend you not to, and any discrepancy would undermine the whole trial.
49,064
Does Intention-to-treat apply to the cases that should have been excluded but not able to do so during recruitment?
This is a very interesting question and is unlikely to have a single right answer. I would argue that if they should never have been included in the trial in the first place then they should be excluded even post-randomisation, and this would not affect the intention-to-treat analysis. The point of the trial is to be able to generalise to the population from which the trial participants were drawn, and including people in the trial who do not, in fact, come from that population is going to disturb that generalisation. Consider some extreme cases. A patient is included who did not in fact have the condition being treated. An adult is included in a paediatric trial. A woman is included in a trial of ante-natal care who turns out not to have been pregnant at all. In the case outlined in the question I would be happy to see them excluded.
49,065
Ridge regression to minimize RMSE instead of MSE
minimizing $$ \left\| X \vec{c} - \vec{y} \right\|_2^2 + \left\| \Gamma \vec{c} \right\|_2^2 $$ and minimizing $$ \sqrt{\left\| X \vec{c} - \vec{y} \right\|_2^2} + \left\| \Gamma \vec{c} \right\|_2^2 $$ do not directly relate to minimizing ${\left\| X \vec{c} - \vec{y} \right\|_2^2}$ or $\sqrt{\left\| X \vec{c} - \vec{y} \right\|_2^2}$ under the constraint $\left\|\vec{c}\right\|_2^2 < t$. There will need to be a conversion between $t$ and $\Gamma$ which will be different for the two different cost functions. Thus the minimization of MSE and RMSE with the same penalty term defined by $\Gamma$ will relate to a constrained minimization with different constraints $t$. Note that for every solution $\vec{c}$ to minimizing the MSE with penalty term $\Gamma_1$ there will be a penalty term $\Gamma_2$ that results in the same solution $\vec{c}$ when minimizing the penalized RMSE. So for many practical purposes you can use any method/software that solves the penalized MSE problem; you only need to use a different cost function when, for instance, performing cross-validation to select the ideal $\Gamma$.
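A small numerical sketch of that last point (not part of the original answer; the data are simulated): equating the first-order conditions of the two objectives shows that a ridge solution for the penalty $\lambda_1$ is also the minimiser of the RMSE-penalised objective once the penalty is rescaled to $\lambda_2 = \lambda_1 / (2\sqrt{\mathrm{SSE}})$, where SSE is evaluated at the ridge solution.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.3, size=50)

lam1 = 2.0                                                       # penalty for the MSE objective
c_ridge = np.linalg.solve(X.T @ X + lam1 * np.eye(3), X.T @ y)   # closed-form ridge solution

# Equating first-order conditions: lam1 = 2 * lam2 * sqrt(SSE at the ridge solution)
sse = np.sum((X @ c_ridge - y) ** 2)
lam2 = lam1 / (2.0 * np.sqrt(sse))

rmse_objective = lambda c: np.sqrt(np.sum((X @ c - y) ** 2)) + lam2 * np.sum(c ** 2)
c_rmse = minimize(rmse_objective, np.zeros(3)).x

print(c_ridge)   # the two coefficient vectors agree (up to optimiser tolerance)
print(c_rmse)
```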
49,066
Linear regression with $l_0$ regularization
First, note that NP-hardness is a property of a problem, rather than a specific algorithm--it puts bounds on the performance of any algorithm. The proof works by establishing relationships between problems to prove membership in complexity classes. Background A complexity class is a set of computational problems defined by 1) a computational resource of interest (e.g. time, memory, etc.), 2) a particular model of computation (e.g. a Turing machine), and 3) a bound describing how the resources needed to compute a solution scale with the problem size. NP is the set of all decision problems where, if the answer is "yes", there exists a proof that can be verified in polynomial time by a deterministic Turing machine. Informally, this means it's possible to efficiently check the answer, even if it's not possible to efficiently produce the answer in the first place. A problem $q$ is NP-complete if it's in NP, and every problem $r$ in NP can be reduced to $q$ in polynomial time. This means that there exists an efficient algorithm for transforming $r$ into $q$. So, if an efficient algorithm for solving $q$ exists, it can also be used to solve $r$ efficiently. Informally, if $a$ can be reduced to $b$, then $b$ is at least as difficult to solve as $a$. So, NP-complete problems are the most difficult problems in NP. A problem $q$ is NP-hard if every NP-complete problem can be reduced to it in polynomial time. However, $q$ need not necessarily be in NP. Informally, NP-hard problems are at least as difficult as the most difficult problems in NP. Sparse regression The NP-hardness of sparse regression is proved in this paper: Natarajan (1995). Sparse approximate solutions to linear systems. They call the sparse regression problem SAS (for sparse approximate solution). The proof of NP-hardness works by reducing a problem called "exact cover with 3-sets" (a.k.a. X3C) to SAS. They explicitly show how to transform any X3C problem into an SAS problem. X3C is known to be NP-complete, which implies that SAS is NP-hard.
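Purely as an illustration of where the combinatorial difficulty comes from (this is not Natarajan's reduction), an exact solver for the $l_0$-penalized problem has to enumerate candidate supports, of which there are $2^p$; a minimal brute-force sketch:

```python
import itertools
import numpy as np

def best_subset(X, y, lam):
    """Minimise ||X c - y||^2 + lam * ||c||_0 by enumerating every support (2^p of them)."""
    n, p = X.shape
    best_cost, best_c = np.sum(y ** 2), np.zeros(p)          # start from the empty support
    for k in range(1, p + 1):
        for S in itertools.combinations(range(p), k):
            cols = list(S)
            c_S, *_ = np.linalg.lstsq(X[:, cols], y, rcond=None)
            cost = np.sum((X[:, cols] @ c_S - y) ** 2) + lam * k
            if cost < best_cost:
                best_cost, best_c = cost, np.zeros(p)
                best_c[cols] = c_S
    return best_c

# Tiny made-up example with p = 6 predictors, only two of them truly active
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 6))
y = 3 * X[:, 1] - 2 * X[:, 4] + rng.normal(scale=0.1, size=40)
print(best_subset(X, y, lam=1.0))   # typically nonzero only in positions 1 and 4
```

The loop over supports is what blows up exponentially in $p$; the NP-hardness result says that, unless P = NP, no algorithm can avoid this worst-case behaviour in general.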
49,067
Similarity LAD and quantile regression
Assume we have the following regression model: $\mathbf{y} = f(\mathbf{x},\mathbf{\beta}) + \mathbf{\epsilon}$ The $\beta$ estimate of LAD regression is given by: $ \hat{\beta}_{LAD} = \text{argmin}_{b} \sum_{i=1}^n |y_i - f(\mathbf{b},x_i)|$ The $\beta$ estimate of Quantile regression is given by: $ \hat{\beta}_{Quantile} = \text{argmin}_{b} \sum_{i:y_{i} \geq f(\mathbf{b},x_i)} q|y_i -f(\mathbf{b},x_i)| + \sum_{i:y_{i} < f(\mathbf{b},x_i)} (1-q)|y_i -f(\mathbf{b},x_i)| $ If $q = 0.5$ (which is the case if we want to estimate the conditional median), this simplifies to: $ \hat{\beta}_{Quantile, q =0.5} = \text{argmin}_{b} \sum_{i=1}^n 0.5|y_i - f(\mathbf{b},x_i)|$, which is equivalent to $\hat{\beta}_{LAD}$.
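A quick numerical check of this equivalence (not part of the original answer, on made-up data): minimising the absolute loss and the pinball loss with $q = 0.5$ returns the same coefficients, up to optimiser tolerance.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, size=200)
y = 2.0 + 0.7 * x + rng.standard_t(df=3, size=200)   # heavy-tailed noise
X = np.column_stack([np.ones_like(x), x])

def lad_loss(b):
    return np.sum(np.abs(y - X @ b))

def pinball_loss(b, q=0.5):
    r = y - X @ b
    return np.sum(np.where(r >= 0, q * r, (q - 1) * r))   # q|r| above the fit, (1-q)|r| below

b_lad = minimize(lad_loss, x0=[0.0, 0.0], method="Nelder-Mead").x
b_q50 = minimize(pinball_loss, x0=[0.0, 0.0], method="Nelder-Mead").x
print(b_lad)    # both recover (approximately) the same intercept and slope
print(b_q50)
```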
49,068
What influences fluctuations in validation accuracy?
First of all, does your $x$ axis represent training steps or epochs? My guess would be epochs (the keras default), because of the stability in the training accuracy. If that is not the case, a low batch size would be the prime suspect for fluctuations, because the accuracy would depend on which examples the model sees in each batch. However, that should affect both the training and validation accuracies. Another parameter that usually affects fluctuations is a high learning rate. The weights change considerably in each epoch, resulting in the model changing its predictions on many examples. Normally this should affect both the training and validation sets, but you also seem to be suffering from a bit of overfitting. If this is the case, your model has learned its training set by heart (I can't see this to confirm it, but I suspect your training accuracy is close to 1), but struggles a bit on the validation set. This, along with a high learning rate, would result in the training and validation figures above. My suggestions to counter this would be: Decrease the learning rate. Some ideas would be a gradual decrease, a scheduled decrease, or a reduction on a plateau of a training metric. I'd recommend the third (it can be done easily through a keras callback; see the sketch below). Regularize the model. This should reduce overfitting and also improve the performance of the model. However, this might increase the fluctuations in the training set as well.
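A minimal sketch of the keras-callback route (the third suggestion); `model`, `x_train`, `y_train`, `x_val` and `y_val` are placeholders for your own objects, and the factor/patience values are just illustrative:

```python
from tensorflow.keras.callbacks import ReduceLROnPlateau

reduce_lr = ReduceLROnPlateau(
    monitor="val_loss",   # watch the validation loss
    factor=0.5,           # halve the learning rate ...
    patience=3,           # ... after 3 epochs without improvement
    min_lr=1e-6,
)

history = model.fit(
    x_train, y_train,
    validation_data=(x_val, y_val),
    epochs=100,
    callbacks=[reduce_lr],
)
```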
49,069
Special values in continuous numerical variables/features in Random Forest
My instinct is to split features for which this is the case into two new features, one with the numerical values, and NA values where there was previously a special value, and another feature with NAs for the cases where the original value was numeric, and strings for the special values, coding this as a "factor" variable (I am using R). This is exactly the correct way to go about this. Do this (a small illustration follows below). The obvious problem with this approach is that Random Forest won't "know" that these two features are linked, and will not necessarily include them together when building trees. This should not matter. If the difference is important in your classification task, then your RF should learn it automatically. RFs are pretty good at learning interactions. And if it isn't important, then it isn't important. Maybe this isn't such a big problem given that the trees are intentionally "weak learners" that don't try to do everything alone in one tree. That one, too. If you have any influence at all on the "upstream" data acquisition, try to help people understand that encoding special values as "obviously" invalid numerical values is not good practice. Too much can go wrong when you reverse this, and there is really no good reason to do it unless you have a very specific use case.
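The question is about R, but purely as an illustration of the split itself, here is the same idea in Python/pandas; the column name `reading` and the sentinel `-999` are made up:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"reading": [3.2, -999, 7.1, -999, 5.5]})
is_special = df["reading"] == -999

# Numeric part: NaN wherever the sentinel value was
df["reading_num"] = df["reading"].mask(is_special)

# Categorical part: a factor-like column that is only set for the special values
df["reading_special"] = pd.Series("sensor_error", index=df.index).where(is_special)
df["reading_special"] = df["reading_special"].astype("category")

print(df)
```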
49,070
Find $\lim_{n \downarrow 1} t_{n-1, \alpha/2} / \sqrt{n}$ and prove the limit
Noting that the division by $\sqrt{n}$ does not change the result (because it converges to $1$) and writing $\nu=n-1$ and $2\gamma = 1-\alpha,$ the problem is to analyze the behavior of the function $x_\gamma(\nu)$ defined implicitly by $$\gamma = \frac{\Gamma(\nu/2+1/2)}{\Gamma(1/2)\Gamma(\nu/2)}\int_0^{x_\gamma(\nu)}\left(1 + \frac{r^2}{\nu}\right)^{-1/2 - \nu/2} \frac{dr}{\sqrt{\nu}}.$$ The change of variable $x \sqrt{\nu} = r$ yields $$\gamma = \frac{\Gamma(\nu/2+1/2)}{\Gamma(1/2)\Gamma(\nu/2)}\int_0^{x_\gamma(\nu)/\sqrt{\nu}}\left(1 + x^2\right)^{-1/2 - \nu/2} dx.$$ Because for all $\nu \gt 0$ the integrand is bounded above by $(1+x^2)^{-1/2},$ a lower bound for $x_\gamma(\nu)$ is given by the solution $t$ to the equation $$\eqalign{ \gamma &= \frac{\Gamma(\nu/2+1/2)}{\Gamma(1/2)\Gamma(\nu/2)}\int_0^{t/\sqrt{\nu}}\left(1 + x^2\right)^{-1/2} dx \\ &= \frac{\Gamma(\nu/2+1/2)}{\Gamma(1/2)\Gamma(\nu/2)} \operatorname{Asinh}\left(\frac{t}{\sqrt{\nu}}\right), }$$ which is readily solved to produce $$x_\gamma(\nu) \ge t = \sinh\left(\frac{\gamma\,\Gamma(1/2)\Gamma(\nu/2)}{\Gamma(\nu/2+1/2)}\right)\sqrt{\nu}.$$ The divergence of $x_\gamma(\nu)$ as $\nu\to 0$ (from above) is immediate, because $\Gamma(\nu/2+1/2)\to\Gamma(1/2),$ $\Gamma(\nu/2) \approx 2/\nu$ for small positive $\nu,$ and $\sinh(C) \approx \exp(C)/2$ for large positive $C.$ (The rate of divergence is substantially greater than suggested by this lower bound, because it is fairly crude.)
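A quick numerical check of the divergence and of the (crude) lower bound, using $\alpha = 0.05$ so that $\gamma = 0.475$; this is not part of the derivation above, just a sanity check with scipy:

```python
import numpy as np
from scipy.stats import t
from scipy.special import gamma as G

alpha = 0.05
g = (1 - alpha) / 2          # gamma = 0.475
for nu in [1.0, 0.5, 0.2, 0.1]:
    quantile = t.ppf(1 - alpha / 2, df=nu)                    # t_{n-1, alpha/2} with n = nu + 1
    bound = np.sinh(g * G(0.5) * G(nu / 2) / G(nu / 2 + 0.5)) * np.sqrt(nu)
    print(f"nu = {nu:4}: quantile = {quantile:12.4g}, lower bound = {bound:12.4g}")
# Dividing by sqrt(n) = sqrt(nu + 1) -> 1 does not affect the divergence.
```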
49,071
Can a GAN be used for tabular/vector data augmentation?
GANs are generative models, and data augmentation is one of their common applications. If you have 1D signals you could use an MLP or 1D convolutions. Hope those links will help: http://www.rricard.me/machine/learning/generative/adversarial/networks/keras/tensorflow/2017/04/05/gans-part2.html https://github.com/timzhang642/GAN-1D-Gaussian-Distribution
49,072
Can a GAN be used for tabular/vector data augmentation?
Of course, you can generate some new data as data augmentation. Take a look at this review of two recent papers: https://towardsdatascience.com/review-of-gans-for-tabular-data-a30a2199342 with code at https://github.com/Diyago/GAN-for-tabular-data
49,073
Sampling from characteristic/moment generating function
Some time ago I worked on something similar. If you are still interested in an implementation of the Devroye (1981) method you can take a look here: https://www.kent.ac.uk/smsas/personal/msr/webfiles/rlaptrans/rdevroye.r This is the code by Martin Ridout. Prof. Ridout also wrote a really interesting paper on the topic (I advise you to take a look at it: https://www.kent.ac.uk/smsas/personal/msr/webfiles/rlaptrans/SimRandom4.pdf). Finally, there is a more recent paper by SG Walker (https://link.springer.com/article/10.1007/s11222-016-9631-8). Hope my answer helps.
49,074
Survey: dismiss an answer based on quality of other answers?
The way you describe it, this participant gave not one, but multiple problematic answers. In addition, the one answer that is your focus here also needs interpretation as to which way it leans. It is quite common to run survey questions past multiple independent scorers and exclude those answers that are judged to be unclear by, e.g., a majority. If you can do this (potentially not only for this one participant, but also for the others), it would be a very defensible way forward. Alternatively, you could report your results with and without the problematic participant. It sounds like you should treat the situation without him as the "default", as in Results indicated that foo. If the problematic participant X's answers were included, they changed to bar. (See above for a description of the problems with X's answers.)
49,075
Normal Distribution With Many Zero Values
I think you might be better off treating this as a mixture of two distributions rather than trying to apply the standard normal-theory tools, so instead I'm going to outline a bit about the zero inflated Gamma distribution, including computing its first two moments, to give you a sense of how this goes. You could pretty easily swap the Gamma out for a different continuous distribution if you'd like (e.g. a Beta distribution scaled to be in $[0,100]$). Happy to add updates later if this is not helpful. Let $Z_1,\dots,Z_n\stackrel{\text{iid}}\sim\text{Bern}(\theta)$ and consider $X_1,\dots,X_n$ where $$ X_i \vert Z_i \stackrel{\text{iid}}\sim \begin{cases}\Gamma(\alpha, \beta) & Z_i = 1 \\ 0 & Z_i = 0\end{cases} $$ so each $X_i$ is a mixture of a point mass at $0$ with probability $1-\theta$ and a $\Gamma(\alpha,\beta)$ with probability $\theta$. We interpret this as $Z_i$ being a hidden latent variable that determines whether or not the student studies, and then $X_i$ is the observed value. This is a bit formal but I'm going to mention it for the sake of completeness. $X_i$ does not have a pdf in the usual sense because it's neither discrete nor continuous, but if we consider the measure $\nu = \lambda + \delta_0$, i.e. the Lebesgue measure plus a point mass at $0$, then $\nu(A) = 0 \implies P_X(A) = 0$ for any measurable $A$ so we can get a pdf $f_X := \frac{\text dP_X}{\text d\nu}$ w.r.t. $\nu$. But what does this pdf look like? We can work out the CDF $F$ using some rules of conditional probability. $$ F(x) = P(X\leq x \cap Z = 0) + P(X\leq x \cap Z = 1) \\ = P(X\leq x | Z = 0)P(Z=0) + P(X\leq x | Z=1)P(Z=1) \\ = 1 \cdot (1 - \theta) + F_\Gamma(x; \alpha, \beta) \theta \\ = 1 - \theta + \theta F_\Gamma(x; \alpha, \beta) $$ where $F_\Gamma$ denotes the CDF of an actual Gamma distribution. So we want a function $f_X$ such that $$ F(x) = \int_{[0, x]} f_X\,\text d\nu. $$ Note that $$ \int_{[0, x]} f_X\,\text d\nu = \int_{\{0\}} f_X\,\text d\delta_0 + \int_{(0, x)} f_X\,\text d\lambda \\ = f_X(0) + \int_{(0, x)} f_X\,\text d\lambda $$ so I can take $$ f_X(x) = (1 - \theta)\,\mathbf{1}\{x = 0\} + \theta f_\Gamma(x; \alpha, \beta)\,\mathbf{1}\{x > 0\}. $$ Let's check that this is a valid pdf: $$ \int_{[0,\infty)} f_X\,\text d\nu = (1 - \theta) + \theta \int_0^\infty f_\Gamma \,\text d\lambda = 1 $$ so this is indeed a valid pdf (w.r.t. $\nu$). Now I'll work out the first two moments of $X_i$. $$ E(X_i) = \int_{[0,\infty)} x f_X(x)\,\text d\nu(x) \\ = 0 \cdot \int_{\{0\}} f_X\,\text d\delta_0 + \int_{(0,\infty)} x f_X(x)\,\text d\lambda(x) \\ = 0 + \theta \int_0^\infty x f_\Gamma(x) \,\text d\lambda(x) = \frac{\theta\alpha}\beta := \mu < \infty. $$ Next $$ E(X_i^2) = \int_{[0,\infty)} x^2 f_X(x)\,\text d\nu(x) \\ = 0 + \theta \int_0^\infty x^2 f_\Gamma(x)\,\text d\lambda(x) \\ = \frac{\theta\alpha(1 + \alpha)}{\beta^2}. $$ This means $$ \sigma^2 := E(X_i^2) - \mu^2 < \infty. $$ At long last I have confirmed the following facts: we have a collection of iid RVs $X_1, X_2,\dots$ with finite means and variances, so we can happily apply the standard CLT to conclude $$ \sqrt n \left(\bar X_n - \mu\right) \stackrel{\text d}\to \mathcal N(0, \sigma^2). $$ Now as for how good this is, you'll probably want to do some simulations. Also I'm not saying this is actually a good model.
I'll check my math with the following simulation: theta <- .76 a <- 5.4 b <- 1.2 n <- 1e6 set.seed(42) z <- rbinom(n, 1, theta) x <- numeric(n) x[z==1] <- rgamma(sum(z), shape=a, rate=b) hist(x, main="Zero inflated Gamma simulations") mean(x) theta * a / b # agrees mean(x^2) theta * a * (1 + a) / b^2 # agrees Also note that I'm not using a KDE to show the distribution like (it looks like) you are. Those typically aren't appropriate for distributions that have a point mass like this. Plus if you're using one that puts a mini gaussian at each data point then it's implicitly assumed that the support is all of $\mathbb R$ so you can also get positive probability on impossible areas like you did. If you choose to use this model and want to estimate the parameters, the EM algorithm is the usual way to go. In this case though there's no doubt as to which class a particular $X_i$ belongs as if $X_i = 0$ then $Z_i = 0$ almost surely. So you can do mean(x > 0) # compare with theta mu.x <- mean(x[x > 0]) s2.x <- var(x[x > 0]) (b.hat <- mu.x / s2.x) (a.hat <- mu.x^2 / s2.x) and these agree. But I have a massive sample size and $\theta$ isn't particularly close to $0$ or $1$ here so it's not impressive to be so accurate with this conditional approach.
49,076
$\mathbb{E}(\log(X_{max}/X_{min}) )$ of Weibull(alpha, 1)
Simplifying the problem: The Weibull distribution with unit shape is the exponential distribution, so your specified sampling mechanism is equivalent to $X_1, ..., X_n \sim \text{IID Exp}(\text{Scale} = \alpha)$. For all arguments $x \geqslant 0$ you have the density and distribution functions: $$f_X(x) = \frac{1}{\alpha} \cdot \exp \Big( - \frac{x}{\alpha} \Big) \quad \quad \quad \quad F_X(x) = 1-\exp \Big( - \frac{x}{\alpha} \Big) .$$ Now, write the order statistics in standard notation as $X_{(1)} \leqslant \cdots \leqslant X_{(n)}$. Since the logarithmic transformation is an increasing function, the order of the original $X$ values is preserved under the transformation. Hence, using the properties of the logarithm, and the linearity property of the expectation operator, you have: $$\mathbb{E}(\ln (X_{\max}/X_{\min})) = \mathbb{E}(\ln X_{(n)} - \ln X_{(1)}) = \mathbb{E}(\ln X_{(n)}) - \mathbb{E}(\ln X_{(1)}).$$ This means that your problem reduces to one of finding the moments of transformed order statistics. This can be solved using standard methods for dealing with order statistics. Finding the distribution of the order statistics: Using standard formulae for the distribution of the order statistics we have: $$\begin{equation} \begin{aligned} f_{X_{(1)}}(x) &= n (1-F_X(x))^{n-1} f_X(x) \\[6pt] &= \frac{n}{\alpha} \cdot \exp \Big( - \frac{nx}{\alpha} \Big), \\[10pt] f_{X_{(n)}}(x) &= n F_X(x)^{n-1} f_X(x) \\[6pt] &= \frac{n}{\alpha} \cdot \exp \Big( - \frac{x}{\alpha} \Big) \Big( 1 - \exp \Big( - \frac{x}{\alpha} \Big) \Big)^{n-1} \\[6pt] &= \frac{n}{\alpha} \cdot \exp \Big( - \frac{x}{\alpha} \Big) \sum_{k=0}^{n-1} {n-1 \choose k} (-1)^k \exp \Big( - \frac{kx}{\alpha} \Big) \\[6pt] &= \frac{n}{\alpha} \sum_{k=1}^{n} {n-1 \choose k-1} (-1)^{k-1} \exp \Big( - \frac{kx}{\alpha} \Big) \\[6pt] &= \sum_{k=1}^{n} {n \choose k} (-1)^{k-1} \cdot \frac{k}{\alpha} \cdot \exp \Big( - \frac{kx}{\alpha} \Big). \end{aligned} \end{equation}$$ Finding the moments: Using the change of variable $r = nx/\alpha$ we have: $$\begin{equation} \begin{aligned} \mathbb{E}(\ln X_{(1)}) &= \int \limits_0^\infty \ln(x) f_{X_{(1)}}(x) dx \\[6pt] &= \frac{n}{\alpha} \int \limits_0^\infty \ln(x) \exp \Big( - \frac{nx}{\alpha} \Big) dx \\[6pt] &= \int \limits_0^\infty \ln \Big( \frac{\alpha r}{n} \Big) \exp (-r) dr \\[6pt] &= \int \limits_0^\infty \ln (r) \exp (-r) dr - \ln \Big( \frac{n}{\alpha} \Big) \int \limits_0^\infty \exp (-r) dr \\[6pt] &= - \gamma + \ln \alpha - \ln n, \\[6pt] \end{aligned} \end{equation}$$ where $\gamma$ is the Euler-Mascheroni constant. Applying this same integral result we then have: $$\begin{equation} \begin{aligned} \mathbb{E}(\ln X_{(n)}) &= \int \limits_0^\infty \ln(x) f_{X_{(n)}}(x) dx \\[6pt] &= \sum_{k=1}^{n} {n \choose k} (-1)^{k-1} \cdot \frac{k}{\alpha} \int \limits_0^\infty \ln(x) \exp \Big( - \frac{kx}{\alpha} \Big) dx \\[6pt] &= \sum_{k=1}^{n} {n \choose k} (-1)^{k-1} \Big[ -\gamma + \ln \alpha - \ln k \Big] \\[6pt] &= (-\gamma + \ln \alpha) \sum_{k=1}^{n} {n \choose k} (-1)^{k-1} - \sum_{k=1}^{n} {n \choose k} (-1)^{k-1} \ln k \\[6pt] &= (-\gamma + \ln \alpha) \times 1 + \sum_{k=1}^{n} {n \choose k} (-1)^k \ln k \\[6pt] &= -\gamma + \ln \alpha + \sum_{k=1}^{n} {n \choose k} (-1)^k \ln k. 
\\[6pt] \end{aligned} \end{equation}$$ Putting this together we obtain the function: $$\begin{equation} \begin{aligned} E(n) &\equiv \mathbb{E}(\ln (X_{\max}/X_{\min})) \\[6pt] &= \mathbb{E}(\ln X_{(n)}) - \mathbb{E}(\ln X_{(1)}) \\[6pt] &= \ln n + \sum_{k=1}^{n} {n \choose k} (-1)^k \ln k \\[6pt] &= {n \choose 2} \ln 2 - {n \choose 3} \ln 3 + {n \choose 4} \ln 4 - \cdots + (-1)^n \ln n + \ln n. \\[6pt] \end{aligned} \end{equation}$$ This is a finite sum that can be easily evaluated for a given value of $n \in \mathbb{N}$. There does not appear to be any simpler form for this expression. It is interesting to note that this function does not depend on the scale parameter $\alpha$, so this parameter has no effect on the expected value of the log-ratio of the maximum-to-minimum value. This is unsurprising, in view of the fact that the log-ratio of those values is invariant to changes in scale. Plotting the function in R: This expected value function can be plotted for different input values of $n$ to get a visual sense of how the number of data points in the sample affects the expected log-ratio of the maximum-to-minimum value. To plot the function we use the following R code: #Create function to calculate expected value EXP <- function(n) { if(n%%1 != 0) { stop("Error: Input is not a positive integer") } else if(n < 1) { stop("Error: Input is not a positive integer") } else if(n == 1) { 0 } else { log(n) + sum(choose(n,2:n)*(-1)^(2:n)*log(2:n)) } }; #Plot expected value function N <- 20; NNN <- (1:N); EEE <- rep(0, N); for(n in 1:N) { EEE[n] <- EXP(n) }; DATA <- data.frame(n = NNN, Expectation = EEE); SPLINE <- as.data.frame(spline(NNN, EEE)); library(ggplot2); ggplot(data = DATA, aes(x = n, y = Expectation)) + geom_point(colour = "DarkBlue") + geom_line(data = SPLINE, aes(x = x, y = y), colour = "Blue") + scale_x_continuous(name = "Number of Data Points", labels = (1:N), breaks = (1:N)) + ggtitle("Expected value of log-ratio of maximum-to-minimum") + labs(subtitle = "(Data from an exponential distribution)") + xlab("Number of Data Points") + ylab("Expected Value"); This generates the following plot of the function:
49,077
Representation input and output nodes in neural network for $\textit{AlphaZero}$ chess?
A good place to look might be the Deep Mind paper on the topic, "Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm" by David Silver et al. The game of chess is the most widely-studied domain in the history of artificial intelligence. The strongest programs are based on a combination of sophisticated search techniques, domain-specific adaptations, and handcrafted evaluation functions that have been refined by human experts over several decades. In contrast, the AlphaGo Zero program recently achieved superhuman performance in the game of Go, by tabula rasa reinforcement learning from games of self-play. In this paper, we generalise this approach into a single AlphaZero algorithm that can achieve, tabula rasa, superhuman performance in many challenging domains. Starting from random play, and given no domain knowledge except the game rules, AlphaZero achieved within 24 hours a superhuman level of play in the games of chess and shogi (Japanese chess) as well as Go, and convincingly defeated a world-champion program in each case. From the Methods section (p. 13): A move in chess may be described in two parts: selecting the piece to move, and then selecting among the legal moves for that piece. We represent the policy $\pi(a|s)$ by a $8 \times 8 \times 73$ stack of planes encoding a probability distribution over $4,672$ possible moves. Each of the $8\times 8$ positions identifies the square from which to “pick up” a piece. The first 56 planes encode possible ‘queen moves’ for any piece: a number of squares [1..7] in which the piece will be moved, along one of eight relative compass directions {N, NE, E, SE, S, SW, W, NW}. The next 8 planes encode possible knight moves for that piece. The final 9 planes encode possible under-promotions for pawn moves or captures in two possible diagonals, to knight, bishop or rook respectively. Other pawn moves or captures from the seventh rank are promoted to a queen. To address your questions: A move in chess is defined by the square the move starts from and the square the move ends. I've calculated all possible moves which are: diagonal moves (280), straight (horizontal/vertical) moves (896), knight (L-shape) moves (336) and castle moves (2 for agent only). All of these moves define all the possible moves all pieces in check can make. I am wondering if these should also be used as the number of output nodes where each of these nodes represents one move? It seems that they represent moves in a different way than your enumeration. They start from allotting an output neuron to each piece's starting point, and then destinations for that piece. Special moves, like pawn promotion, are likewise represented as "destinations". The first sentence of this passage suggests that some "filtration" is applied to screen out illegal moves, or illogical moves (like moving a piece 0 squares), from considerations of the move's "profit". So your intuition seems to be correct: make some assessments for all moves first, and then exclude all impossible moves. Am I correct with this assumption of the neural network of AlphaZero chess? You're partially right. The above passage (p. 13) supports that the illegal moves are discarded out of hand. The predicted value for a particular move is estimated in a different manner, however. Instead of having a "value" for each "move" neuron, the "value" for a particular move is estimated by Monte Carlo tree search. The authors write (pp. 
2-3): Instead of a handcrafted evaluation function and move ordering heuristics, AlphaZero utilises a deep neural network $(\mathbf{p}, v) = f_\theta(s)$ with parameters $\theta$. This neural network takes the board position $s$ as an input and outputs a vector of move probabilities $\mathbf{p}$ with components $p_a = Pr(a|s)$ for each action $a$, and a scalar value $v$ estimating the expected outcome $z$ from position $s$, $v \approx \mathbb{E}[z|s]$. AlphaZero learns these move probabilities and value estimates entirely from self-play; these are then used to guide its search. Instead of an alpha-beta search with domain-specific enhancements, AlphaZero uses a general-purpose Monte-Carlo tree search (MCTS) algorithm. Each search consists of a series of simulated games of self-play that traverse a tree from root $s_{root}$ to leaf. Each simulation proceeds by selecting in each state $s$ a move a with low visit count, high move probability and high value (averaged over the leaf states of simulations that selected a from $s$) according to the current neural network $f_\theta$. The search returns a vector $\pi$ representing a probability distribution over moves, either proportionally or greedily with respect to the visit counts at the root state.
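As a toy illustration of the masking idea (this is not DeepMind's code, and the particular "legal" moves below are made up), one can score every entry of the $8\times 8\times 73$ policy tensor and zero out the illegal ones before normalising:

```python
import numpy as np

rng = np.random.default_rng(0)
logits = rng.normal(size=(8, 8, 73))        # one raw score per "from-square x move-type" entry
legal = np.zeros((8, 8, 73), dtype=bool)    # legality mask; in practice produced by a move generator
legal[1, 4, 0] = True                       # made-up example: a queen-move plane from (rank 1, file 4)
legal[0, 6, 56] = True                      # made-up example: a knight-move plane from (rank 0, file 6)

masked = np.where(legal, logits, -np.inf)   # illegal moves get probability exactly zero
z = masked.ravel()
z = np.exp(z - z[np.isfinite(z)].max())     # stable softmax over the 4,672 encoded moves
policy = (z / z.sum()).reshape(8, 8, 73)
print(policy.sum(), policy[legal])          # sums to 1, mass only on the legal entries
```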
Representation input and output nodes in neural network for $\textit{AlphaZero}$ chess?
A good place to look might be the Deep Mind paper on the topic, "Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm" by David Silver et al. The game of chess is th
Representation input and output nodes in neural network for $\textit{AlphaZero}$ chess? A good place to look might be the Deep Mind paper on the topic, "Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm" by David Silver et al. The game of chess is the most widely-studied domain in the history of artificial intelligence. The strongest programs are based on a combination of sophisticated search techniques, domain-specific adaptations, and handcrafted evaluation functions that have been refined by human experts over several decades. In contrast, the AlphaGo Zero program recently achieved superhuman performance in the game of Go, by tabula rasa reinforcement learning from games of self-play. In this paper, we generalise this approach into a single AlphaZero algorithm that can achieve, tabula rasa, superhuman performance in many challenging domains. Starting from random play, and given no domain knowledge except the game rules, AlphaZero achieved within 24 hours a superhuman level of play in the games of chess and shogi (Japanese chess) as well as Go, and convincingly defeated a world-champion program in each case. From the Methods section (p. 13): A move in chess may be described in two parts: selecting the piece to move, and then selecting among the legal moves for that piece. We represent the policy $\pi(a|s)$ by a $8 \times 8 \times 73$ stack of planes encoding a probability distribution over $4,672$ possible moves. Each of the $8\times 8$ positions identifies the square from which to “pick up” a piece. The first 56 planes encode possible ‘queen moves’ for any piece: a number of squares [1..7] in which the piece will be moved, along one of eight relative compass directions {N, NE, E, SE, S, SW, W, NW}. The next 8 planes encode possible knight moves for that piece. The final 9 planes encode possible under-promotions for pawn moves or captures in two possible diagonals, to knight, bishop or rook respectively. Other pawn moves or captures from the seventh rank are promoted to a queen. To address your questions: A move in chess is defined by the square the move starts from and the square the move ends. I've calculated all possible moves which are: diagonal moves (280), straight (horizontal/vertical) moves (896), knight (L-shape) moves (336) and castle moves (2 for agent only). All of these moves define all the possible moves all pieces in check can make. I am wondering if these should also be used as the number of output nodes where each of these nodes represents one move? It seems that they represent moves in a different way than your enumeration. They start from allotting an output neuron to each piece's starting point, and then destinations for that piece. Special moves, like pawn promotion, are likewise represented as "destinations". The first sentence of this passage suggests that some "filtration" is applied to screen out illegal moves, or illogical moves (like moving a piece 0 squares), from considerations of the move's "profit". So your intuition seems to be correct: make some assessments for all moves first, and then exclude all impossible moves. Am I correct with this assumption of the neural network of AlphaZero chess? You're partially right. The above passage (p. 13) supports that the illegal moves are discarded out of hand. The predicted value for a particular move is estimated in a different manner, however. Instead of having a "value" for each "move" neuron, the "value" for a particular move is estimated by Monte Carlo tree search. 
The authors write (pp. 2-3): Instead of a handcrafted evaluation function and move ordering heuristics, AlphaZero utilises a deep neural network $(\mathbf{p}, v) = f_\theta(s)$ with parameters $\theta$. This neural network takes the board position $s$ as an input and outputs a vector of move probabilities $\mathbf{p}$ with components $p_a = Pr(a|s)$ for each action $a$, and a scalar value $v$ estimating the expected outcome $z$ from position $s$, $v \approx \mathbb{E}[z|s]$. AlphaZero learns these move probabilities and value estimates entirely from self-play; these are then used to guide its search. Instead of an alpha-beta search with domain-specific enhancements, AlphaZero uses a general-purpose Monte-Carlo tree search (MCTS) algorithm. Each search consists of a series of simulated games of self-play that traverse a tree from root $s_{root}$ to leaf. Each simulation proceeds by selecting in each state $s$ a move a with low visit count, high move probability and high value (averaged over the leaf states of simulations that selected a from $s$) according to the current neural network $f_\theta$. The search returns a vector $\pi$ representing a probability distribution over moves, either proportionally or greedily with respect to the visit counts at the root state.
Representation input and output nodes in neural network for $\textit{AlphaZero}$ chess? A good place to look might be the Deep Mind paper on the topic, "Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm" by David Silver et al. The game of chess is th
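To make the $8\times 8\times 73$ policy encoding quoted above concrete, here is a minimal Python sketch. The split into 56 queen-move planes, 8 knight planes and 9 under-promotion planes follows the quoted passage, but the exact ordering of directions and planes below is an assumption made for illustration; the paper does not pin it down.

# Minimal sketch of the AlphaZero-style move encoding described above.
# The exact ordering of the 73 planes is NOT specified in the quoted passage,
# so the layout below (56 "queen move" planes, then 8 knight planes, then
# 9 under-promotion planes) and the direction order are assumptions.
QUEEN_DIRS = ["N", "NE", "E", "SE", "S", "SW", "W", "NW"]   # 8 compass directions
N_QUEEN_PLANES = 8 * 7          # direction x distance (1..7) = 56
N_KNIGHT_PLANES = 8             # 8 knight jumps
N_UNDERPROMO_PLANES = 9         # 3 piece types x 3 pawn moves (push, 2 captures)
N_PLANES = N_QUEEN_PLANES + N_KNIGHT_PLANES + N_UNDERPROMO_PLANES   # 73

assert 8 * 8 * N_PLANES == 4672   # the 4,672 possible-move logits mentioned above

def queen_move_plane(direction: str, distance: int) -> int:
    """Plane index for a 'queen move' of 1..7 squares in a compass direction."""
    assert 1 <= distance <= 7
    return QUEEN_DIRS.index(direction) * 7 + (distance - 1)

def policy_index(from_file: int, from_rank: int, plane: int) -> int:
    """Flatten (square the piece is picked up from, plane) into one of the 4,672 outputs."""
    return (from_rank * 8 + from_file) * N_PLANES + plane

# Example: a piece on e2 (file=4, rank=1) moving two squares "north".
print(policy_index(4, 1, queen_move_plane("N", 2)))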
49,078
How to efficiently calculate the PDF of a multivariate gaussian with linear algebra (python)
The least squares problem has an elegant solution using linear algebra. You are solving the system $Ax=b$, where $A$ is your design matrix (each row is $[1, x_i, \dots]$: a one for the intercept followed by that observation's predictor values), $b$ is a column of responses [z0; z1; ...] and $x$ is a vector containing the estimated parameters which you're solving for. The vector $b$ is NOT in the column space of $A$, so there is no exact solution, so you need to decompose the vector $b$ into the sum of the projection of $b$ onto the column space of the matrix $A$ and the orthogonal component $e$ given by the following: \begin{equation} \label{2} b=proj_{Col(A)}(b) + e \end{equation} Where $e$ is a vector containing errors orthogonal to the column space of $A$. Instead of solving $A x= b$, we solve the equation that best estimates $b$. \begin{equation} Ax=proj_{Col(A)}(b) \end{equation} Since $proj_{Col(A)}(b)$ (read as the projection of $b$ onto the column space of $A$) is in the column space of $A$, there will now be a solution to the system, where there wasn't one previously! To find the "best fit" solution, start by combining the previous equations: \begin{equation} A x= b - e \end{equation} Here comes the trick! Multiply each term by $A^T$, the transpose of $A$, where the columns now become rows. \begin{equation} A^T A x=A^T b -A^T e \end{equation} Since $e$ is orthogonal to the row space of $A^T$, $e$ is in the null space of $A^T$. This means the term $A^T e$ becomes the zero vector $0$. What's left is the least squares solution to $A x=b$ given by: \begin{equation} A^T A x=A^T b \end{equation} Now you want to know how to code this... I will solve the two-parameter case (a single predictor $x$, so each row of $A$ is $[1, x_i]$ and $b$ holds the responses $z_i$). Multiplying everything out we get: \begin{equation} \begin{pmatrix} n & \sum_{i=1}^{n} x_i \\ \sum_{i=1}^{n} x_i & \sum_{i=1}^{n} x_i^2 \end{pmatrix} \begin{pmatrix} \hat b_0 \\ \hat b_1 \end{pmatrix} = \begin{pmatrix} \sum_{i=1}^{n} z_i \\ \sum_{i=1}^{n} x_i z_i \end{pmatrix} \end{equation} To solve for the estimators, the matrix should be augmented and row reduced. The row reduction starts by switching row 1 and row 2. Then multiply row 1 by $-\frac{n}{\sum_{i=1}^{n} x_i}$ and add to row 2. This will result in a $0$ in the second row and first column. A total of two pivots for two rows means the matrix has full rank and $\hat b_0$ and $\hat b_1$ can be solved for.
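A minimal numpy sketch of the normal-equations recipe above, on hypothetical data with a single predictor (the variable names and simulated values are made up for illustration); np.linalg.lstsq is shown alongside as the numerically safer route:

import numpy as np

# Hypothetical data for the single-predictor case discussed above.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=50)
z = 2.0 + 0.5 * x + rng.normal(scale=0.3, size=50)   # "true" b0 = 2, b1 = 0.5

A = np.column_stack([np.ones_like(x), x])   # design matrix with an intercept column

# Normal equations A^T A x = A^T b (prefer lstsq in practice for numerical stability)
b_hat = np.linalg.solve(A.T @ A, A.T @ z)
b_lstsq, *_ = np.linalg.lstsq(A, z, rcond=None)

print(b_hat)      # approximately [2.0, 0.5]
print(b_lstsq)    # same estimates via the numerically safer route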
How to efficiently calculate the PDF of a multivariate gaussian with linear algebra (python)
Least squares optimizer has an elegant solution using linear algebra. You are solving the system $A\hat x=\hat b$, where be is A is your matrix ( [[1,x0,z0],[1,x1,y2],...] ), $b$ is a column of [z0; z
How to efficiently calculate the PDF of a multivariate gaussian with linear algebra (python) Least squares optimizer has an elegant solution using linear algebra. You are solving the system $A\hat x=\hat b$, where be is A is your matrix ( [[1,x0,z0],[1,x1,y2],...] ), $b$ is a column of [z0; z1; ;..] and $x$ is a vector containing the estimated parameters which your solving for. The vector $b$ is NOT in the column space of $A$, so there is no solution, so you need to decompose the vector $b$ vector into the sum of the projection of $b$ onto the column space of the matrix $A$ and the orthogonal component $e$ given by the following: \begin{equation} \label{2} b=proj_{Col(A)} + e \end{equation} Where $e$ is a vector containing errors orthogonal to the column space of $A$. Instead of solving $A x= b$, we solve the equation that best estimates $ b$. \begin{equation} Ax=proj_{Col(A)} \end{equation} Since, the $proj_{Col(A)}$ (read as the projection of b onto the column space of $A$) is in the column space of $A$ there will now be a solution to the system, where there wasn't one previously one! To find a the "best fit" solution start by combining the previous equations: \begin{equation} A x= b - e \end{equation} Here comes the trick! Multiply each term by $A^T$, which is the transposed matrix of A where the columns now become rows. \begin{equation} A^T A x=A^T b -A^T e \end{equation} Where $e$ is orthogonal to the row space of $A^T$, and therefore $ e$ is in the null space of $A^T$. This means term $A^T e $ becomes the zero vector $ 0$. What's left is the least squares solution to $A x=b$ given by : \begin{equation} A^T A x=A^T b \end{equation} Now you want to know how to code this... I will solve the 2 variable case: Multiplying everything out we get: To solve for the estimators, the matrix should be augmented and row reduced. The row reduction starts by switching row 1 and row 2. Then multiply row 1 by $-\frac{n}{\sum_{i=1}^{n} x_i}$ and add to row 2. This will result in a $0$ in the second row and first column. A total of two pivots for two rows means the matrix has full rank and $\hat b_0$ and $\hat b_1$ can be solved for.
How to efficiently calculate the PDF of a multivariate gaussian with linear algebra (python) Least squares optimizer has an elegant solution using linear algebra. You are solving the system $A\hat x=\hat b$, where be is A is your matrix ( [[1,x0,z0],[1,x1,y2],...] ), $b$ is a column of [z0; z
49,079
Does a quadratic log-likelihood mean the MLE is (approximately) normally distributed?
If you are not working with the asymptotic case, that "around the best fit" is key; if functions are twice differentiable, they are "locally linear" and also "locally quadratic", which latter implies that the quadratic approximation is arbitrarily good as you shrink the region over which you are approximating towards any given point. This means that all twice-differentiable log likelihood surfaces are approximately quadratic in a sufficiently small region around the best fit. Naturally this does not imply that the MLE is approximately (to any given degree) normally distributed, because, writing loosely, that "sufficiently small region" can be much smaller than the region in which the MLE might plausibly fall. If you are working with the asymptotic case, then things are different. If, asymptotically, your log-likelihood surface becomes quadratic around the best fit, then the corresponding MLE is, as you suspect, asymptotically Normally distributed. To see this, note that (one of the) standard proofs of asymptotic Normality of the MLE involves taking a Taylor expansion of the log likelihood function and ignoring all terms above the second order term; see for example http://www.stat.cmu.edu/~larry/=stat705/Lecture9.pdf, page 8. Obviously the validity of doing so requires that those terms actually be ignorable, i.e., that the log-likelihood surface becomes quadratic around the true parameter value as the sample size goes to $\infty$.
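A small Python sketch of the asymptotic point, using a hypothetical Exponential(rate) sample: the discrepancy between the exact log-likelihood and its quadratic (second-order Taylor) approximation, measured over a region of a few standard errors around the MLE, shrinks as $n$ grows.

import numpy as np

# Sketch (hypothetical data): log-likelihood of an Exponential(rate) sample and
# its quadratic (second-order Taylor) approximation around the MLE.
rng = np.random.default_rng(1)

def compare(n):
    x = rng.exponential(scale=1.0, size=n)          # true rate = 1
    mle = 1.0 / x.mean()                            # MLE of the rate
    loglik = lambda lam: n * np.log(lam) - lam * x.sum()
    obs_info = n / mle**2                           # observed information at the MLE
    quad = lambda lam: loglik(mle) - 0.5 * obs_info * (lam - mle) ** 2
    # compare on a grid spanning roughly +/- 3 approximate standard errors
    grid = mle + np.linspace(-3, 3, 7) / np.sqrt(obs_info)
    return np.max(np.abs(loglik(grid) - quad(grid)))

for n in [10, 100, 10_000]:
    print(n, compare(n))   # the discrepancy over the plausible region shrinks with n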
Does a quadratic log-likelihood mean the MLE is (approximately) normally distributed?
If you are not working with the asymptotic case, that "around the best fit" is key; if functions are twice differentiable, they are "locally linear" and also "locally quadratic", which latter implies
Does a quadratic log-likehood mean the MLE is (approximately) normally distributed? If you are not working with the asymptotic case, that "around the best fit" is key; if functions are twice differentiable, they are "locally linear" and also "locally quadratic", which latter implies that the quadratic approximation is arbitrarily good as you shrink the region over which you are approximating towards any given point. This means that all twice-differentiable log likelihood surfaces are approximately quadratic in a sufficiently small region around the best fit. Naturally this does not imply that the MLE is approximately (to any given degree) normally distributed, because, writing loosely, that "sufficiently small region" can be much smaller than the region in which the MLE might plausibly fall. If you are working with the asymptotic case, then things are different. If, asymptotically, your log-likelihood surface becomes quadratic around the best fit, then the corresponding MLE is, as you suspect, asymptotically Normally distributed. To see this, note that (one of the) standard proofs of asymptotic Normality of the MLE involves taking a Taylor expansion of the log likelihood function and ignoring all terms above the second order term; see for example http://www.stat.cmu.edu/~larry/=stat705/Lecture9.pdf, page 8. Obviously the validity of doing so requires that those terms actually be ignorable, i.e., that the log-likelihood surface becomes quadratic around the true parameter value as the sample size goes to $\infty$.
Does a quadratic log-likehood mean the MLE is (approximately) normally distributed? If you are not working with the asymptotic case, that "around the best fit" is key; if functions are twice differentiable, they are "locally linear" and also "locally quadratic", which latter implies
49,080
LASSO: selection of penalty term: "one-standard-error" rule
I don't know of any rigorous justification for the "one-standard-error" rule. It seems to be a rule of thumb for situations where the analyst is more interested in parsimony than in predictive accuracy. It's important to recognize the artificial model being evaluated in the section of ESL that brings up the "one-standard-error" rule (p.244; Figure 7.9 posted by @Rickyfox) and how that type of model might not be relevant to many real-world problems. It's "from the scenario in the bottom right panel of Figure 7.3," which is explained in text on p. 226: it's a classification problem with 80 cases. The 20 predictors are each uniformly and independently distributed in [0,1]; the true class is 1 if the sum of the first 10 predictors is > 5. Thus the model used for this example has no correlations among the predictors, and 10 of the predictors have no predictive value at all. If you didn't know beforehand how many predictors are associated with the class membership but you suspected that only a small number are and that the predictors wouldn't be inter-correlated, one could argue that the "one-standard-error" rule would tend to give you the smallest useful LASSO model, and would be close to the "true" model. I haven't, however, come across many real-world situations where there are no correlations among the predictors or where one could a priori assume that a large number are unrelated to outcome. In those cases I don't know that there is any justification for the "one-standard-error" rule. Minimum cross-validation error would seem much better justified in such real-world situations. Also, note that the variable selection performed by LASSO makes the most sense in situations where there aren't correlations among predictors. If there are such correlations, the specific predictors selected are likely to depend heavily on the data sample at hand, as you can illustrate by repeating LASSO on multiple bootstrapped samples of such a dataset. So, yes, you can select predictors with LASSO but there is no assurance, with correlated predictors, that the selected predictors are in any sense "true" predictors, just useful ones.
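A rough simulation of the scenario described above (80 cases, 20 independent uniform predictors, class 1 when the first ten sum to more than 5). ESL evaluates other learners in that figure, so the L1-penalised logistic regression below is an assumption made purely to illustrate LASSO-style selection on such data:

import numpy as np
from sklearn.linear_model import LogisticRegressionCV

# Reconstruct the simulated classification problem described above.
rng = np.random.default_rng(42)
n, p = 80, 20
X = rng.uniform(0, 1, size=(n, p))
y = (X[:, :10].sum(axis=1) > 5).astype(int)   # only the first 10 predictors matter

# L1-penalised logistic regression with 10-fold CV over a grid of penalties.
fit = LogisticRegressionCV(
    Cs=20, cv=10, penalty="l1", solver="liblinear", scoring="accuracy"
).fit(X, y)

selected = np.flatnonzero(fit.coef_[0] != 0)
print("selected predictors:", selected)   # ideally a subset of indices 0..9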
LASSO: selection of penalty term: "one-standard-error" rule
I don't know of any rigorous justification for the "one-standard-error" rule. It seems to be a rule of thumb for situations where the analyst is more interested in parsimony than in predictive accurac
LASSO: selection of penalty term: "one-standard-error" rule I don't know of any rigorous justification for the "one-standard-error" rule. It seems to be a rule of thumb for situations where the analyst is more interested in parsimony than in predictive accuracy. It's important to recognize the artificial model being evaluated in the section of ESL that brings up the "one-standard-error" rule (p.244; Figure 7.9 posted by @Rickyfox) and how that type of model might not be relevant to many real-world problems. It's "from the scenario in the bottom right panel of Figure 7.3," which is explained in text on p. 226: it's a classification problem with 80 cases. The 20 predictors are each uniformly and independently distributed in [0,1]; the true class is 1 if the sum of the first 10 predictors is > 5. Thus the model used for this example has no correlations among the predictors, and 10 of the predictors have no predictive value at all. If you didn't know beforehand how many predictors are associated with the class membership but you suspected that only a small number are and that the predictors wouldn't be inter-correlated, one could argue that the "one-standard-error" rule would tend to give you the smallest useful LASSO model, and would be close to the "true" model. I haven't, however, come across many real-world situations where there are no correlations among the predictors or where one could a priori assume that a large number are unrelated to outcome. In those cases I don't know that there is any justification for the "one-standard-error" rule. Minimum cross-validation error would seem much better justified in such real-world situations. Also, note that the variable selection performed by LASSO makes the most sense in situations where there aren't correlations among predictors. If there are such correlations, the specific predictors selected are likely to depend heavily on the data sample at hand, as you can illustrate by repeating LASSO on multiple bootstrapped samples of such a dataset. So, yes, you can select predictors with LASSO but there is no assurance, with correlated predictors, that the selected predictors are in any sense "true" predictors, just useful ones.
LASSO: selection of penalty term: "one-standard-error" rule I don't know of any rigorous justification for the "one-standard-error" rule. It seems to be a rule of thumb for situations where the analyst is more interested in parsimony than in predictive accurac
49,081
LASSO: selection of penalty term: "one-standard-error" rule
Regarding your first question: The authors use this figure on p. 244 to illustrate what they mean by the 'one-standard-error' rule. Standard error bars are shown, which are the standard errors of the individual misclassification error rates for each of the ten parts. Both curves have minima at p = 10, although the CV curve is rather flat beyond 10. Often a “one-standard error” rule is used with cross-validation, in which we choose the most parsimonious model whose error is no more than one standard error above the error of the best model. Here it looks like a model with about p = 9 predictors would be chosen, while the true model uses p = 10. As they state in the last sentence, the minimum of the CV error is at p=10 (although it appears to be essentially the same at p=14 and p=15). Since in this example a smaller parameter value results in a more parsimonious model, they end up selecting the smallest parameter value whose error is less than one standard error larger than the 'true optimum'. With the error bars plotted in the graph, you can see that the error of p=9 still lies within the error bar of p=10 (look at the upper 'antenna'), while the even simpler model with p=8 has an error that exceeds that. Regarding your second question: The selection of hyperparameters is always delicate. If you have data that you can set aside to perform model selection via cross-validation as detailed in that section of the textbook, then this is a reasonable approach. You can examine how your model performs with different $\lambda$ values and plot them in a similar fashion, then decide based on the results. The rule mentioned above gives you a straightforward directive on which value to select in the end.
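A sketch of how the one-standard-error rule can be applied to a regularisation path, on hypothetical data: compute the cross-validation error and its standard error for each candidate penalty, find the minimum, and take the most heavily penalised (most parsimonious) value whose error is still within one standard error of that minimum.

import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso
from sklearn.model_selection import cross_val_score

# Hypothetical regression data with a small number of informative predictors.
X, y = make_regression(n_samples=100, n_features=20, n_informative=5,
                       noise=5.0, random_state=0)
alphas = np.logspace(-2, 2, 30)   # candidate penalties (sklearn's name for lambda is alpha)
cv_mean, cv_se = [], []
for a in alphas:
    scores = -cross_val_score(Lasso(alpha=a, max_iter=50_000), X, y,
                              scoring="neg_mean_squared_error", cv=10)
    cv_mean.append(scores.mean())
    cv_se.append(scores.std(ddof=1) / np.sqrt(len(scores)))
cv_mean, cv_se = np.array(cv_mean), np.array(cv_se)

best = cv_mean.argmin()
threshold = cv_mean[best] + cv_se[best]
# largest alpha (sparsest model) whose CV error is still within one SE of the best
alpha_1se = alphas[np.where(cv_mean <= threshold)[0].max()]
print("alpha_min =", alphas[best], "alpha_1se =", alpha_1se)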
LASSO: selection of penalty term: "one-standard-error" rule
Regarding your first question: The authors use this figure on p. 244 to illustrate what they mean with 'one-standard-error' rule. Standard error bars are shown, which are the standard errors of the
LASSO: selection of penalty term: "one-standard-error" rule Regarding your first question: The authors use this figure on p. 244 to illustrate what they mean with 'one-standard-error' rule. Standard error bars are shown, which are the standard errors of the individual misclassification error rates for each of the ten parts. Both curves have minima at p = 10, although the CV curve is rather flat beyond 10. Often a “one-standard error” rule is used with cross-validation, in which we choose the most parsimonious model whose error is no more than one standard error above the error of the best model. Here it looks like a model with about p = 9 predictors would be chosen, while the true model uses p = 10. As they state in the last sentence, the minimum of the CV error is at p=10 (while it looks like being the same at p=14 and p=15). Since in this example a smaller parameter value results in a more general model, they end up selecting the smallest parameter value whichs error is less than one standard error larger than the 'true optimum'. With the error bars plotted in the graph, you can see that the error of p=9 still lies withing the error of p=10 (look at the upper 'antenna'), while the even more general model with p=8 has an error that exceeds that. Regarding your second question: The selection of hyperparameters is always delicate. If you have data that you can set aside to perform model selection via cross-validation as detailed in the section of the text book, then this is a reasonable approach. You can examine how your model performs with different $\lambda$ values and plot them in a similar fashion, then decide based on the results. The rule mentioned above gives you a straight-forward directive on which value to select in the end.
LASSO: selection of penalty term: "one-standard-error" rule Regarding your first question: The authors use this figure on p. 244 to illustrate what they mean with 'one-standard-error' rule. Standard error bars are shown, which are the standard errors of the
49,082
LASSO: selection of penalty term: "one-standard-error" rule
To answer the second part of your question: Robert Tibshirani (who introduced the Lasso) writes in "introduction to statistical learning" on the subject of tuning parameter selection: Cross-validation provides a simple way to tackle this problem. We choose a grid of λ values, and compute the cross-validation error for each value of λ ... We then select the tuning parameter value for which the cross validation error is smallest. The advised way seems to be to search for the optimal value for your problem with the use of cross-validation.
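The quoted advice maps directly onto, for example, scikit-learn's LassoCV, which takes a grid of $\lambda$ values (called alpha there) and picks the one with the smallest cross-validation error; the data below is hypothetical:

import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LassoCV

# Choose a grid of penalties and let 10-fold CV pick the one with the smallest error.
X, y = make_regression(n_samples=200, n_features=30, n_informative=8,
                       noise=10.0, random_state=1)
fit = LassoCV(alphas=np.logspace(-3, 1, 50), cv=10).fit(X, y)
print("chosen penalty (alpha):", fit.alpha_)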
LASSO: selection of penalty term: "one-standard-error" rule
To answer the second part of your question: Robert Tibshirani (who introduced the Lasso) writes in "introduction to statistical learning" on the subject of tuning parameter selection: Cross-validati
LASSO: selection of penalty term: "one-standard-error" rule To answer the second part of your question: Robert Tibshirani (who introduced the Lasso) writes in "introduction to statistical learning" on the subject of tuning parameter selection: Cross-validation provides a simple way to tackle this problem. We choose a grid of λ values, and compute the cross-validation error for each value of λ ... We then select the tuning parameter value for which the cross validation error is smallest. The advised way seems to be to search for the optimal value for your problem with the use of cross-validation.
LASSO: selection of penalty term: "one-standard-error" rule To answer the second part of your question: Robert Tibshirani (who introduced the Lasso) writes in "introduction to statistical learning" on the subject of tuning parameter selection: Cross-validati
49,083
Why is bridge regression called "bridge"?
The word "bridge" does not occur in the particular reference. But in other references it does occur. For instance equation 33 in Friedman, Jerome H. "An overview of predictive learning and function approximation." From statistics to neural networks (1994). Another approach is to approximate the discontinuous penalty (30) by a close continuous one, thereby enabling the use of numerical optimization. This is motivated by the observation that both (28) and (29) (30) can be viewed as two points on a continuum of penalties, such as $$\eta_q(\theta_1,\dots,\theta_p) = \sum_{j=1}^p |\theta_j|^q \quad\text{("bridge")} \tag{33} $$ (Frank and Friedman, 1993), or $$\eta_q(\theta_1,\dots,\theta_p) = \sum_{j=1}^p \frac{(\theta_j/w)^2}{1+(\theta_j/w)^2} \quad\text{("weight decay")} \tag{34}$$ (Wiegand, Huberman and Rumelhart, 1991). With the "bridge" penalty (33) $q=2$ yields the ridge penalty (28), whereas subset selection (29) (30) is approached in the limit as $q \to 0$. Therefore if "bridge" is meant to be the figurative bridge between two points as Kjetil mentioned as possiblity in the comments, then it is a bridge between subset selection and ridge and not between Lasso and ridge. Lasso didn't exist yet when this "bridge" penalty was conceptualized.
Why is bridge regression called "bridge"?
The word "bridge" does not occur in the particular reference. But in other references it does occur. For instance equation 33 in Friedman, Jerome H. "An overview of predictive learning and function ap
Why is bridge regression called "bridge"? The word "bridge" does not occur in the particular reference. But in other references it does occur. For instance equation 33 in Friedman, Jerome H. "An overview of predictive learning and function approximation." From statistics to neural networks (1994). Another approach is to approximate the discontinuous penalty (30) by a close continuous one, thereby enabling the use of numerical optimization. This is motivated by the observation that both (28) and (29) (30) can be viewed as two points on a continuum of penalties, such as $$\eta_q(\theta_1,\dots,\theta_p) = \sum_{j=1}^p |\theta_j|^q \quad\text{("bridge")} \tag{33} $$ (Frank and Friedman, 1993), or $$\eta_q(\theta_1,\dots,\theta_p) = \sum_{j=1}^p \frac{(\theta_j/w)^2}{1+(\theta_j/w)^2} \quad\text{("weight decay")} \tag{34}$$ (Wiegand, Huberman and Rumelhart, 1991). With the "bridge" penalty (33) $q=2$ yields the ridge penalty (28), whereas subset selection (29) (30) is approached in the limit as $q \to 0$. Therefore if "bridge" is meant to be the figurative bridge between two points as Kjetil mentioned as possiblity in the comments, then it is a bridge between subset selection and ridge and not between Lasso and ridge. Lasso didn't exist yet when this "bridge" penalty was conceptualized.
Why is bridge regression called "bridge"? The word "bridge" does not occur in the particular reference. But in other references it does occur. For instance equation 33 in Friedman, Jerome H. "An overview of predictive learning and function ap
49,084
Finding complete sufficient statistic
Recall: Definition: A statistic $T$ is complete for $\theta$ if $$E(g(T)) = 0, \ \text{ for all $\theta$} \quad \Rightarrow \quad P(g(T) = 0) = 1, \ \text{ for all $\theta$}$$ The part about $P(g(T) = 0) = 1$ basically says that the function $g$ is trivially $0$ everywhere (except possibly on a set of measure 0). So... If you want to prove that $T$ is NOT complete, you can try to find a non-trivial function $g(T)$ for which $E(g(T)) = 0$ for all values of $\theta$. Hint: Can you find $E(X_{(1)})$ and $E(X_{(n)})$? Start with that, and then try looking at linear combinations of the sufficient order statistics.
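If the exercise is the usual Uniform$(-\theta,\theta)$ one (an assumption here, since the distribution is not restated in the hint), a quick Monte Carlo check of the two expectations in the hint looks like this:

import numpy as np

# Monte Carlo estimates of E[X_(1)] and E[X_(n)], assuming X_i ~ Uniform(-theta, theta)
# (an assumption; adjust to the model actually stated in your exercise).
rng = np.random.default_rng(0)
theta, n, reps = 2.0, 5, 200_000
x = rng.uniform(-theta, theta, size=(reps, n))
x_min, x_max = x.min(axis=1), x.max(axis=1)

print(x_min.mean(), (1 - n) / (n + 1) * theta)   # E[X_(1)] = (1-n)/(n+1) * theta
print(x_max.mean(), (n - 1) / (n + 1) * theta)   # E[X_(n)] = (n-1)/(n+1) * theta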
Finding complete sufficient statistic
Recall: Definition: A statistic $T$ is complete for $\theta$ if $$E(g(T)) = 0, \ \text{ for all $\theta$} \quad \Rightarrow \quad P(g(T) = 0) = 1, \ \text{ for all $\theta$}$$ The part about $P(g(T
Finding complete sufficient statistic Recall: Definition: A statistic $T$ is complete for $\theta$ if $$E(g(T)) = 0, \ \text{ for all $\theta$} \quad \Rightarrow \quad P(g(T) = 0) = 1, \ \text{ for all $\theta$}$$ The part about $P(g(T) = 0) = 1$ basically says that the function $g$ is trivially $0$ everywhere (except possibly on a set of measure 0). So... If you want to prove that $T$ is NOT complete, you can try to find a non-trivial function $g(T)$ for which $E(g(T)) = 0$ for all values of $\theta$. Hint: Can you find $E(X_{(1)})$ and $E(X_{(n)})$? Start with that, and then try looking at linear combinations of the sufficient order statistics.
Finding complete sufficient statistic Recall: Definition: A statistic $T$ is complete for $\theta$ if $$E(g(T)) = 0, \ \text{ for all $\theta$} \quad \Rightarrow \quad P(g(T) = 0) = 1, \ \text{ for all $\theta$}$$ The part about $P(g(T
49,085
Finding complete sufficient statistic
Method 1 $(X_{(1)},X_{(n)})$ is not complete because we can find $g\neq0$ but $\mathbb{E}\left[g(X_{(1)},X_{(n)})\right]=0,\forall\theta$. $g$ is $(t_1,t_2)\rightarrow\frac{n+1}{n-1}t_2-\frac{n+1}{1-n}t_1$. This is because $\mathbb{E}(X_{(n)})=\frac{n-1}{n+1}\theta$ and $\mathbb{E}(X_{(1)})=\frac{1-n}{n+1}\theta$. Thus $\mathbb{E}\left[g(X_{(1)},X_{(n)})\right]=\mathbb{E}\left[\frac{n+1}{n-1}X_{(n)}-\frac{n+1}{1-n}X_{(1)}\right] = \frac{n+1}{n-1}\mathbb{E}(X_{(n)})-\frac{n+1}{1-n}\mathbb{E}(X_{(1)}) = \theta-\theta=0,\forall \theta$. Method 2 If the sufficient statistic $(X_{(1)},X_{(n)})$ is complete, then it is a minimal sufficient statistic. However, $(X_{(1)},X_{(n)})$ is not a minimal sufficient statistic. A minimal sufficient statistic is $\max\{-X_{(1)},X_{(n)}\}$. It is possible that $(x_{(1)},x_{(n)})\neq(y_{(1)},y_{(n)})$ but $\max\{-x_{(1)},x_{(n)}\}=\max\{-y_{(1)},y_{(n)}\}$.
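A Monte Carlo check of Method 1, under the same Uniform$(-\theta,\theta)$ assumption as in the hint above: the non-trivial function $g$ has expectation (approximately) zero for every $\theta$ tried.

import numpy as np

# Verify E[ g(X_(1), X_(n)) ] = 0 for several theta, assuming X_i ~ Uniform(-theta, theta).
rng = np.random.default_rng(1)
n, reps = 5, 200_000

def mean_g(theta):
    x = rng.uniform(-theta, theta, size=(reps, n))
    g = (n + 1) / (n - 1) * x.max(axis=1) - (n + 1) / (1 - n) * x.min(axis=1)
    return g.mean()

print([round(mean_g(t), 3) for t in (0.5, 1.0, 3.0)])   # all approximately 0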
Finding complete sufficient statistic
Method 1 $(X_{(1)},X_{(n)})$ is not complete because we can find $g\neq0$ but $\mathbb{E}\left[g(X_{(1)},X_{(n)})\right]=0,\forall\theta$. $g$ is $(t_1,t_2)\rightarrow\frac{n+1}{n-1}t_2-\frac{n+1}{1-n
Finding complete sufficient statistic Method 1 $(X_{(1)},X_{(n)})$ is not complete because we can find $g\neq0$ but $\mathbb{E}\left[g(X_{(1)},X_{(n)})\right]=0,\forall\theta$. $g$ is $(t_1,t_2)\rightarrow\frac{n+1}{n-1}t_2-\frac{n+1}{1-n}t_1$. This is because $\mathbb{E}(X_{(n)})=\frac{n-1}{n+1}\theta$ and $\mathbb{E}(X_{(1)})=\frac{1-n}{n+1}\theta$. Thus $\mathbb{E}\left[g(X_{(1)},X_{(n)})\right]=\mathbb{E}\left[\frac{n+1}{n-1}X_{(n)}-\frac{n+1}{1-n}X_{(1)}\right] = \frac{n+1}{n-1}\mathbb{E}(X_{(n)})-\frac{n+1}{1-n}\mathbb{E}(X_{(1)}) = \theta-\theta=0,\forall \theta$. Method 2 If the sufficient statistic $(X_{(1)},X_{(n)})$ is complete, then it is a minimal sufficient statistic. However, (X_{(1)},X_{(n)}) is not a minimal sufficient statistic. A minimal sufficient statistic is $\max\{-X_{(1)},X_{(n)}\}$. It is possible that $(x_{(1)},x_{(n)})\neq(y_{(1)},y_{(n)})$ but $\max\{-x_{(1)},x_{(n)}\}=\max\{-y_{(1)},y_{(n)}\}$.
Finding complete sufficient statistic Method 1 $(X_{(1)},X_{(n)})$ is not complete because we can find $g\neq0$ but $\mathbb{E}\left[g(X_{(1)},X_{(n)})\right]=0,\forall\theta$. $g$ is $(t_1,t_2)\rightarrow\frac{n+1}{n-1}t_2-\frac{n+1}{1-n
49,086
How to compute joint entropy of high-dimensional data?
Generally, estimating the entropy in high dimensions is going to be intractable. What you can do instead is estimate an upper bound on the entropy. Note that entropy can be written as an expectation: $$H(X_1, \ldots, X_n) = -\mathbb E_p \log p(x)$$ Here, $\mathbb E_p$ is an expectation over the distribution $p(x)$. Imagine that you fit some other generative model, $q(x)$, that you can calculate exactly. This won't be exactly the same as $p(x)$ but it can help you get an upper bound on the entropy of $p(x)$. The KL divergence can be written as: $$D(p(x)\| q(x)) = \mathbb E_p \log p(x) - \mathbb E_p \log q(x)$$ Using Jensen's inequality, we can see that KL divergence is always non-negative, and therefore, $H(X) = -\mathbb E_p \log p(x) \leq - \mathbb E_p \log q(x)$. The quantity on the right is what people sometimes call the negative log-likelihood of the data (drawn from $p(x)$) under the model, $q(x)$. Because $D(p(x)\| p(x)) = 0$ and $D(p(x)\| q(x)) \geq 0$, this implies that no model, $q$, can give a better score for negative log likelihood than the true distribution, $p$. The negative log likelihood is often reported in papers as a measure of how well you have modeled the data; here's one example (see Table 1) that links to others. Intuitively, why can't we exactly calculate the entropy, or provide nearly tight lower bounds? Entropy measures the optimal compression for the data. If you know the true entropy, you are saying that the data can be compressed this much and not a bit more. That's difficult in high dimensions because there could always be some hidden structure that could help you compress a little more but that you might not observe with a small number of samples. EDIT: I forgot one really important component from your question. You are estimating entropy by binning your data. This is definitely going to fail in high dimensions. Suppose you have 2 bins for each dimension (say, greater or less than 0.5). Then in $d=784$ dimensions, the total number of bins is $2^{784}$. With only $60,000$ samples, almost every bin will be empty. You don't have enough samples to empirically estimate the frequency of each bin. That's why papers like the one I linked use more sophisticated strategies for modeling $q(x)$ that have a small number of parameters that can be estimated more reliably.
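Two small numerical illustrations of the points above, on hypothetical data: the bin count with two bins per dimension dwarfs any realistic sample size, and the average negative log-likelihood under a deliberately crude fitted model $q(x)$ (independent Gaussians) upper-bounds the true entropy of a correlated Gaussian $p(x)$ (differential entropy is used here to keep the toy example continuous):

import numpy as np

# (1) Binning is hopeless in high dimensions: 2 bins per dimension, 784 dimensions.
d, n = 784, 60_000
print("number of bins with 2 bins/dimension:", 2.0 ** d)   # astronomically larger than n

# (2) NLL upper bound: draw correlated Gaussian data p(x), fit an independent-Gaussian
# q(x), and compare the average negative log-likelihood with the true entropy of p(x).
rng = np.random.default_rng(0)
L = np.array([[1.0, 0.5, 0.0],
              [0.0, 1.0, 0.5],
              [0.0, 0.0, 1.0]])
x = rng.normal(size=(n, 3)) @ L                     # correlated p(x)
mu, sigma = x.mean(axis=0), x.std(axis=0)
nll = np.mean(np.sum(0.5 * np.log(2 * np.pi * sigma**2)
                     + (x - mu) ** 2 / (2 * sigma**2), axis=1))   # estimate of -E_p log q(x), in nats

cov = np.cov(x, rowvar=False)
true_entropy = 0.5 * np.log(np.linalg.det(2 * np.pi * np.e * cov))  # differential entropy of p(x)
print("NLL under independent-Gaussian q:", nll, ">= true entropy:", true_entropy)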
How to compute joint entropy of high-dimensional data?
Generally, estimating the entropy in high-dimensions is going to be intractable. What you can do instead is estimate an upper bound on the entropy. Note that entropy can be written as an expectation:
How to compute joint entropy of high-dimensional data? Generally, estimating the entropy in high-dimensions is going to be intractable. What you can do instead is estimate an upper bound on the entropy. Note that entropy can be written as an expectation: $$H(X_1, \ldots, X_n) = -\mathbb E_p \log p(x)$$ Here, $\mathbb E_p$ is an expectation over the distribution $p(x)$. Imagine that you fit some other generative model, $q(x)$, that you can calculate exactly. This won't be exactly the same as $p(x)$ but it can help you get a upper bound on the entropy of $p(x)$. The KL divergence can be written as: $$D(p(x)\| q(x)) = \mathbb E_p \log p(x) - \mathbb E_p \log q(x)$$ Using Jensen's inequality, we can see that KL divergence is always non-negative, and therefore, $H(X) = -\mathbb E_p \log p(x) \leq - \mathbb E_p \log q(x)$. The quantity on the right is what people sometimes call the negative log-likelihood of the data (drawn from $p(x)$) under the model, $q(x)$. Because $D(p(x)\| p(x)) = 0$ and $D(p(x)\| q(x)) \geq 0$, this implies that no model, $q$, can give a better score for negative log likelihood than the true distribution, $p$. The negative log likelihood is often reported in papers as a measure of how well you have modeled the data, here's one example (see Table 1) that links to others. Intuitively, why can't we exactly calculate the entropy, or provide nearly tight lower bounds? Entropy measures the optimal compression for the data. If you know the true entropy, you are saying that the data can be compressed this much and not a bit more. That's difficult in high-dimensions because there could always be some hidden structure that could help you compress a little more but that you might not observe with a small number of samples. EDIT: I forgot one really important component from your question. You are estimating entropy by binning your data. This is definitely going to fail in high dimensions. Suppose you have 2 bins for each dimension (maybe greater or less than 0.5). Then in $d=784$ dimensions, the total number of bins is $2^{784}$. With only $60,000$ samples, almost every bin will be empty. You don't have enough to samples empirically estimate the frequency of each bin. That's why papers like the one I linked use more sophisticated strategies for modeling $q(x)$ that have a small number of parameters that can be estimated more reliably.
How to compute joint entropy of high-dimensional data? Generally, estimating the entropy in high-dimensions is going to be intractable. What you can do instead is estimate an upper bound on the entropy. Note that entropy can be written as an expectation:
49,087
Derivation of the conditional median for linear regression in “The elements of statistical learning ”
First, I think you misspelled something in the question. In your case it should be $$ EPE(f)=\mathbb{E}(\vert Y-f(X)\vert). $$ What you want to show is that $$ \text{argmin}_{f \text{ measurable}}EPE(f)=\left(X\mapsto\text{median}(Y\vert X)\right) $$ This is in fact equivalent to showing that the median is the best constant approximation in the $L^1$-norm, i.e. that $$ \text{argmin}_{c}\mathbb{E}(\vert X-c\vert) = c^* $$ where $$ c^*=\inf\{t:F_X(t)\geq 0.5\} $$ is the median of $X$ defined via the generalized inverse of the cdf $F_X(\cdot)$ of $X$. This can be easily shown as follows: First assume that $c>c^*$, then \begin{align} \mathbb{E}(\vert X-c\vert)&=\mathbb{E}((X-c)\chi_{\{X>c\}})-\mathbb{E}((X-c)\chi_{\{X\in(c^*,c]\}})-\mathbb{E}((X-c)\chi_{\{X\leq c^*\}})\\ &=\mathbb{E}((X-c^*)\chi_{\{X>c\}})-(c-c^*)\mathbb{P}(X>c)\\ &\quad\quad + \mathbb{E}((X-c^*)\chi_{\{X\in(c^*,c]\}})-2\mathbb{E}(X\chi_{\{X\in(c^*,c]\}})+(c+c^*)\mathbb{P}(X\in (c^*,c])\\ &\quad\quad-\mathbb{E}((X-c^*)\chi_{\{X\leq c^*\}})+(c-c^*)\mathbb{P}(X\leq c^*) \end{align} Now we bound $$ -2\mathbb{E}(X\chi_{\{X\in (c^*,c]\}})\geq -2c\mathbb{P}(X\in (c^*,c]). $$ Hence, we get \begin{align} \mathbb{E}(\vert X-c\vert)&\geq \mathbb{E}(\vert X-c^*\vert)+(c-c^*)\left(\mathbb{P}(X\leq c^*)-\mathbb{P}(X>c)-\mathbb{P}(X\in (c^*,c])\right)\\ &=\mathbb{E}(\vert X-c^*\vert)+(c-c^*)(2\mathbb{P}(X\leq c^*)-1)\\ &\geq \mathbb{E}(\vert X-c^*\vert), \end{align} where we used that $c>c^*$ and $2\mathbb{P}(X\leq c^*)\geq 1$ by the definition of $c^*$. Analogously it can be shown that the same thing holds for $c<c^*$. Hence, we can conclude that the median is in fact the constant RV that approximates $X$ the best in $L^1$. Finally this can be used to show the final result: \begin{align} EPE(f)&=\mathbb{E}(\vert Y-f(X)\vert)\\ &=\mathbb{E}(\mathbb{E}(\vert Y-f(X)\vert\,\vert\, X))\\ &\geq \mathbb{E}(\mathbb{E}(\vert Y-\text{median}(Y\vert X)\vert\,\vert\, X))\\ &=EPE(\text{median}(Y\vert X)) \end{align}
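A quick numerical check of the constant case on a skewed hypothetical sample: scanning candidate constants $c$, the mean absolute deviation is minimised near the sample median rather than the sample mean.

import numpy as np

# Among constants c, the sample median (approximately) minimises mean(|X - c|).
rng = np.random.default_rng(0)
x = rng.lognormal(size=10_000)            # a skewed sample, so mean != median

grid = np.linspace(0.1, 3.0, 500)
mad = np.array([np.mean(np.abs(x - c)) for c in grid])
print("argmin over grid:", grid[mad.argmin()])
print("sample median    :", np.median(x))   # these two agree closely
print("sample mean      :", x.mean())       # the mean minimises squared error instead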
Derivation of the conditional median for linear regression in “The elements of statistical learning
First, I think you misspelled something in the question. In your case it should be $$ EPE(f)=\mathbb{E}(\vert Y-f(X)\vert). $$ What you want to show is that $$ \text{argmin}_{f \text{ measurable}}EPE(
Derivation of the conditional median for linear regression in “The elements of statistical learning ” First, I think you misspelled something in the question. In your case it should be $$ EPE(f)=\mathbb{E}(\vert Y-f(X)\vert). $$ What you want to show is that $$ \text{argmin}_{f \text{ measurable}}EPE(f)=\left(X\mapsto\text{median}(Y\vert X)\right) $$ This is in fact equivalent to showing that the median is the best constant approximation in the $L^1$-norm, i.e. that $$ \text{argmin}_{c}\mathbb{E}(\vert X-c\vert) = c^* $$ where $$ c^*=\inf\{t:F_X(t)\geq 0.5\} $$ is the median of $X$ defined via the generalized inverse of the cdf $F_X(\cdot)$ of $X$.This can be easily shown as follows: First assume that $c>c^*$, then \begin{align} \mathbb{E}(\vert X-c\vert)&=\mathbb{E}((X-c)\chi_{\{X>c\}})-\mathbb{E}((X-c)\chi_{\{X\in(c^*,c]\}})-\mathbb{E}((X-c)\chi_{\{X\leq c^*\}})\\ &=\mathbb{E}((X-c^*)\chi_{\{X>c\}})-(c-c^*)\mathbb{P}(X>c)\\ &\quad\quad + \mathbb{E}((X-c^*)\chi_{\{X\in(c^*,c]\}})-2\mathbb{E}(X\chi_{\{X\in(c^*,c]\}})+(c+c^*)\mathbb{P}(X\in (c^*,c])\\ &\quad\quad-\mathbb{E}((X-c^*)\chi_{\{X\leq c^*\}})+(c-c^*)\mathbb{P}(X\leq c^*) \end{align} Now we bound $$ -2\mathbb{E}(X\chi_{\{X\in (c^*,c]\}})\geq -2c\mathbb{P}(X\in (c^*,c]). $$ Hence, we get \begin{align} \mathbb{E}(\vert X-c\vert)&\geq \mathbb{E}(\vert X-c^*\vert)+(c-c^*)\left(\mathbb{P}(X\leq c^*)-P(X>c)-\mathbb{P}(X\in (c^*,c])\right)\\ &=\mathbb{E}(\vert X-c^*\vert)+(c-c^*)(2\mathbb{P}(X\leq c^*)-1)\\ &\geq \mathbb{E}(\vert X-c^*\vert), \end{align} where we used that $c>c^*$ and $2\mathbb{P}(x\leq c^*)\geq 1$ by the definition of $c^*$. Analogously it can be shown that the same thing holds for $c<c^*$. Hence, we can conclude that the median is in fact the constant RV that approximates $X$ the best in $L^1$. Finally this can be used to show the final result: \begin{align} EPE(f)&=\mathbb{E}(\vert Y-f(X)\vert)\\ &=\mathbb{E}(\mathbb{E}(\vert Y-f(X)\vert\vert X))\\ &\geq \mathbb{E}(\vert Y-\text{median}(Y\vert X)\vert\vert X)\\ &=EPE(\text{median}(Y\vert X)) \end{align}
Derivation of the conditional median for linear regression in “The elements of statistical learning First, I think you misspelled something in the question. In your case it should be $$ EPE(f)=\mathbb{E}(\vert Y-f(X)\vert). $$ What you want to show is that $$ \text{argmin}_{f \text{ measurable}}EPE(
49,088
Derivation of the conditional median for linear regression in “The elements of statistical learning ”
INTUITION This part is taken from this answer. Assume that $S$ is a finite set, with say $k$ elements. Line them up in order, as $s_1<s_2<\cdots <s_k$. If $k$ is even there are (depending on the exact definition of median) many medians. $|x-s_i|$ is the distance between $x$ and $s_i$, so we are trying to minimize the sum of the distances. For example, we have $k$ people who live at various points on the $x$-axis. We want to find the point(s) $x$ such that the sum of the travel distances of the $k$ people to $x$ is a minimum. Imagine that the $s_i$ are points on the $x$-axis. For clarity, take $k=7$. Start from well to the left of all the $s_i$, and take a tiny step, say of length $\epsilon$, to the right. Then you have gotten $\epsilon$ closer to every one of the $s_i$, so the sum of the distances has decreased by $7\epsilon$. Keep taking tiny steps to the right, each time getting a decrease of $7\epsilon$. This continues until you hit $s_1$. If you now take a tiny step to the right, then your distance from $s_1$ increases by $\epsilon$, and your distance from each of the remaining $s_i$ decreases by $\epsilon$. So there is a decrease of $6\epsilon$, and an increase of $\epsilon$, for a net decrease of $5\epsilon$ in the sum. This continues until you hit $s_2$. Now, when you take a tiny step to the right, your distance from each of $s_1$ and $s_2$ increases by $\epsilon$, and your distance from each of the five others decreases by $\epsilon$, for a net decrease of $3\epsilon$. This continues until you hit $s_3$. The next tiny step gives an increase of $3\epsilon$, and a decrease of $4\epsilon$, for a net decrease of $\epsilon$. This continues until you hit $s_4$. The next little step brings a total increase of $4\epsilon$, and a total decrease of $3\epsilon$, for an increase of $\epsilon$. Things get even worse when you travel further to the right. So the minimum sum of distances is reached at $s_4$, the median. The situation is quite similar if $k$ is even, say $k=6$. As you travel to the right, there is a net decrease at every step, until you hit $s_3$. When you are between $s_3$ and $s_4$, a tiny step of $\epsilon$ increases your distance from each of $s_1$, $s_2$, and $s_3$ by $\epsilon$. But it decreases your distance from each of the three others, for no net gain. Thus any $x$ in the interval from $s_3$ to $s_4$, including the endpoints, minimizes the sum of the distances. In the even case, Some people prefer to say that any point between the two "middle" points is a median. So the conclusion is that the points that minimize the sum are the medians. Other people prefer to define the median in the even case to be the average of the two "middle" points. Then the median does minimize the sum of the distances, but some other points also do. IN FORMULAS This is taken from this answer Consider two $x_i$'s $x_1$ and $x_2$, For $x_1\leq a\leq x_2$, $\sum_{i=1}^{2}|x_i-a|=|x_1-a|+|x_2-a|=a-x_1+x_2-a=x_2-x_1$ For $a\lt x_1$, $\sum_{i=1}^{2}|x_i-a|=x_1-a+x_2-a=x_1+x_2-2a\gt x_1+x_2-2x_1=x_2-x_1$ For $a\gt x_2$,$\sum_{i=1}^{2}|x_i-a|=-x_1+a-x_2+a=-x_1-x_2+2a\gt -x_1-x_2+2x_2=x_2 - x_1$ $\implies$for any two $x_i$'s the sum of the absolute values of the deviations is minimum when $x_1\leq a\leq x_2$ or $a\in[x_1,x_2]$. 
When $n$ is odd, $$ \sum_{i=1}^n|x_i-a|=|x_1-a|+|x_2-a|+\cdots+\left|x_{\tfrac{n-1}{2}}-a\right| + \left|x_{\tfrac{n+1}{2}}-a\right|+\left|x_{\tfrac{n+3}{2}}-a\right|+\cdots+|x_{n-1}-a|+|x_n-a| $$ consider the intervals $[x_1,x_n], [x_2,x_{n-1}], [x_3,x_{n-2}], \ldots, \left[x_{\tfrac{n-1}{2}}, x_{\tfrac{n+3}{2}}\right]$. If $a$ is a member of all these intervals, i.e., $a\in\left[x_{\tfrac{n-1}{2}},x_{\tfrac{n+3}{2}}\right]$, then using the above result we can say that all the terms in the sum except $\left|x_{\tfrac{n+1}{2}}-a\right|$ are minimized. So $$ \sum_{i=1}^n|x_i-a|=(x_n-x_1)+(x_{n-1}-x_2)+(x_{n-2}-x_3)+\cdots + \left(x_{\tfrac{n+3}{2}}-x_{\tfrac{n-1}{2}}\right) + \left|x_{\tfrac{n+1}{2}}-a\right| = \left|x_{\tfrac{n+1}{2}}-a \right|+\text{constant} $$ To also minimize the term $\left|x_{\tfrac{n+1}{2}}-a \right|$ it is clear we have to choose $a=x_{\tfrac{n+1}{2}}$, which makes it $0$; but this is the definition of the median. $\implies$ When $n$ is odd, the median minimizes the sum of absolute values of the deviations. When $n$ is even, $$ \sum_{i=1}^n|x_i-a|=|x_1-a|+|x_2-a|+\cdots+|x_{\tfrac{n}{2}}-a|+|x_{\tfrac{n}{2}+1}-a|+\cdots+|x_{n-1}-a|+|x_n-a| $$ If $a$ is a member of all the intervals $[x_1,x_n], [x_2,x_{n-1}], [x_3,x_{n-2}], \ldots, \left[x_{\tfrac{n}{2}},x_{\tfrac{n}{2}+1}\right]$, i.e., $a\in\left[x_{\tfrac{n}{2}},x_{\tfrac{n}{2}+1}\right]$, $$ \sum_{i=1}^n|x_i-a|=(x_n-x_1)+(x_{n-1}-x_2)+(x_{n-2}-x_3)+\cdots + \left(x_{\tfrac{n}{2}+1}-x_{\tfrac{n}{2}}\right) $$ $\implies$ When $n$ is even, any number in the interval $[x_{\tfrac{n}{2}},x_{\tfrac{n}{2}+1}]$, i.e., including the median, minimizes the sum of absolute values of the deviations. For example, consider the series $2, 4, 5, 10$ with median $M=4.5$. $$ \sum_{i=1}^4|x_i-M|=2.5+0.5+0.5+5.5=9 $$ If you take any other value in the interval $\left[x_{\tfrac{n}{2}},x_{\tfrac{n}{2} + 1} \right] =[4,5]$, say $4.1$ $$ \sum_{i=1}^4|x_i-4.1|=2.1+0.1+0.9+5.9=9 $$ Taking for example $4$ or $5$ yields the same result: $$ \sum_{i=1}^4|x_i-4|=2+0+1+6=9 $$ $$ \sum_{i=1}^4|x_i-5|=3+1+0+5=9 $$ This is because when summing the distance from $a$ to the two middle points, you end up with the distance between them: $a-x_{\tfrac{n}{2}}+(x_{\tfrac{n}{2}+1}-a) = x_{\tfrac{n}{2}+1}-x_{\tfrac{n}{2}}$ For any value outside the interval $\left[x_{\tfrac{n}{2}},x_{\tfrac{n}{2}+1}\right]=[4,5]$, say $5.2$ $$ \sum_{i=1}^4|x_i-5.2|=3.2+1.2+0.2+4.8=9.4 $$ The question has other good answers but in my opinion these are the clearest.
Derivation of the conditional median for linear regression in “The elements of statistical learning
INTUITION This part is taken from this answer. Assume that $S$ is a finite set, with say $k$ elements. Line them up in order, as $s_1<s_2<\cdots <s_k$. If $k$ is even there are (depending on the exa
Derivation of the conditional median for linear regression in “The elements of statistical learning ” INTUITION This part is taken from this answer. Assume that $S$ is a finite set, with say $k$ elements. Line them up in order, as $s_1<s_2<\cdots <s_k$. If $k$ is even there are (depending on the exact definition of median) many medians. $|x-s_i|$ is the distance between $x$ and $s_i$, so we are trying to minimize the sum of the distances. For example, we have $k$ people who live at various points on the $x$-axis. We want to find the point(s) $x$ such that the sum of the travel distances of the $k$ people to $x$ is a minimum. Imagine that the $s_i$ are points on the $x$-axis. For clarity, take $k=7$. Start from well to the left of all the $s_i$, and take a tiny step, say of length $\epsilon$, to the right. Then you have gotten $\epsilon$ closer to every one of the $s_i$, so the sum of the distances has decreased by $7\epsilon$. Keep taking tiny steps to the right, each time getting a decrease of $7\epsilon$. This continues until you hit $s_1$. If you now take a tiny step to the right, then your distance from $s_1$ increases by $\epsilon$, and your distance from each of the remaining $s_i$ decreases by $\epsilon$. So there is a decrease of $6\epsilon$, and an increase of $\epsilon$, for a net decrease of $5\epsilon$ in the sum. This continues until you hit $s_2$. Now, when you take a tiny step to the right, your distance from each of $s_1$ and $s_2$ increases by $\epsilon$, and your distance from each of the five others decreases by $\epsilon$, for a net decrease of $3\epsilon$. This continues until you hit $s_3$. The next tiny step gives an increase of $3\epsilon$, and a decrease of $4\epsilon$, for a net decrease of $\epsilon$. This continues until you hit $s_4$. The next little step brings a total increase of $4\epsilon$, and a total decrease of $3\epsilon$, for an increase of $\epsilon$. Things get even worse when you travel further to the right. So the minimum sum of distances is reached at $s_4$, the median. The situation is quite similar if $k$ is even, say $k=6$. As you travel to the right, there is a net decrease at every step, until you hit $s_3$. When you are between $s_3$ and $s_4$, a tiny step of $\epsilon$ increases your distance from each of $s_1$, $s_2$, and $s_3$ by $\epsilon$. But it decreases your distance from each of the three others, for no net gain. Thus any $x$ in the interval from $s_3$ to $s_4$, including the endpoints, minimizes the sum of the distances. In the even case, Some people prefer to say that any point between the two "middle" points is a median. So the conclusion is that the points that minimize the sum are the medians. Other people prefer to define the median in the even case to be the average of the two "middle" points. Then the median does minimize the sum of the distances, but some other points also do. IN FORMULAS This is taken from this answer Consider two $x_i$'s $x_1$ and $x_2$, For $x_1\leq a\leq x_2$, $\sum_{i=1}^{2}|x_i-a|=|x_1-a|+|x_2-a|=a-x_1+x_2-a=x_2-x_1$ For $a\lt x_1$, $\sum_{i=1}^{2}|x_i-a|=x_1-a+x_2-a=x_1+x_2-2a\gt x_1+x_2-2x_1=x_2-x_1$ For $a\gt x_2$,$\sum_{i=1}^{2}|x_i-a|=-x_1+a-x_2+a=-x_1-x_2+2a\gt -x_1-x_2+2x_2=x_2 - x_1$ $\implies$for any two $x_i$'s the sum of the absolute values of the deviations is minimum when $x_1\leq a\leq x_2$ or $a\in[x_1,x_2]$. 
When $n$ is odd, $$ \sum_{i=1}^n|x_i-a|=|x_1-a|+|x_2-a|+\cdots+\left|x_{\tfrac{n-1}{2}}-a\right| + \left|x_{\tfrac{n+1}{2}}-a\right|+\left|x_{\tfrac{n+3}{2}}-a|+\cdots+|x_{n-1}-a\right|+|x_n-a| $$ consider the intervals $[x_1,x_n], [x_2,x_{n-1}], [x_3,x_{n-2}], \ldots, \left[x_{\tfrac{n-1}{2}}, x_{\tfrac{n+3}{2}}\right]$. If $a$ is a member of all these intervals. i.e, $\left[x_{\tfrac{n-1}{2}},x_{\tfrac{n+3}{2}}\right],$ using the above theorem, we can say that all the terms in the sum except $\left|x_{\tfrac{n+1}{2}}-a\right|$ are minimized. So $$ \sum_{i=1}^n|x_i-a|=(x_n-x_1)+(x_{n-1}-x_2)+(x_{n-2}-x_3)+\cdots + \left(x_{\tfrac{n+3}{2}}-x_{\tfrac{n-1}{2}}\right) + \left|x_{\tfrac{n+1}{2}}-a\right| = \left|x_{\tfrac{n+1}{2}}-a \right|+\text{costant} $$ To minimize also the term $\left|x_{\tfrac{n+1}{2}}-a \right|$ it is clear we have to choose $a=x_{\tfrac{n+1}{2}}$ to get $0$ but this is the definition of the median. $\implies$ When $n$ is odd,the median minimizes the sum of absolute values of the deviations. When $n$ is even, $$ \sum_{i=1}^n|x_i-a|=|x_1-a|+|x_2-a|+\cdots+|x_{\tfrac{n}{2}}-a|+|x_{\tfrac{n}{2}+1}-a|+\cdots+|x_{n-1}-a|+|x_n-a|\\ $$ If $a$ is a member of all the intervals $[x_1,x_n], [x_2,x_{n-1}], [x_3,x_{n-2}], \ldots, \left[x_{\tfrac{n}{2}},x_{\tfrac{n}{2}+1}\right]$, i.e, $a\in\left[x_{\tfrac{n}{2}},x_{\tfrac{n}{2}+1}\right]$, $$ \sum_{i=1}^n|x_i-a|=(x_n-x_1)+(x_{n-1}-x_2)+(x_{n-2}-x_3)+\cdots + \left(x_{\tfrac{n}{2}+1}-x_{\tfrac{n}{2}}\right) $$ $\implies$ When $n$ is even, any number in the interval $[x_{\tfrac{n}{2}},x_{\tfrac{n}{2}+1}]$, i.e, including the median, minimizes the sum of absolute values of the deviations. For example consider the series:$2, 4, 5, 10$, median, $M=4.5$. $$ \sum_{i=1}^4|x_i-M|=2.5+0.5+0.5+5.5=9 $$ If you take any other value in the interval $\left[x_{\tfrac{n}{2}},x_{\tfrac{n}{2} + 1} \right] =[4,5]$, say $4.1$ $$ \sum_{i=1}^4|x_i-4.1|=2.1+0.1+0.9+5.9=9 $$ Taking for example $4$ or $5$ yields the same result: $$ \sum_{i=1}^4|x_i-4|=2+0+1+6=9 $$ $$ \sum_{i=1}^4|x_i-5|=3+1+0+5=9 $$ This is because when summing the distance from $a$ to the two middle points, you end up with the distance between them: $a-x_{\tfrac{n}{2}}+(x_{\tfrac{n}{2}+1}-a) = x_{\tfrac{n}{2}+1}-x_{\tfrac{n}{2}}$ For any value outside the interval $\left[x_{\tfrac{n}{2}},x_{\tfrac{n}{2}+1}\right]=[4,5]$, say $5.2$ $$ \sum_{i=1}^4|x_i-5.2|=3.2+1.2+0.2+4.8=9.4 $$ The question has other good answers but in my opinion these are the clearest.
Derivation of the conditional median for linear regression in “The elements of statistical learning INTUITION This part is taken from this answer. Assume that $S$ is a finite set, with say $k$ elements. Line them up in order, as $s_1<s_2<\cdots <s_k$. If $k$ is even there are (depending on the exa
49,089
Derivation of the conditional median for linear regression in “The elements of statistical learning ”
Let's call $(Y - f(X))^2 = g(Y)$ (strictly speaking $g$ also depends on $X$, but that does not change the argument below). Then, we know that, for continuous cases (for example) $$ E[g(Y)] = \int g(y) f_Y(y) dy $$ And we also know that $$ P(A, B) = P(A|B) P(B)$$ or, $$ f_{y, x}(y, x) = f_{y | x}(y | x) f_{x}(x) $$ Then, to derive $E_X \Big [ E_{Y|X} [g(Y) | X ] \Big ]$, we can do: $$E_X \Big [ E_{Y|X} [g(Y) | X] \Big ] = E_X \Big [ \int_{Y} g(y) f_{y | x}(y | x) dy \Big] = \int_{X} \Big[ \int_{Y} g(y) f_{y | x}(y | x) dy \Big] f_{x}(x) dx $$ which equals $$ \int_{X} \int_{Y} g(y) f_{y | x}(y | x) f_{x}(x) \, dy \, dx = \int_{X} \int_{Y} g(y) f_{y, x}(y, x) \, dy \, dx = \int_{Y} g(y) \Big[ \int_{X} f_{y, x}(y, x) \, dx \Big] dy = \int_{Y} g(y) f_{y}(y) \, dy, $$ which is the expectation of our $g(Y)$.
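A Monte Carlo illustration of the identity $E_X\big[E_{Y|X}[g(Y)\mid X]\big]=E[g(Y)]$ derived above, on a hypothetical joint distribution (everything about the example, including the choice $f(X)=0.5$, is made up for illustration):

import numpy as np

# Y | X ~ Normal(X, 1), X ~ Uniform(0, 1), g(y) = (y - 0.5)^2.
rng = np.random.default_rng(0)
n = 1_000_000
x = rng.uniform(0, 1, size=n)
y = rng.normal(loc=x, scale=1.0)
g = lambda y: (y - 0.5) ** 2

# Inner expectation has a closed form here: E[g(Y)|X=x] = 1 + (x - 0.5)^2 (variance + bias^2).
inner = 1.0 + (x - 0.5) ** 2
print(inner.mean())    # E_X[ E[g(Y)|X] ]
print(g(y).mean())     # E[g(Y)]; the two agree up to Monte Carlo error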
Derivation of the conditional median for linear regression in “The elements of statistical learning
Let's call $(Y - f(X))^2 = g(Y)$. Then, we know that, for continuous cases (for example) $$ E[g(Y)] = \int g(y) f_Y(y) dy $$ And we also know that $$ P(A, B) = P(A|B) P(B)$$ or, $$ f_{y, x}(y, x)
Derivation of the conditional median for linear regression in “The elements of statistical learning ” Let's call $(Y - f(X))^2 = g(Y)$. Then, we know that, for continuous cases (for example) $$ E[g(Y)] = \int g(y) f_Y(y) dy $$ And we also know that $$ P(A, B) = P(A|B) P(B)$$ or, $$ f_{y, x}(y, x) = f_{y | x}(y | x) f_{x}(x) $$ Then, to derive $E_X \Big [ E_{Y|X} [g(Y) | X ] \Big ]$, we can do: $$E_x \Big [ E_{Y|X} [g(Y) | X] \Big ] = E_x \Big [ \int_{Y} g(y) f_{y | x}(y | x) dy \Big] \\ \int_{X} \Big[ \int_{Y} g(y) f_{y | x}(y | x) dy \Big] f_{x}(x) dx $$ Which is: $$ \int_{X} \int_{Y} g(y) f_{y | x}(y | x) f_{x}(x) dy dx $$ $$ \int_{X} \int_{Y} g(y) f_{y, x}(y, x) dy dx $$ $$ \int_{Y} g(y) \int_{X} f_{y, x}(y, x) dy dx $$ $$ \int_{Y} g(y) f_{y}(y) dy $$ that is the expectation of our $g(Y)$.
Derivation of the conditional median for linear regression in “The elements of statistical learning Let's call $(Y - f(X))^2 = g(Y)$. Then, we know that, for continuous cases (for example) $$ E[g(Y)] = \int g(y) f_Y(y) dy $$ And we also know that $$ P(A, B) = P(A|B) P(B)$$ or, $$ f_{y, x}(y, x)
49,090
Data matrix, predictor matrix, observation matrix, model matrix, and design matrix. What do they mean?
I wouldn't get caught up in the terms. Just know they are referring to your data. Every discipline (engineering, CS, statistics) has different terms for the same thing. However, to dive into the detail, if your data is all numerical (no categorical data), then the model matrix = design matrix because there are no categorical values to expand on (no contrasts). A design matrix will most likely contain categorical values like gender, race, or some other type of binary/categorical status. Categorical variables like these need to be one-hot encoded to be numerically meaningful. Then, depending on your contrasts settings, you may see k-1 indicator columns generated from the k categorical values. An example of these types of settings is included in R's documentation contrasts. Depending on your settings, you may see the following:
> warpbreaks = warpbreaks[order(runif(dim(warpbreaks)[1])),] ## random shuffle
> head(model.matrix(breaks ~ wool, data = warpbreaks))
   (Intercept) woolB
30           1     1
39           1     1
32           1     1
16           1     0
6            1     0
7            1     0
> head(model.matrix(breaks ~ wool - 1, data = warpbreaks))
   woolA woolB
30     0     1
39     0     1
32     0     1
16     1     0
6      1     0
7      1     0
Python's patsy also has similar settings.
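As a rough Python analogue of the R calls above, patsy's dmatrix builds the same kind of expanded matrices from a formula (the miniature data frame below is a made-up stand-in for warpbreaks):

import pandas as pd
from patsy import dmatrix

# Hypothetical mini data frame with one categorical predictor.
df = pd.DataFrame({"wool": ["A", "A", "B", "B", "A", "B"],
                   "breaks": [26, 30, 27, 25, 29, 31]})

print(dmatrix("wool", df))        # intercept + one dummy (treatment coding), like R's default
print(dmatrix("wool - 1", df))    # no intercept: one indicator column per level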
Data matrix, predictor matrix, observation matrix, model matrix, and design matrix. What do they mea
I wouldn't get caught up in the terms. Just know they are referring to your data. Every discipline (engineering, CS, statistics) has different terms for the same thing. However, to dive in to the deta
Data matrix, predictor matrix, observation matrix, model matrix, and design matrix. What do they mean? I wouldn't get caught up in the terms. Just know they are referring to your data. Every discipline (engineering, CS, statistics) has different terms for the same thing. However, to dive in to the detail, if your data is all numerical (no categorical data), then the model matrix = design matrix because there are no categorical values to expand on (no contrasts). A design matrix will most likely contain categorical values like gender, race, or some other type of binary/categorical status. A categorical matrix with these categorical values need to be one-hot coded to be numerically meaningful. Then, depending on your contrasts settings, you may see k-1 categorical vectors from the k categorical values. An example of these types of settings are included in R's documentation contrasts. Depending on your settings, you may see the following: > warpbreaks = warpbreaks[order(runif(dim(warpbreaks)[1])),] ## random shuffle > head(model.matrix(breaks ~ wool, data = warpbreaks)) ## (Intercept) woolB 30 1 1 39 1 1 32 1 1 16 1 0 6 1 0 7 1 0 > head(model.matrix(breaks ~ wool - 1, data = warpbreaks)) woolA woolB 30 0 1 39 0 1 32 0 1 16 1 0 6 1 0 7 1 0 Python's patsy also has similar settings.
Data matrix, predictor matrix, observation matrix, model matrix, and design matrix. What do they mea I wouldn't get caught up in the terms. Just know they are referring to your data. Every discipline (engineering, CS, statistics) has different terms for the same thing. However, to dive in to the deta
49,091
Data matrix, predictor matrix, observation matrix, model matrix, and design matrix. What do they mean?
The answer by @Jon states that these terms are just synonyms. I do not agree with that. Certainly there will be differences in use between disciplines and software, so you must always look out for the authors'/programmers' definitions. But there are at least two different concepts here: The raw data matrix just containing the data. In R this would be represented as a data frame. This does not depend on the model that you are going to fit, so it can be defined before modeling. This is the data matrix, variously named. The model matrix, which also depends on the model you are going to fit. Here polynomial terms will be expanded in some polynomial basis, spline terms will be expanded in some spline basis, and so on. A column of ones for the intercept might be included. Categorical variables are represented by dummies or some other categorical encoding scheme. Some examples: A very simple example, a response $y$ and a predictor $x$. Simple linear regression will have a model matrix with $1,x$, polynomial regression maybe $1, x, x^2, x^3$. A more complex example. A large dataset with variables $y, x_1, \dotsc, x_{100}, \text{cat}$ where the $x$'s are numerical variables and $\text{cat}$ is a categorical variable with 30 levels. That last one can be coded with dummies $d_1, d_2, \dotsc, d_{30}$. Usual multiple regression fitted with OLS will use a model matrix $1,x_1, \dotsc, x_{100} , d_2, \dotsc, d_{30}$. (One dummy must be left out for identifiability; it doesn't really matter which.) But the same multiple linear model fitted with ridge or lasso (or some other regularization) will need $1,x_1, \dotsc, x_{100} , d_1, \dotsc, d_{30}$ (all dummies must be included, see Dropping one of the columns when using one-hot encoding). Another theme is that with regularization you might want to standardize the predictors (see What algorithms need feature scaling, beside from SVM?), so the model matrix will include the standardized $x$'s, not the original ones. But some software will do that for you ... So while the data matrix just includes the raw data (or maybe after some common preprocessing), the model matrix will/can depend in addition on the model to be fit, the method of fitting, and the software used.
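A short sketch of the same distinction in Python using patsy (hypothetical data): one raw data frame, several different model matrices depending on the model to be fitted, including a polynomial expansion, treatment-coded versus full dummy coding, and standardized predictors.

import numpy as np
import pandas as pd
from patsy import dmatrix

# The raw "data matrix": one data frame, independent of any model.
df = pd.DataFrame({"y":   [1.2, 0.5, 2.3, 0.7, 1.8, 0.9],
                   "x":   [0.1, 0.4, 0.5, 0.7, 0.8, 0.9],
                   "cat": ["a", "b", "c", "a", "b", "c"]})

# Three different model matrices built from the same data frame.
m1 = dmatrix("x + C(cat)", df)                       # intercept, x, k-1 dummies (OLS-style)
m2 = dmatrix("x + I(x**2) + I(x**3) + C(cat)", df)   # polynomial expansion of x
m3 = dmatrix("standardize(x) + C(cat) - 1", df)      # standardized x, all k dummies, no intercept
print(np.asarray(m1).shape, np.asarray(m2).shape, np.asarray(m3).shape)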
Data matrix, predictor matrix, observation matrix, model matrix, and design matrix. What do they mea
The answer by @Jon states that these terms are just synonyms. I do not agree with that. Certainly there will be differences in use between disciplines and softwares, so you must always look out for th
Data matrix, predictor matrix, observation matrix, model matrix, and design matrix. What do they mean? The answer by @Jon states that these terms are just synonyms. I do not agree with that. Certainly there will be differences in use between disciplines and softwares, so you must always look out for the authors/programmers definitions. But there are at least two different concepts here: The raw data matrix just containing the data. In R this would be represented as a data frame. This do not depend on the model that you is going to fit, so can be defined before modeling. The data matrix, variously named. The model matrix, which also depends on the model you are going to fit. Here polynomial terms will be expanded in some polynomial basis, spline terms will be expanded in some spline basis, and so on. A column of ones for the intercept might be included. Categorical variables represented by dummys or some other categorical encoding scheme. Some examples: A very simple example, a response $y$ and a predictor $x$. Simple linear regression will have a model matrix with $1,x$, polynomial regression maybe $1, x, x^2, x^3$. A more complex example. A large dataset with variables $y, x_1, \dotsc, x_{100}, \text{cat}$ where the $x$'s are numerical variables and $\text{cat}$ is a categorical variable with 30 levels. That last one can be coded with dummys $d_1, d_2, \dotsc, d_{30}$. Usual multiple regression fitted with OLS will use a model matrix $1,x_1, \dotsc, x_{100} , d_2, \dotsc, d_{30}$. (One dummy must be left out for identifiability, doesn't really matter which. But the same multiple linear model fitted with ridge or lasso (or some other regularization) will need $1,x_1, \dotsc, x_{100} , d_1, \dotsc, d_{30}$ (all dummys must be included, see Dropping one of the columns when using one-hot encoding. Another theme is that with regularization you might want to standardize the predictors What algorithms need feature scaling, beside from SVM?, so the model matrix will include the standardized $x$'s, not the original ones. But some softwares will do that for you ... So while data matrix just includes the raw data (or maybe after some common preprocessing), the model matrix will/can depend in addition on the model to be fit, the method of fitting, and the software used.
Data matrix, predictor matrix, observation matrix, model matrix, and design matrix. What do they mea The answer by @Jon states that these terms are just synonyms. I do not agree with that. Certainly there will be differences in use between disciplines and softwares, so you must always look out for th
49,092
Proper way to use NDCG@k score for recommendations
In "plain" language The Discounted Cumulative Gain for k shown recommendations ($DCG@k$) sums the relevance of the shown items for the current user (cumulative), meanwhile adding a penalty for relevant items placed on later positions (discounted). The Normalized Cumulative Gain for k shown recommendations ($NDCG@k$) divides this score by the maximum possible value of $DCG@k$ for the current user, i.e. what the score $DCG@k$ would be if the items in the ranking were sorted by the true (unknown for the recommender model) relevance. This is called Ideal Discounted Cumulative Gain ($IDCG@k$). So the score is normalized for different users. $NDCG@k=\frac{DCG@k}{IDCG@k}$ Hence, to calculate $IDCG@k$ and hence $NDCG@k$, one needs to know all relevant items for the current user in the test set. So your second call, passing the entire ranking, is correct. Formulas Let $rel_i$ the true relevance of the recommendation at position i for the current user. The traditional method to calculate DCG (corresponds to method=1 in your code) $DCG@k=\sum_{i=1}^{k}\frac{rel_i}{log_2(i+1)}=rel_1+\sum_{i=2}^{k}\frac{rel_i}{log_2(i+1)}$ An alternative method to calculate DCG to put more emphasis on relevance $DCG@k=\sum_{i=1}^{k}\frac{2^{rel_i}-1}{log_2(i+1)}$ The parameter method=0 in your code corresponds to $DCG@k=rel_1+\sum_{i=2}^{k}\frac{rel_i}{log_2(i-1+1)}=rel_1+\sum_{i=2}^{k}\frac{rel_i}{log_2(i)}$ resulting in the weights mentioned in the doc of the code. I do not know why someone wants to give the same weight of 1 for the first two items and discounting the rest. Might be based on how the recommendations are shown ? $IDCG@k$ is calculated by sorting the ranking by the true unknown relevance (in descending order) and then use the formula for $DCG@k$ (just like in your code). Sources The wikipedia page for Discounted Cumulative Gain is quite helpful (note that $DCG@k$ is called $DCG_p$ there) and contains many useful links, e.g. to the Stanford handout mentioned in your code and the paper A Theoretical Analysis of NDCG Ranking Measures by Wang et al.
Proper way to use NDCG@k score for recommendations
In "plain" language The Discounted Cumulative Gain for k shown recommendations ($DCG@k$) sums the relevance of the shown items for the current user (cumulative), meanwhile adding a penalty for relevan
Proper way to use NDCG@k score for recommendations In "plain" language The Discounted Cumulative Gain for k shown recommendations ($DCG@k$) sums the relevance of the shown items for the current user (cumulative), meanwhile adding a penalty for relevant items placed on later positions (discounted). The Normalized Cumulative Gain for k shown recommendations ($NDCG@k$) divides this score by the maximum possible value of $DCG@k$ for the current user, i.e. what the score $DCG@k$ would be if the items in the ranking were sorted by the true (unknown for the recommender model) relevance. This is called Ideal Discounted Cumulative Gain ($IDCG@k$). So the score is normalized for different users. $NDCG@k=\frac{DCG@k}{IDCG@k}$ Hence, to calculate $IDCG@k$ and hence $NDCG@k$, one needs to know all relevant items for the current user in the test set. So your second call, passing the entire ranking, is correct. Formulas Let $rel_i$ the true relevance of the recommendation at position i for the current user. The traditional method to calculate DCG (corresponds to method=1 in your code) $DCG@k=\sum_{i=1}^{k}\frac{rel_i}{log_2(i+1)}=rel_1+\sum_{i=2}^{k}\frac{rel_i}{log_2(i+1)}$ An alternative method to calculate DCG to put more emphasis on relevance $DCG@k=\sum_{i=1}^{k}\frac{2^{rel_i}-1}{log_2(i+1)}$ The parameter method=0 in your code corresponds to $DCG@k=rel_1+\sum_{i=2}^{k}\frac{rel_i}{log_2(i-1+1)}=rel_1+\sum_{i=2}^{k}\frac{rel_i}{log_2(i)}$ resulting in the weights mentioned in the doc of the code. I do not know why someone wants to give the same weight of 1 for the first two items and discounting the rest. Might be based on how the recommendations are shown ? $IDCG@k$ is calculated by sorting the ranking by the true unknown relevance (in descending order) and then use the formula for $DCG@k$ (just like in your code). Sources The wikipedia page for Discounted Cumulative Gain is quite helpful (note that $DCG@k$ is called $DCG_p$ there) and contains many useful links, e.g. to the Stanford handout mentioned in your code and the paper A Theoretical Analysis of NDCG Ranking Measures by Wang et al.
Proper way to use NDCG@k score for recommendations In "plain" language The Discounted Cumulative Gain for k shown recommendations ($DCG@k$) sums the relevance of the shown items for the current user (cumulative), meanwhile adding a penalty for relevan
49,093
Job interview on drawing a random number
The conclusion of the interviewer is silly, and it is an example of the Gambler's fallacy. Both classical and Bayesian methods lead to conclusions that are broadly the opposite of his. Whenever you take draws at random from a distribution this leads you to a series of independent and identically distributed (IID) random variables. In this case the observed data gives information on the location of the distribution, and so it tends to be the case that the most likely future values are at or near the past values. Whenever I hear Dr Phil say in his charming Southern drawl, "past behaviour is the best predictor of future behaviour", I always imagine that he is talking about sequences of IID random variables.

To analyse this particular case, let's start by framing it clearly. Given that the interviewer did not specify a particular normal distribution (e.g., standard normal), his reference to "a normal distribution" is a reference to a distributional family with unknown mean and variance. Since he refers to random draws, this means that the values are IID random variables from a normal distribution, and so the observable data sequence is $X_1,X_2,X_3,\ldots \sim \text{IID N}(\mu, \sigma^2)$. This interview question is essentially just asking you to make a prediction about $X_2$, given that you only observe that $X_1 < 0$.

Classical approach: Under classical analysis, with a single data point, we can estimate the mean of the distribution but not its variance. The estimated mean parameter with a single data point is: $$\hat{\mu} = \bar{X}_1 = X_1.$$ Noting that the mode of the underlying distribution is $\text{Mode } \text{N}(\mu, \sigma^2) \equiv \text{arg max}_{x \in \mathbb{R}} \, \text{N}(x|\mu,\sigma^2) = \mu$, this means that the estimated mode is: $$\widehat{\text{Mode}} = \hat{\mu} = X_1 < 0.$$ Since we are dealing with IID data, the estimated mode of the underlying distribution is the best (point-based) prediction for the next data point. Hence, we conclude that the best prediction for the next data point is negative.

Bayesian approach: Under Bayesian analysis, we specify a prior distribution for the parameters and derive the predictive distribution of the new data point. In this case we can proceed conditional on $\sigma$ since it will not affect our conclusion. If we use the standard "non-informative" prior $\pi(\mu) \propto 1$ (which is improper), this gives the posterior distribution: $$\pi_1(\mu|x_1, \sigma) \propto \text{N}(x_1|\mu, \sigma^2) \pi(\mu) \propto \text{N}(\mu|x_1, \sigma^2).$$ The posterior mode is $\text{Mode } \pi_1 \equiv \text{arg max}_{\mu \in \mathbb{R}} \, \pi(\mu| x_1, \sigma) = x_1 < 0$. The posterior mode is the best (point-based) prediction for the next data point. Hence, we conclude that the best prediction for the next data point is negative.
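A quick simulation makes the same point; the prior spread on $\mu$ and the value of $\sigma$ below are arbitrary choices made only for illustration:

    set.seed(123)
    B     <- 1e5
    mu    <- rnorm(B, 0, 10)       # a hypothetical spread of possible means
    sigma <- 5                     # within-distribution spread, fixed for simplicity
    x1    <- rnorm(B, mu, sigma)   # first draw from each distribution
    x2    <- rnorm(B, mu, sigma)   # second draw from the same distribution
    mean(x2[x1 < 0] < 0)           # well above 0.5: a negative first draw makes a
                                   # negative second draw more likely, not less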
Job interview on drawing a random number
The conclusion of the interviewer is silly, and it is an example of the Gambler's fallacy. Both classical and Bayesian methods lead to the conclusions that are broadly the opposite of his conclusion.
Job interview on drawing a random number The conclusion of the interviewer is silly, and it is an example of the Gambler's fallacy. Both classical and Bayesian methods lead to the conclusions that are broadly the opposite of his conclusion. Whenever you take draws at random from a distribution this leads you to a series of independent and identical distributed (IID) random variables. In this case the observed data gives information on the location of the distribution, and so it tends to be the case that the most likely future values are at or near the past values. Whenever I hear Dr Phil say in his charming Southern-drawl, "past behaviour is the best predictor of future behaviour", I always imagine that he is talking about sequences of IID random variables. To analyse this particular case, let's start by framing it clearly. Given that the interviewer did not specify a particular normal distribution (e.g., standard normal), his reference to "a normal distribution" is a reference to a distributional family with unknown mean and variance. Since he refers to random draws, this means that the values are IID random variables from a normal distribution, and so the observable data sequence is $X_,X_2,X_3,... \sim \text{IID N}(\mu, \sigma^2)$. This interview question is essentially just asking you to make a prediction about $X_2$, given that you only observe that $X_1 < 0$. Classical approach: Under classical analysis, with a single data point, we can estimate the mean of the distribution but not its variance. The estimated mean parameter with a single data point is: $$\hat{\mu} = \bar{X}_1 = X_1.$$ Noting that the mode of the underlying distribution is $\text{Mode } \text{N}(\mu, \sigma^2) \equiv \max_{x \in \mathbb{R}} \text{N}(x|\mu,\sigma^2) = \mu$, this means that the estimated mode is: $$\widehat{\text{Mode}} = \hat{\mu} = X_1 < 0.$$ Since we are dealing with IID data, the estimated mode of the underlying distribution is the best (point-based) prediction for the next data point. Hence, we conclude that the best prediction for the next data point is negative. Bayesian approach: Under Bayesian analysis, we specify a prior distribution for the parameters and derive the predictive distribution of the new data point. In this case we can proceed conditional on $\sigma$ since it will not affect our conclusion. If we use the standard "non-informative" prior $\pi(\mu) \propto 1$ (which is improper), this gives the posterior distribution: $$\pi_1(\mu|x_1, \sigma) \propto \text{N}(x_1|\mu, \sigma^2) \pi(\mu) \propto \text{N}(\mu|x_1, \sigma^2).$$ The posterior mode is $\text{Mode } \pi_1 \equiv \max_{\mu \in \mathbb{R}} \pi(\mu| x_1, \sigma) = x_1 < 0$. The posterior mode is the best (point-based) prediction for the next data point. Hence, we conclude that the best prediction for the next data point is negative.
Job interview on drawing a random number The conclusion of the interviewer is silly, and it is an example of the Gambler's fallacy. Both classical and Bayesian methods lead to the conclusions that are broadly the opposite of his conclusion.
49,094
Why don't we average Confidence Intervals?
The issue here is that the average of CIs is simply not “efficient” (not the appropriate use of this word from a statistical perspective, but reasonable in an informal sense for this context).  If you take the average of the boundaries of the CIs, you will end up with a new interval that has about the same length as the intervals used to find the averages.  Thus, you end up with a new interval that is (1) better centered on the population mean (i.e., it has higher probability that it “captures” the mean), and (2) much larger than it would need to be to capture the mean 95% of the time (upon hypothetical replication).

However, as you suggest in your query, if you aggregate your data into one larger data set, then you obtain a much narrower interval.  So, at the heart of this question is what is more important:  ¿confidence or precision?  If you are willing to sacrifice precision for confidence, then you can take the much larger interval.  If you want more precision, then you have to sacrifice some level of confidence.

Here is a small bit of R code that helps demonstrate this:

    set.seed(1234)
    rep.int(NA, 100) -> lens.S -> lens.L
    for (ijk in 1:100) {
      n <- 20; m <- 50
      t.cv.s <- qt(1 - 0.05/2, n - 1)
      t.cv.l <- qt(1 - 0.05/2, n*m - 1)
      x <- rnorm(n*m, 50, 10)
      grps <- ceiling(1:{n*m}/n)
      Ms  <- aggregate(x ~ grps, FUN = mean)[, 2]
      SDs <- aggregate(x ~ grps, FUN = sd)[, 2]
      CI.lo <- Ms - t.cv.s*SDs/sqrt(n)
      CI.hi <- Ms + t.cv.s*SDs/sqrt(n)
      new.CI.L    <- mean(x) + c(-1, 1)*t.cv.l * sd(x)/sqrt(n*m)
      new.CI.Savg <- c(mean(CI.lo), mean(CI.hi))
      lens.L[ijk] <- diff(new.CI.L)
      lens.S[ijk] <- diff(new.CI.Savg)
    }
    lens <- data.frame(lens.L, lens.S)
    boxplot(lens)
Why don't we average Confidence Intervals?
The issue here is that the average of CIs are simply not “efficient” (not the appropriate use of this word from a statistical perspective, but reasonable in an informal sense for this context).  If y
Why don't we average Confidence Intervals? The issue here is that the average of CIs are simply not “efficient” (not the appropriate use of this word from a statistical perspective, but reasonable in an informal sense for this context).  If you take the average of the boundaries of the CIs, you will end up with a new interval that has about the same length as the intervals used to find the averages.  Thus, you end up with a new interval that is (1) better centered on the population mean (i.e., it has higher probability that it “captures” the mean), and (2) is much larger than it would need to be to capture the mean 95% of the time (upon hypothetical replication). However, as you suggest in your query, if you aggregate your data into one larger data set, then you obtain a much narrower interval.  So, at the heart of this question is what is more important:  ¿confidence or precision?  If you are willing to sacrifice precision for confidence, then you can take the much larger interval.  If you want more precision, then you have to sacrifice some level of confidence. Here is a small bit of R code that helps demonstrate this: set.seed(1234) rep.int(NA,100) -> lens.S -> lens.L for(ijk in 1:100) { n <- 20; m <- 50; t.cv.s <- qt(1-0.05/2,n-1) t.cv.l <- qt(1-0.05/2,n*m-1) x <- rnorm(n*m,50,10) grps <- ceiling(1:{n*m}/n) Ms <- aggregate(x ~ grps,FUN=mean)[,2] SDs <- aggregate(x ~ grps,FUN=sd)[,2] CI.lo <- Ms - t.cv.s*SDs/sqrt(n) CI.hi <- Ms + t.cv.s*SDs/sqrt(n) new.CI.L <- mean(x) + c(-1,1)*t.cv.l * sd(x)/sqrt(n*m) new.CI.Savg <- c(mean(CI.lo),mean(CI.hi)) lens.L[ijk] <- diff(new.CI.L) lens.S[ijk] <- diff(new.CI.Savg) } lens <- data.frame(lens.L,lens.S) boxplot(lens)
Why don't we average Confidence Intervals? The issue here is that the average of CIs are simply not “efficient” (not the appropriate use of this word from a statistical perspective, but reasonable in an informal sense for this context).  If y
49,095
Random Forest Probability vs Logistic Regression Probability
In a nutshell, logistic regression aims to produce an estimate of the probability of belonging to a specific class. So there is only one "probability estimate" after a logistic regression. On the other hand, the probability obtained using a random forest is more like a by-product, taking advantage of having many trees (though this is implementation dependent! more details below), and therefore there are many ways to infer probabilities from a random forest.

Random forest probability Indeed, it is not a true probability, in the sense that it is just an average over the number of trees. As for the implications, they will depend on the penalty (loss) function that you use. Usually, a random forest will produce many ties (in terms of probabilities) and values of 0 and 1. This is not good when your metric is the AUC (see this article on wikipedia if you are not familiar with AUC), because of the ties, and not good either when you use a logarithmic loss (because the 0s and 1s can have a large impact on the penalty). However, there are some alternatives to improve the estimation of probabilities, as detailed here: H. Boström. Estimating class probabilities in random forests. In Proc. of the International Conference on Machine Learning and Applications, pages 211–216, 2007.

Logistic regression probability Usually, it produces a good estimate of the probability. But as opposed to a random forest, it does not take into account possible interactions of the inputs. So it may harm performance as well. I suspect that in most cases, if your penalty is just the accuracy of the model (and some interactions are important), a logistic regression would give poor results compared to a random forest.
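A minimal sketch of the difference, assuming the randomForest package is available (the data and settings are invented for illustration):

    library(randomForest)
    set.seed(1)
    n   <- 500
    x1  <- rnorm(n); x2 <- rnorm(n)
    y   <- factor(rbinom(n, 1, plogis(x1 * x2)))  # true probability involves an interaction
    dat <- data.frame(y, x1, x2)
    rf <- randomForest(y ~ x1 + x2, data = dat)
    lr <- glm(y ~ x1 + x2, data = dat, family = binomial)
    p_rf <- predict(rf, type = "prob")[, 2]  # average of (out-of-bag) tree votes
    p_lr <- predict(lr, type = "response")   # model-based probability estimate
    summary(p_rf); summary(p_lr)             # compare the two sets of "probabilities"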
Random Forest Probability vs Logistic Regression Probability
In a nutshell, logistic regression aims to produce an estimation of the probability of belonging to a specific class. So there is only one "probability estimate" after a logistic regression. On the ot
Random Forest Probability vs Logistic Regression Probability In a nutshell, logistic regression aims to produce an estimation of the probability of belonging to a specific class. So there is only one "probability estimate" after a logistic regression. On the other hand, the probability obtained using random forest is more like a by product, taking advantage of having many trees (though this is implementation dependent! more details below) and therefore, there are many ways to infer probabilities from a random forest. Random forest probability Indeed, it is not a true probability, in the sense that it is just an average over the number of trees. For the implication, they will depend on the penalty function that you use. Usually, random forest will produce many ties (in terms of probabilities) and 0 and 1. This is not good when your metric is the AUC (see this article on wikipedia if you are not familiar with AUC), because of the ties, and not good either when you observe a logarithmic loss (because the 0 and 1 can have a large impact on the penalty). However, there as some alternatives to improve the estimation of probabilities, as detailed here. H. Boström. Estimating class probabilities in random forests. In Proc. of the International Conference on Machine Learning and Applications, pages 211–216, 2007. Logistic regression probability Usually, they produce a good estimate of the probability. But as opposed to random forest, they do not take into account possible interactions of the input. So it may harm performance as well. I suspect that in most cases, if you penalty is just the accuracy of the model (and some interactions are important) a logistic regression would give poor results compared to a random forest.
Random Forest Probability vs Logistic Regression Probability In a nutshell, logistic regression aims to produce an estimation of the probability of belonging to a specific class. So there is only one "probability estimate" after a logistic regression. On the ot
49,096
What's the meaning of "Corrected for chance"?
Look at the definition of ARI in terms of the Rand index RI. Correction for chance means that the RI score is adjusted in such a way that a random result (a 'result by chance') gets a score of 0. On certain data sets, a random result can score an RI of 0.9 - on other data sets this would be a good result. The ARI is thus more interpretable, as random results score 0 on average.
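For reference, the standard form of the chance correction (Hubert & Arabie, 1985) is $$\text{ARI} = \frac{\text{RI} - E[\text{RI}]}{\max(\text{RI}) - E[\text{RI}]},$$ where the expectation is taken over random labellings with the same cluster sizes, so a random result scores around 0 and perfect agreement scores 1.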
What's the meaning of "Corrected for chance"?
Look at the definition of ARI in terms of the Rand index RI. Correction for chance means that the RI score is adjusted in a way that a random result ('result by chance') gets a score of 0. On certain
What's the meaning of "Corrected for chance"? Look at the definition of ARI in terms of the Rand index RI. Correction for chance means that the RI score is adjusted in a way that a random result ('result by chance') gets a score of 0. On certain data sets, a random result can score an RI if 0.9 - on other data sets this would be a good results. The ARI is this more interpretable, as random results always score 0.
What's the meaning of "Corrected for chance"? Look at the definition of ARI in terms of the Rand index RI. Correction for chance means that the RI score is adjusted in a way that a random result ('result by chance') gets a score of 0. On certain
49,097
Chi Square test in SPSS Exploratory Factor Analysis
This chi-square goodness-of-fit test which SPSS outputs under the Maximum likelihood or Generalized least squares methods of factor extraction is one of the many methods to estimate the "best" number of factors to extract from the data. The test assumes that the data come from a multivariate normal population. This chi-square tests the null hypothesis that the observed p x p data correlation matrix $\bf R$ is a random sample realization from a population having a correlation matrix equal to the one reproduced by the extracted m factors, i.e. to $\bf \hat{R}= AA'+U^2$ (where $\bf A$ are the extracted loadings and $\bf U^2$ are the uniquenesses). That is, that the $\bf R-\hat{R}$ residuals are random noise, sliding to $0$ as the sample size $n$ grows to infinity. That roughly means that all positive eigenvalues of $\bf R-U^2$ except the first $m$ are close to zero if the $m$-factor model fits. Under sufficiently large $n$ the test statistic has approximately a chi-square distribution with df $[(p-m)^2-(p+m)]/2$, and you can obtain a p-value ($m$ thus must be small enough to give positive df according to the formula). If the test is significant, that means $m$ factors are not enough and you should try extracting at least $m+1$ factors and test again. Note this is not a factor-by-factor test to tell you whether the i-th factor is "significant" while the i+1-th is "not significant"; it is a test of the fit of the whole $m$-factor model, like in CFA (but CFA has more options to do the testing, such as, for example, freezing some loadings as fixed parameters). The test statistic depends on $n$, so the test is sensitive to the sample size (as often in statistics, no wonder): for large $n$, the test becomes impractically sensitive to small departures from the true model, so it can suggest raising $m$ while that is not warranted from all other criterion perspectives (including interpretability of factors). Besides, departure from normality in the sample can also sharpen the p-value, thus falsely suggesting an extra factor to extract. The test could, theoretically, be computed and applied independently of the factor extraction method (still under the normality assumption). However, it is logically more apt with the Maximum likelihood method: first, because the test is ML in its nature; second, because ML extraction also requires normality; and third, because it is easiest to compute in ML as a by-product of this extraction algorithm. As for GLS extraction, it is algorithmically very similar to ML extraction, so the test is output there as well. The test is only one among many competing ways to estimate the best number of factors.
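For anyone wanting to see the same test outside SPSS, R's factanal (ML extraction) reports it; the toy data below are made up purely to show where the statistic, df, and p-value appear:

    set.seed(42)
    n  <- 200
    f1 <- rnorm(n); f2 <- rnorm(n)                 # two latent factors
    x  <- cbind(f1 + rnorm(n), f1 + rnorm(n), f1 + rnorm(n),
                f2 + rnorm(n), f2 + rnorm(n), f2 + rnorm(n))
    fa <- factanal(x, factors = 2)                 # ML extraction with m = 2
    fa$STATISTIC; fa$dof; fa$PVAL                  # chi-square, df, p-value
    p <- 6; m <- 2
    ((p - m)^2 - (p + m)) / 2                      # df from the formula above: 4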
Chi Square test in SPSS Exploratory Factor Analysis
This chi-square goodness-of-fit test which SPSS outputs under Maximum likelihood or Generalized least squares methods of factor extraction is one of the many methods to estimate the "best" number of f
Chi Square test in SPSS Exploratory Factor Analysis This chi-square goodness-of-fit test which SPSS outputs under Maximum likelihood or Generalized least squares methods of factor extraction is one of the many methods to estimate the "best" number of factors to extract from the data. The test assumes that the data comes from multivariate normal population. This chi-square tests the null hypothesis that the observed data correlation matrix p x p $\bf R$ is a random sample realization from population having correlation matrix equal to the one returned by the extracted m factors, i.e. to $\bf \hat{R}= AA'+U^2$ (where $\bf A$ are extracted loadings and $\bf U^2$ are then uniquenesses). That is, that $\bf R-\hat{R}$ residuals are random noise, sliding to $0$ as the sample size $n$ grows to infinity. That roughly means all positive eigenvalues of $\bf R-U^2$ except first $m$ ones are close to zero if the $m$-factor model fits. Under sufficiently large $n$ the test statistic has approximately chi-square distribution with df $[(p-m)^2-(p+m)]/2$, and you can obtain p-value ($m$ thus must be small enough to give positive df according to the formula). If the test is significant that means $m$ factors is not enough and you should try at least $m+1$ extraction, and test again. Note this is not a test of factor by factor to tell you if the i-th factor is "significant" while the i+1-th is "not significant", it is the test of all the $m$-factor model fit, like in CFA (but CFA has more options to do the testing, such as, for example, to freeze some loadings as fixed parameters). The test statistic is dependent on $n$ so the test is sensitive to the sample size (as often in statistics, no wonder): for large $n$, the test becomes impractically sensitive to small departures from the true model, so it can suggest you to raise $m$ while it is not warranted from all other criterion perspectives (including interpretability of factors). Besides, departure from normality in the sample also can sharpen p-value, thus falsely suggesting an extra factor to extract. The test could be, theoretically, computed and applied independently of the factor extraction method (still under normality assumption). However, it is logically more apt with Maximum likelihood method, first, because the test is ML in its nature, second, because ML extraction also requires normality, and third - because it is most easy to compute in ML as a by-product of this extraction algorithm. As for GLS extraction, it is very like ML extraction algorithmically, so why not output it here either. The test is only one among many competitive ways to estimate the best number of factors.
Chi Square test in SPSS Exploratory Factor Analysis This chi-square goodness-of-fit test which SPSS outputs under Maximum likelihood or Generalized least squares methods of factor extraction is one of the many methods to estimate the "best" number of f
49,098
how is covariate shift associated with domain adaptation?
Covariate Shift: the source domain and the target domain have the same input space $X$ and output space $Y$, and they share the same conditional distribution of $Y$ given $X$, but have different marginal distributions of $X$. Formally, $P_S(y \mid x) = P_T(y \mid x)$, but $P_S(x) \neq P_T(x)$. Domain adaptation is thus the more general concept: it covers shifts in the marginal distribution, the conditional distribution, and the joint distribution.
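A toy simulation of covariate shift (all numbers invented for illustration): the labelling mechanism $P(y \mid x)$ is shared, and only the input distribution differs between the two domains.

    set.seed(1)
    n <- 1000
    x_source <- rnorm(n, mean = 0)    # P_S(x)
    x_target <- rnorm(n, mean = 2)    # P_T(x): a different marginal of X
    gen_y <- function(x) rbinom(length(x), 1, plogis(2 * x))  # shared P(y | x)
    y_source <- gen_y(x_source)
    y_target <- gen_y(x_target)
    mean(y_source); mean(y_target)    # class proportions differ only because P(x) differs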
how is covariate shift associated with domain adaptation?
Covariate Shift: source domain and target domain have the same input space 𝑋, output space 𝑌. And they share the same conditional distributions of 𝑌, but different marginal distributions of 𝑋. Formall
how is covariate shift associated with domain adaptation? Covariate Shift: source domain and target domain have the same input space 𝑋, output space 𝑌. And they share the same conditional distributions of 𝑌, but different marginal distributions of 𝑋. Formally, $𝑃_S (y│𝑥)= 𝑃_T (y│𝑥)$, but $𝑃_S (x)≠ 𝑃_T (x)$. Obviously, domain adaptation is a more general concept, it contains marginal distribution, conditional distribution and joint distribution.
how is covariate shift associated with domain adaptation? Covariate Shift: source domain and target domain have the same input space 𝑋, output space 𝑌. And they share the same conditional distributions of 𝑌, but different marginal distributions of 𝑋. Formall
49,099
Truncated Beta parameters - method of moments
Your data is drawn from a censored Beta distribution, with the censoring point unknown as well as how many observations were censored. The PDF of the distribution is: $$p(x; a, b, c) = {x^{a-1}(1-x)^{b-1} \over \int_0^c t^{a-1}(1-t)^{b-1}\text{d}t}$$ The usual Beta functions cancel out between the numerator and the denominator. This distribution evidently has three parameters; the two parameters of the uncensored Beta distribution and the censoring point $c$. Consequently, in order to use a method-of-moments estimator, we'd need to use the first three moments. Some simulation results reported by Dishon and Weiss (1980) indicate that for the two-parameter Beta distribution the MLE is typically more efficient than the MOM estimator even for small samples unless $a=b$, as @whuber and @xi'an expected. I'd expect that adding the third moment to the MOM requirements would worsen the relative efficiency of the MOM estimator, so will continue by developing the MLE. Taking the log of the likelihood function results in: $$\ln L = (a-1)\sum \ln x_i + (b-1)\sum \ln (1-x_i) - n\ln\text{B}(c,a,b)$$ where $\text{B}(c,a,b)$ is the incomplete Beta function. For our R code, we'll instead use $\ln L = \ln(p_{\beta}(x;a,b)/P_{\beta}(c;a,b))$, as base R does not, as far as I know, have an unnormalized version of the incomplete Beta function. We'll use the L-BFGS-B multivariate function minimization technique, as it allows box constraints on the parameters; an alternative would be to transform the parameters to take on any values on the real line and transform the optimization results back.

    a <- 1.7
    b <- 50
    xfull <- rbeta(100, a, b)
    censor <- sort(xfull)[80]
    x <- xfull[xfull < censor]
    n <- length(x)
    
    lnl <- function(parms) {
      res <- sum(log(dbeta(x, parms[1], parms[2]))) - n*(log(pbeta(parms[3], parms[1], parms[2])))
      if (res == Inf | res == -Inf | is.na(res)) {
        res <- -9.9e99
      }
      res
    }
    
    start <- c(a, b, min(max(x) + 0.001, (1 + max(x))/2))
    optim(par = start, fn = lnl,
          lower = c(1e-05, 1e-05, max(x) + 1e-05),
          upper = c(999, 999, 1 - 1e-05),
          control = list(fnscale = -1), method = "L-BFGS-B")

which gives results:

    $par
    [1]  1.72653952 49.99758058  0.04634924
    
    $value
    [1] 244.2224
    
    $counts
    function gradient 
           6        6 

Not far off the actual values of $(1.7, 50, 0.04667)$, and in only six iterations. I ran this example 100 times (with different samples each time) and the maximum number of iterations required until convergence was 29, with no convergence failures.
Truncated Beta parameters - method of moments
Your data is drawn from a censored Beta distribution, with the censoring point unknown as well as how many observations were censored. The PDF of the distribution is: $$p(x; a, b, c) = {x^{a-1}(1-x)^
Truncated Beta parameters - method of moments Your data is drawn from a censored Beta distribution, with the censoring point unknown as well as how many observations were censored. The PDF of the distribution is: $$p(x; a, b, c) = {x^{a-1}(1-x)^{b-1} \over \int_0^c t^{a-1}(1-t)^{b-1}\text{d}t}$$ The usual Beta functions cancel out between the numerator and the denominator. This distribution evidently has three parameters; the two parameters of the uncensored Beta distribution and the censoring point $c$. Consequently, in order to use a method-of-moments estimator, we'd need to use the first three moments. Some simulation results reported by Dishon and Weiss (1980) indicate that for the two-parameter Beta distribution the MLE is typically more efficient than the MOM estimator even for small samples unless $a=b$, as @whuber and @xi'an expected. I'd expect that adding the third moment to the MOM requirements would worsen the relative efficiency of the MOM estimator, so will continue by developing the MLE. Taking the log of the likelihood function results in: $$\ln L = (a-1)\sum \ln x_i + (b-1)\sum \ln (1-x_i) - n\ln\text{B}(c,a,b)$$ where $\text{B}(c,a,b)$ is the incomplete Beta function. For our R code, we'll instead use $\ln L = \ln(p_{\beta}(x;a,b)/P_{\beta}(c;a,b))$, as base R does not, as far as I know, have an unnormalized version of the incomplete Beta function. We'll use the L-BFGS-B multivariate function minimization technique, as it allows box constraints on the parameters; an alternative would be to transform the parameters to take on any values on the real line and transform the optimization results back. a=1.7 b=50 xfull=rbeta(100,a,b) censor <- sort(xfull)[80] x=xfull[xfull<censor] n <- length(x) lnl <- function(parms) { res <- sum(log(dbeta(x, parms[1], parms[2]))) - n*(log(pbeta(parms[3],parms[1],parms[2]))) if (res == Inf | res == -Inf | is.na(res)) { res = -9.9e99 } res } start <- c(a, b, min(max(x)+0.001, (1+max(x))/2)) optim(par=start, fn=lnl, lower=c(1e-05, 1e-05, max(x)+1e-05), upper=c(999, 999, 1-1e-05), control=list(fnscale=-1), method="L-BFGS-B") which gives results: $par [1] 1.72653952 49.99758058 0.04634924 $value [1] 244.2224 $counts function gradient 6 6 Not far off the actual values of $(1.7, 50, 0.04667)$, and in only six iterations. I ran this example 100 times (with different samples each time) and the maximum number of iterations required until convergence was 29, with no convergence failures.
Truncated Beta parameters - method of moments Your data is drawn from a censored Beta distribution, with the censoring point unknown as well as how many observations were censored. The PDF of the distribution is: $$p(x; a, b, c) = {x^{a-1}(1-x)^
49,100
Understanding svycontrast in R with simple random sampling
svycontrast computes "linear or nonlinear contrasts of estimates produced by survey functions (or any object with coef and vcov methods)." That is, it takes the estimates that it is given and computes functions of them. It does not do anything with the individual data -- it does not even see the individual data (in general). When you do svycontrast(a, quote(api00^2 - api99^2)) you are asking for the difference between the estimate named api00, squared, and the estimate named api99, also squared, and you get (with one more digit than is printed) > 656.585^2 - 624.685^2 [1] 40872.51 This is the difference in the squares of the means of the estimates. If you do svycontrast(a, quote(`I(api00^2)` - `I(api99^2)`)) you are asking for a linear contrast: the estimate named I(api00^2) minus the estimate named I(api99^2). The answer is > 448697.875-408791.545 [1] 39906.33 Because this is a linear contrast, you could also get it with > svycontrast(a, c(0,0,1,-1)) contrast SE contrast 39906 2589.1 The tricky part is standard errors. The standard errors are computed by the delta method. That is, if the covariance matrix of the input estimates is $V$, and the input estimate vector is $\alpha$ and the output estimate vector is $\gamma$, the estimated variance of $\hat\gamma$ is $$\widehat{\mathrm{var}}[\hat\gamma] = \frac{\partial\gamma}{\partial\alpha}^TV\frac{\partial\gamma}{\partial\alpha}$$ For a linear contrast, the derivative is just the vector of coefficients. For a nonlinear contrast, the expression you give is symbolically differentiated.
Understanding svycontrast in R with simple random sampling
svycontrast computes "linear or nonlinear contrasts of estimates produced by survey functions (or any object with coef and vcov methods)." That is, it takes the estimates that it is given and computes
Understanding svycontrast in R with simple random sampling svycontrast computes "linear or nonlinear contrasts of estimates produced by survey functions (or any object with coef and vcov methods)." That is, it takes the estimates that it is given and computes functions of them. It does not do anything with the individual data -- it does not even see the individual data (in general). When you do svycontrast(a, quote(api00^2 - api99^2)) you are asking for the difference between the estimate named api00, squared, and the estimate named api99, also squared, and you get (with one more digit than is printed) > 656.585^2 - 624.685^2 [1] 40872.51 This is the difference in the squares of the means of the estimates. If you do svycontrast(a, quote(`I(api00^2)` - `I(api99^2)`)) you are asking for a linear contrast: the estimate named I(api00^2) minus the estimate named I(api99^2). The answer is > 448697.875-408791.545 [1] 39906.33 Because this is a linear contrast, you could also get it with > svycontrast(a, c(0,0,1,-1)) contrast SE contrast 39906 2589.1 The tricky part is standard errors. The standard errors are computed by the delta method. That is, if the covariance matrix of the input estimates is $V$, and the input estimate vector is $\alpha$ and the output estimate vector is $\gamma$, the estimated variance of $\hat\gamma$ is $$\widehat{\mathrm{var}}[\hat\gamma] = \frac{\partial\gamma}{\partial\alpha}^TV\frac{\partial\gamma}{\partial\alpha}$$ For a linear contrast, the derivative is just the vector of coefficients. For a nonlinear contrast, the expression you give is symbolically differentiated.
Understanding svycontrast in R with simple random sampling svycontrast computes "linear or nonlinear contrasts of estimates produced by survey functions (or any object with coef and vcov methods)." That is, it takes the estimates that it is given and computes