idx | question | answer
---|---|---
50,401 | predictive distribution of bayesian logistic regression | (1) involves a multivariate Gaussian with potentially off-diagonal covariance matrix terms. You can see this from the previous section of Bishop, which gives the form of $q(w)$ as
$$
q(w) = \mathcal{N}(w \mid w_{\text{MAP}}, S_N),
$$
where $S_N$ can generally have off-diagonal terms. This is replaced by an integral over a single variable (as opposed to $D$ variables) with a univariate Gaussian. You see that even then, the integral isn't analytically tractable, which the author fixes by approximating the sigmoid function with a probit, $\Phi(\lambda a)$, and making use of the identity,
$$
\int \Phi(\lambda a) \mathcal{N}(a \mid \mu, \sigma^2)\, da = \Phi\left(\frac{\mu}{(\lambda^{-2} + \sigma^2)^{1/2}}\right),
$$
which wouldn't have been possible had he stuck with the original multivariate Gaussian with an unknown covariance. (I'm not sure if an analogous identity exists for a Gaussian with arbitrary covariance, though if you are able to reduce the expression to one that makes use of a simpler identity, then why not?)
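For what it's worth, the identity is easy to check numerically. Here is a minimal R sketch; the values of $\lambda$, $\mu$ and $\sigma$ are arbitrary choices ($\lambda^2 = \pi/8$ is the usual constant that makes $\Phi(\lambda a)$ match the logistic sigmoid):
# Numerical check of the probit-Gaussian identity above.
lambda <- sqrt(pi / 8)
mu     <- 0.7
sigma  <- 1.3
lhs <- integrate(function(a) pnorm(lambda * a) * dnorm(a, mean = mu, sd = sigma),
                 lower = -Inf, upper = Inf)$value
rhs <- pnorm(mu / sqrt(lambda^(-2) + sigma^2))
c(lhs = lhs, rhs = rhs)  # the two numbers agree to the accuracy of integrate()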
As for the projection comment: consider the following integral,
$$
S = \int_{-\infty}^\infty \int_{-\infty}^\infty f(a x + by) g(x,y) dx dy
$$
where $a$ and $b$ are constants. The integral is over the entire two-dimensional plane. Imagine, on the plane, the directional vector $(a,b)'$. You can define a line spanning the space along this direction. The inner product of any vector with $(a,b)'$ is a projection of that vector on this line. For example, suppose we have a vector $(c,d)'$ that's orthogonal to $(a,b)'$, so that $(c,d) (a,b)' = 0.$ Then $(x+c, y+d) (a,b)' = (x,y)(a,b)'.$ This means that the value of $f(ax + by)$ is determined entirely by the projection of $(x,y)'$ on the line defined by the directional vector $(a,b)'.$ Any change in a direction orthogonal to that line has no impact on the value of $f$.
We can then perceive this integral as an integral of $f$ over the line parameterized by a single variable $\lambda,$ given by a weighting function over that parameter, where the weight is given by $$p(\lambda) = \int \delta(\lambda - (a x + b y)) g(x,y) dx dy.$$
Consider what this integral is, for a given value of $\lambda.$ This is an integral of $g$ only over the line defined by $a x + b y = \lambda,$ which is perpendicular to the vector $(a,b)'.$ If $g$ is a two-dimensional probability distribution then you can think of this as marginalizing the distribution $g(x,y)$ over the direction orthogonal to $(a,b)'.$ If $g$ is a Gaussian, this marginalization is easy to do, and is done by Bishop in the passage you skipped over.
$S$ is an integral over $x$ and $y$, though you can re-imagine it as an integral over the line given by the direction $(a,b)'$, where at each incremental step another integral is taken over the line orthogonal to $(a,b)'$ at that given point (so a rotation in the directions in which we integrate). This can be generalized to $D$ dimensions as an integral over a one-dimensional line, where at each increment a $(D-1)$-dimensional integral takes place. Any function that depends only on a point's projection on the line is a constant of the $(D-1)$-dimensional integral.
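A quick Monte Carlo sketch of the projection argument in R; the values of $a$ and $b$ and the choice of $f$ are arbitrary, and $g$ is taken to be a standard bivariate Gaussian, so the projection $\lambda = ax + by$ is Gaussian with variance $a^2 + b^2$:
# The 2-D integral of f(a*x + b*y) against a standard bivariate Gaussian equals
# a 1-D integral of f against the Gaussian distribution of the projection.
set.seed(1)
a <- 2; b <- -1
f <- function(t) plogis(t - 1)   # any function of the projection a*x + b*y
xy   <- matrix(rnorm(2e6), ncol = 2)
S_2d <- mean(f(a * xy[, 1] + b * xy[, 2]))
S_1d <- integrate(function(l) f(l) * dnorm(l, sd = sqrt(a^2 + b^2)),
                  lower = -Inf, upper = Inf)$value
c(two_dim = S_2d, one_dim = S_1d)  # agree up to Monte Carlo error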
50,402 | Evaluation of log Vs. non log models | Yes, what you describe is a logical approach.
Aside from (back-)transforming the response variable, I would suggest considering a model that does not rely heavily on assumptions regarding the model's error structure and/or the distribution of the response variable. Immediate regression-like alternatives would be robust regression and quantile regression. Similarly, there is little reason not to use tree-based (like CHAID trees) or gradient-boosting approaches (like XGBoost) if you are mostly interested in prediction rather than statistical inference.
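For illustration, a minimal R sketch of those two regression-like alternatives; the simulated data and variable names are made up for the example:
# Robust and quantile regression on simulated data with heavy-tailed errors.
library(MASS)      # rlm(): robust regression via M-estimation
library(quantreg)  # rq(): quantile regression
set.seed(1)
dat <- data.frame(x = runif(200, 0, 3))
dat$y <- 1 + 2 * dat$x + rt(200, df = 2)          # heavy-tailed noise
fit_robust   <- rlm(y ~ x, data = dat)            # downweights outlying observations
fit_quantile <- rq(y ~ x, tau = 0.5, data = dat)  # models the conditional median
summary(fit_robust)
summary(fit_quantile)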
50,403 | How to compute a weighted AUC? | Not sure if this question is still valid, but you can use the PRROC package in R for weighted AUC computations.
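For what it's worth, a sketch of how that might look; the weighted ("soft-label") interface shown here, with per-observation weights passed via weights.class0, is from memory, so check ?roc.curve in PRROC for the exact arguments:
# Weighted ROC AUC via PRROC's soft-label interface (simulated scores and weights).
library(PRROC)
set.seed(1)
scores <- rnorm(500)                       # classifier scores for all observations
w_pos  <- plogis(2 * scores + rnorm(500))  # each observation's weight of being positive
wroc <- roc.curve(scores.class0 = scores, weights.class0 = w_pos, curve = TRUE)
wroc$auc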
50,404 | Plotting (multilevel) multiple regression [closed] | There are a number of packages that support the plotting of marginal effects of fixed effects in a mixed model. I'm aware of the following: visreg, effects, ggeffects and sjPlot. In what follows below, I illustrate the usage of visreg and ggeffects.
Using visreg
The visreg package supports plotting fixed effects as well as random effects.
An example using the data in the question would be:
library(ggplot2)
library(lme4)
library(visreg)
set.seed(142857)
groups <- floor(runif(1000, min=1, max=7))
sex <- rep(c("Male", "Female"), times= 500)
value1 <- runif(1000, min=1, max=10)
value2 <- runif(1000, min=1, max=100)
value3 <- runif(1000, min=1, max=200)
response <- runif(1000, min=1, max=100)
df <- data.frame(groups, sex, response, value1, value2, value3)
model <- lmer(scale(response) ~ scale(value1) + scale(value2) + scale(value3) + factor(sex) + (1|groups), data=df)
visreg(model
, "value1" # Variable to plot
, cond = list(value2 = 0, value3 = 0, sex = "Female") # Values of the other variables in the model
, gg = TRUE # Use ggplot2 for plotting?
)
The points display the partial residuals. If you set the option gg to TRUE, ggplot2 is used for plotting and base R otherwise.
By default, the values of the other variables in the model are set at their median or mode for continuous and categorical variables, respectively. Using the argument cond, you can set the values of the variables at arbitrary values.
Using ggeffects
The package ggeffects is also able to plot marginal effects. It can take the variances of the random effects into account (but this doesn't work here with these artificial data).
Here is an example:
library(ggplot2)
library(lme4)
library(ggeffects)
set.seed(142857)
groups <- floor(runif(1000, min=1, max=7))
sex <- rep(c("Male", "Female"), times= 500)
value1 <- runif(1000, min=1, max=10)
value2 <- runif(1000, min=1, max=100)
value3 <- runif(1000, min=1, max=200)
response <- runif(1000, min=1, max=100)
df <- data.frame(groups, sex, response, value1, value2, value3)
model <- lmer(scale(response) ~ scale(value1) + scale(value2) + scale(value3) + factor(sex) + (1|groups), data=df)
pr <- ggpredict(model, "value1")
plot(pr)
50,405 | Why does locally connected layer work in convolutional neural network? | In a convolution layer the filter has an output depth parameter. So a 5x5 filter is actually 5x5xd, d being the output depth. This essentially means that there are 'd' different filters, each of which will (learn to) have different weights. Each filter will detect the presence of a particular pattern across a feature map (e.g. an input image) but different filters will likely learn to detect unrelated features. For example one filter might have learnt to find vertical lines while a second one might have learnt to detect slanted lines at 45 degrees.
So the application of a convolution layer will in fact generate new high-level features (like straight lines) which are unrelated, given low level features (pixels in a neighborhood) that are likely to be highly correlated.
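A toy base-R sketch of the 'd different filters' idea, with made-up sizes: one 8x8 input and d = 3 different 5x5 filters, each slid over the whole input to produce its own feature map (a 5x5x3 filter bank):
# d independently weighted 5x5 filters applied to one 8x8 input.
set.seed(1)
img     <- matrix(rnorm(64), 8, 8)
d       <- 3
filters <- lapply(1:d, function(i) matrix(rnorm(25), 5, 5))  # d different weight sets
convolve2d <- function(x, k) {
  n   <- nrow(x) - nrow(k) + 1
  out <- matrix(0, n, n)
  for (i in 1:n) for (j in 1:n)
    out[i, j] <- sum(x[i:(i + nrow(k) - 1), j:(j + ncol(k) - 1)] * k)  # same weights at every position
  out
}
feature_maps <- lapply(filters, function(k) convolve2d(img, k))
length(feature_maps)    # 3 feature maps, one per filter
dim(feature_maps[[1]])  # each is 4 x 4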
50,406 | Why does locally connected layer work in convolutional neural network? | This is because if a filter is successful in extracting a useful feature from a small portion of the image, we would like to extract the same feature from other parts of the image. This is related to the fact that we would like to extract translation-invariant features from the image, that is, we want to extract features from the image which do not change when the objects in the image are moved to another place in the image. This is not specific to CNNs but is a common paradigm in machine vision.
50,407 | Why does locally connected layer work in convolutional neural network? | You could see the Convolutional layers as a dimension reduction technique. Indeed, nearby pixels share a lot of covariance and ideally the features for a machine learning approach are independent.
If the convolutional operator is replaced by a specific convolutional operator where all the weights are $1/d^2$ (i.e. the average), it effectively reduces the dimensions of the picture. This operation, however, is rather simple and not tuned to our specific learning goal. By learning the weights of a conv operator we can 'tune' our 'average' to our learning goal.
The conv operation works so well for 2d and 3d inputs because it gives structure to this dimension reduction. It focuses on learning the variance of nearby pixels. By removing the variances of nearby pixels, the later fully-connected layers can be more effective.
50,408 | Why does locally connected layer work in convolutional neural network? | As per Ian Goodfellow et al. from deeplearningbook:
Locally connected layers are useful when we know that each feature should be a function of a small part of space, but there is no reason to think that the same feature should occur across all of space. For example, if we want to tell if an image is a picture of a face, we only need to look for the mouth in the bottom half of the image.
It can also be useful to make versions of convolution or locally connected layers in which the connectivity is further restricted, for example to constrain each output channel $i$ to be a function of only a subset of the input channels $l$.
50,409 | Should between-subject factors be included as random slopes for item in a mixed effects model? | See here. In short, "a model specifying random slopes for a between subjects variable would be unidentifiable." But you can still include within-subject factors as random slopes for subject RE.
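As a hedged illustration of that distinction, here is an lme4-style sketch; the data are simulated only so the code runs, and the variable names (wsfactor for a within-subject factor, bsfactor for a between-subject factor) are made up:
# Within-subject factors can get by-subject random slopes; a between-subject factor cannot
# (it never varies within a subject), though it can still vary across items.
library(lme4)
set.seed(1)
dat <- expand.grid(subject = factor(1:20), item = factor(1:10), wsfactor = c("a", "b"))
dat$bsfactor <- ifelse(as.integer(dat$subject) <= 10, "ctrl", "treat")  # constant within subject
dat$y <- rnorm(nrow(dat))
# Identifiable specification (likely singular on these pure-noise data, but well-defined):
m_ok <- lmer(y ~ wsfactor * bsfactor +
               (1 + wsfactor | subject) +   # within-subject slope by subject: fine
               (1 + bsfactor | item),       # the between-subject factor varies within items
             data = dat)
# Problematic: (1 + bsfactor | subject) -- a by-subject slope for a factor that is
# constant within each subject, which is the unidentifiable case quoted above.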
50,410 | To what extent are convolutional neural networks inspired by biology? | The paper https://arxiv.org/pdf/1807.04587.pdf (July 2018) reports on some efforts to find artificial neural network learning algorithms that are biologically plausible. They focus mainly on backpropagation, but also discuss weight sharing. They review a lot of work by major researchers in the field and others.
They conclude that algorithms that work well are not plausible, and algorithms that are plausible don't work well. Their references look like a good starting point for further reading, and it looks like the whole question is heating up again a little bit.
I think there is some confusion about what is meant by convolutional. ConvNets, in ANN research, use weight sharing (aka weight tying). There is a tutorial at https://www.quora.com/What-exactly-is-meant-by-shared-weights-in-convolutional-neural-network.
Weight sharing, not convolution per se, is the point here. It is essential for translational invariance, which is one of ConvNets' most important claims. Without it they wouldn't be able to learn anything in reasonable time. So folks in ANN research tend to assume that "convolutional" implies weight sharing.
In other disciplines, I think there is no such notion as weight sharing. Convolutional structures are familiar in the brain, as @Carl says, but there seems to be nothing known in the brain that is like weight sharing in form or function.
So to answer the OP's original question: convolution is highly plausible, but weight sharing is not. Therefore there is no biologically plausible model for ConvNets, in vision or any other domain, nor for some other kinds of ANN that also use weight sharing. (One could also say the same thing about all ANNs that use backprop, which includes most supervised learning, whether convolutional or not.)
Caveat: I only glanced at the paper @Carl referenced. Too much chemistry for me, so I just assumed that it has nothing about convolution with weight sharing.
50,411 | To what extent are convolutional neural networks inspired by biology? | Related to the paper linked here by @JWG, here is a lecture by Hinton on the same topic; also be sure to take a look at his recently explored notion of capsule networks:
https://www.youtube.com/watch?v=rTawFwUvnLE
And in more general terms, Hinton is certainly one of my best first bets whenever trying to bridge the gap between the brain and modeling via ANNs.
50,412 | In mathematical optimization, are sequential quadratic programming and sequential least squares programming the same thing? | Actually, SQP and SLSQP solve the same Quadratic Programming (QP) subproblem (see subproblem here) on each algorithm step.
In SQP, the QP subproblem is solved by quadratic programming methods.
In SLSQP, to solve the QP subproblem you $LDL^T$-factorize the Hessian of the Lagrangian and then solve a linear least squares problem.
Check the article:
Kraft, D. (1988). A software package for sequential quadratic programming. Tech. Rep. DFVLR-FB 88-28, DLR German Aerospace Center -- Institute for Flight Mechanics, Koln, Germany.
50,413 | How can I obtain prediction intervals for survival prediction in the Cox model? | It's perhaps useful to note in your example that when age is taken to be the empirical mean, there is no difference in the plot.survfit output: e.g. plot(survfit(fit, newdata = data.frame(age=mean(ovarian$age)))) and plot(survfit(fit)) produce the same results. This is because the Cox model uses the hazard ratio to describe differences in survival between ovarian cancer patients of varying ages.
The survivor function is related to the hazard via:
$$S(T; x) = \exp \left\{ -\int_{0}^T \lambda_x (t)\, dt\right\} $$
or, according to the cox model specification:
$$S(T; x) = \exp \left\{ -\int_{0}^T \lambda_0 (t) \exp(X\beta)\, dt\right\} $$
This can be written in terms of $X$ and $\beta$ as an exponential modification to the empirical survivor function
$$S(T; x) = S_0(T) ^ {\exp(X\beta)} $$
If this is confusing, it is easier to think of age as centered and scaled; $S_0$ is the unstratified survivor function, which does not vary with $X$.
To verify this, the survival at 1,000 days is 0.5204807. Age 60 is 3.834558 years above the mean age, resulting in a hazard ratio of $\exp(3.83\times 0.16) = 1.858$.
Verifying this is the exponent, you find the 1,000 day survival for an ovarian cancer patient age 60 is tail(survfit(fit, newdata=x_new)$surv, 1) = 0.2971346 which equals: $0.5204807^{1.858}$.
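A short R sketch of that check, assuming the model in the question was fit as something like coxph(Surv(futime, fustat) ~ age, data = ovarian) (the exact call isn't shown above, so treat it as an assumption):
# Verify S(t | age = 60) = S0(t)^exp(beta * (60 - mean(age))), where S0 is survfit(fit)'s
# curve at the reference (mean) age. The coxph() call is an assumed reconstruction.
library(survival)
fit <- coxph(Surv(futime, fustat) ~ age, data = ovarian)
S0       <- survfit(fit)                                         # baseline curve at the mean age
S_manual <- S0$surv ^ exp(coef(fit) * (60 - mean(ovarian$age)))  # rescale for a 60-year-old
S_direct <- survfit(fit, newdata = data.frame(age = 60))$surv
max(abs(S_manual - S_direct))  # essentially zero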
That means that calculating the standard error for the "scaled" Kaplan-Meier curve is just a delta-method exercise, treating the survival curve and the coefficients as independent.
If $S(T, x) = S_0(T) ^{\exp(\hat{\beta} X)}$ then
$$\text{var} \left( S(T, x) \right) \approx \frac{\partial S(T, x)}{\partial [S_0(T), \beta]} \left[
\begin{array}{cc}
\text{var} \left(S_0(T) \right) & 0 \\
0 & \text{var} \left(\hat{\beta} \right) \\
\end{array} \right] \frac{\partial S(T, x)}{\partial [S_0(T), \beta]}^T $$
But as a note, calculation of bounds for the survivor function is still a huge area of research. I don't think this approach (using empirical bounds for the survivor function) takes adequate advantage of the proportional hazards assumption.
50,414 | Explanation of the 'free bits' technique for variational autoencoders | From what I can tell, and I'd love to be corrected as this seems quite interesting:
The 'IAF' paper contains the relevant description of this 'free bits' method, in particular around equation (15). This identifies the term as relating to a modified objective function, "We then use the following objective, which ensures that using less than $\lambda$ nats of information per subset $j$ (on average per minibatch $M$) is not advantageous:"
$\tilde L_\lambda = E_{x\sim M} E_{q(z|x)}[\log p(x|z)] - \sum_{j=1}^K \text{maximum}(\lambda, E_{x\sim M} [D_{KL}(q(z_j |x)||p(z_j ))]) $
The $E_{x \sim M}$ notation is I believe $x$ within a minibatch $M$ and is related to the stochastic gradient ascent approach and hence not of the essence to your question. Let's ignore it, leaving:
$E_{q(z|x)}[\log p(x|z)] - \sum_{j=1}^K \text{maximum}(\lambda, D_{KL}(q(z_j |x)||p(z_j ))) $
They've split the latent variables into $K$ groups. As this seems to be icing on the cake, let's ignore it for the moment, leaving:
$E_{q(z|x)}[\log p(x|z)] - \text{maximum}(\lambda, D_{KL}(q(z |x)||p(z))) $
I think we're approaching ground here. If we dumped the maximisation we would be back to a vanilla Evidence Lower BOund (ELBO) criterion for Variational Bayes methods.
If we look at the expression $D_{KL}(q(z |x)||p(z ))$, this is the extra message length required to express a datum $z$ if the prior $p$ is used instead of the variational posterior $q$. This means that if our variational posterior is close to the prior in KL divergence, the $\max$ will take the value $\lambda$, and our current solution will be penalised more heavily than under a vanilla ELBO.
In particular if we have a solution where $D_{KL}<\lambda$ then we don't have to trade anything off in order to increase the model complexity a little bit (i.e. move $q$ further from $p$). I guess this is where they get the term 'free bits' from - increasing the model complexity for free, up to a certain point.
Bringing back the stuff we ignored: the summation over $K$ is establishing a complexity freebie quota per group of latent variables. This could be useful in some model where one group of parameters would otherwise hoover up the entire quota.
[EDIT] For instance suppose we had a (totally made up) model involving some 'filter weights' and some 'variances': if they were treated as sharing the same complexity quota, perhaps after training we would find that the 'variances' were still very close to the prior because the 'filter weights' had used up the free bits. By splitting the variables into two groups, we might be able to ensure the 'variances' also used some free bits / i.e. get pushed away from the prior. [/EDIT]
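A tiny numeric sketch of that modified KL term in R, with toy numbers:
# Free-bits modification: groups whose KL is below lambda are clamped to lambda, so pushing
# them further from the prior (up to lambda) costs nothing extra in the objective.
lambda   <- 0.5                        # free bits per latent group, in nats
kl_group <- c(0.10, 0.45, 2.30, 0.80)  # toy values of KL(q(z_j|x) || p(z_j)) for K = 4 groups
vanilla_penalty   <- sum(kl_group)                # standard ELBO KL term
free_bits_penalty <- sum(pmax(lambda, kl_group))  # groups 1 and 2 contribute lambda instead
c(vanilla = vanilla_penalty, free_bits = free_bits_penalty)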
The expectation over $x$ in a minibatch - well I'm not as familiar with the notation - but from their quotation above the complexity quota is reset at the end of each mini batch.
[EDIT] Suppose we had a model where some of the latent variables $z$ were observation specific (think cluster indicators, matrix factors, random effects etc). Then for each observation we'd have a ration of something like $\lambda/N$ free bits. So as we got more data the ration would get smaller. By making $\lambda$ minibatch specific we could fix the ration size, so that even as more data came in overall the ration wouldn't go to zero.
[/EDIT]
The `IAF' paper contains the relevant description of this 'free bits' method. In particular around equation (15). | Explanation of the 'free bits' technique for variational autoencoders
From what I can tell, and I'd love to be corrected as this seems quite interesting:
The `IAF' paper contains the relevant description of this 'free bits' method. In particular around equation (15). This identifies the term as relating to a modified objective function, "We then use the following objective, which ensures that using less than $\lambda$ nats of information per subset $j$ (on average per minibatch $M$) is not advantageous:"
$\tilde L_\lambda = E_{x∼M} E_{q(z|x)}[\log p(x|z)] - \sum_{j=1}^K \text{maximum}(λ, E_{x∼M} [D_{KL}(q(z_j |x)||p(z_j ))]) $
The $E_{x \sim M}$ notation is I believe $x$ within a minibatch $M$ and is related to the stochastic gradient ascent approach and hence not of the essence to your question. Let's ignore it, leaving:
$E_{q(z|x)}[\log p(x|z)] - \sum_{j=1}^K \text{maximum}(λ, D_{KL}(q(z_j |x)||p(z_j ))) $
They've split the latent variables into $K$ groups. As this seems to be icing on the cake, let's ignore it for the moment, leaving:
$E_{q(z|x)}[\log p(x|z)] - \text{maximum}(λ, D_{KL}(q(z |x)||p(z))) $
I think we're approaching ground here. If we dumped the maximisation we would be back to a vanilla Evidence Lower BOund (ELBO) criterion for Variational Bayes methods.
If we look at the expression $D_{KL}(q(z |x)||p(z ))$, this is the extra message length required to express a datum $z$ if the prior $p$ is used instead of the variational posterior $q$. This means that if our variational posterior is close to the prior in KL divergence, the $\max$ will take the value $\lambda$, and our current solution will be penalised more heavily than under a vanilla ELBO.
In particular if we have a solution where $D_{KL}<\lambda$ then we don't have to trade anything off in order to increase the model complexity a little bit (i.e. move $q$ further from $p$). I guess this is where they get the term 'free bits' from - increasing the model complexity for free, up to a certain point.
Bringing back the stuff we ignored: the summation over $K$ is establishing a complexity freebie quota per group of latent variables. This could be useful in some model where one group of parameters would otherwise hoover up the entire quota.
[EDIT] For instance suppose we had a (totally made up) model involving some 'filter weights' and some 'variances': if they were treated as sharing the same complexity quota, perhaps after training we would find that the 'variances' were still very close to the prior because the 'filter weights' had used up the free bits. By splitting the variables into two groups, we might be able to ensure the 'variances' also used some free bits / i.e. get pushed away from the prior. [/EDIT]
The expectation over $x$ in a minibatch - well I'm not as familiar with the notation - but from their quotation above the complexity quota is reset at the end of each mini batch.
[EDIT] Suppose we had a model where some of the latent variables $z$ were observation specific (think cluster indicators, matrix factors, random effects etc). Then for each observation we'd have a ration of something like $\lambda/N$ free bits. So as we got more data the ration would get smaller. By making $\lambda$ minibatch specific we could fix the ration size, so that even as more data came in overall the ration wouldn't go to zero.
[/EDIT] | Explanation of the 'free bits' technique for variational autoencoders
From what I can tell, and I'd love to be corrected as this seems quite interesting:
The `IAF' paper contains the relevant description of this 'free bits' method. In particular around equation (15). |
50,415 | Best ANN Architecture for high-energy physics problem | Your architecture looks fine. I mean, it's straight out of MNIST lenet. It's a good solid network to start from. You can then evolve it over time, according to your loss curves, by adding capacity, ie layers, channels per layer, etc.
You could also consider adding dropout, for regularization.
As far as convergence... it's pretty much impossible not to converge, unless you are using too high a learning rate. So, divide your learning rate by 10, until it starts converging. You can just pick some tiny subset of eg 32 images, and just train on those images, using smaller and smaller learning rates, until the error on those 32 images drops to zero (which it should, because you'll overfit them, easily).
Then, once the loss on 32 images is dropping to zero, ie you've picked a small enough learning rate, fixed any bugs etc, then you can add more and more data, and then start increasing the capacity of your network, ie adding layers etc. And you probably want to add dropout, it's really good at encouraging generalization to test data.
Edit: oh, you're using Adadelta etc, which should handle learning rate for you. Well... I've mostly used SGD, and SGD is somewhat standard for deep nets (though gradually falling out of favor a bit recently). You might consider trying SGD, with a small learning rate, and seeing what happens.
50,416 | Best ANN Architecture for high-energy physics problem | Thomas Russell
First of all, it is a very interesting problem to solve.
I think you should not merge the features extracted from the CNN with the other variables. You can instead train the CNN end-to-end to predict class scores, train a separate neural network on the other variables to predict class scores, and then merge both predictions somehow (e.g. take the mean) for the final prediction. The performance of this strategy will also depend on how you normalize the inputs and design your networks.
50,417 | Probability one Weibull is less than another, given upper-bound | OK, I have made some progress.
The solution for this general kind of problem is described in this post:
https://math.stackexchange.com/questions/396386/finding-an-expression-for-the-probability-that-one-random-variable-is-less-than
Assume for simplicity that $X$ and $Y$ have respectively density functions $f_X(x)$ and $f_Y(y)$. Then
$$\Pr(X\lt Y|Y\lt k)=\frac{\Pr((X\lt Y)\cap (Y\lt k))}{\Pr(Y\lt k)}.$$
Both numerator and denominator can be expressed as integrals. For the numerator, we want $\int_{y=0}^k\int_{x=0}^y f_X(x)f_Y(y)\,dx\,dy$.
I need to modify this a bit to get my $P(Y<X|X<t, Y<t)$, since there are two conditions. But first, I think the harder part is the numerator. The PDF for the Weibull distribution is
$$\nu \lambda x^{\nu-1}\text{exp}(-\lambda x^\nu)$$
(This is the PH parameterization. I had trouble doing the next step with the AFT parameterization.) I plug this into the 'numerator' from the other post, then plug that into Wolfram Mathematica:
FullSimplify[Integrate[(v*L*x^(v - 1)*Exp[-(L*x^v)])*(n*M*y^(n - 1)*Exp[-(M*y^n)]), {x, 0, U}, {y, 0, y}], L > 0 && M > 0 && n > 0 && v > 0]
This gives:
$$\text{exp}(-\lambda_1 x^\nu_1 -\lambda_2 y^\nu_2) * (-1+\text{exp}(-\lambda_1 x^\nu_1))*(-1+\text{exp}(-\lambda_2 y^\nu_2))$$
So I now have the numerator. To get the denominator, I just use the CCDF of the Weibull distribution:
$$P(X<t) \cap P(Y<t) = \text{exp}(-\lambda_1 t^{\nu_1}) * \text{exp}(-\lambda_2 t^{\nu_2})$$
So I think dividing the first equation by the second will give me what I want. I'd like to run some simulations to confirm that I haven't made any errors, but I believe I've arrived at my solution.
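The simulation mentioned above is straightforward in R; here is a sketch. The shape/rate values and the cutoff t are arbitrary, and rweibull's scale is obtained from the PH parameterization via scale = lambda^(-1/shape):
# Monte Carlo estimate of P(Y < X | X < t, Y < t) for two Weibulls with S(x) = exp(-lambda * x^nu).
set.seed(1)
n <- 1e6
nu1 <- 1.5; lambda1 <- 0.02
nu2 <- 1.2; lambda2 <- 0.05
t_cut <- 30
x <- rweibull(n, shape = nu1, scale = lambda1^(-1 / nu1))
y <- rweibull(n, shape = nu2, scale = lambda2^(-1 / nu2))
keep <- (x < t_cut) & (y < t_cut)  # condition on both falling below the upper bound
mean(y[keep] < x[keep])            # empirical P(Y < X | X < t, Y < t)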
The solution for this general kind of problem is described in this post:
https://math.stackexchange.com/questions/396386/finding-an-expression-for-the-probability-that-o | Probability one Weibull is less than another, given upper-bound
OK, I have made some progress.
The solution for this general kind of problem is described in this post:
https://math.stackexchange.com/questions/396386/finding-an-expression-for-the-probability-that-one-random-variable-is-less-than
Assume for simplicity that $X$ and $Y$ have respectively density functions $f_X(x)$ and $f_Y(y)$. Then
$$\Pr(X\lt Y|Y\lt k)=\frac{\Pr((X\lt Y)\cap (Y\lt k)}{\Pr(Y\lt k))}.$$
Both numerator and denominator can be expressed as integrals. For the numerator, we want $\int_{y=0}^k\int_{x=0}^y f_X(x)f_Y(y)\,dx\,dy$.
I need to modify this a bit to get my $P(Y<X|X<t, Y<t)$, since there are two conditions. But first, I think the harder part is the numerator. The PDF for the Weibull distribution is
$$\nu \lambda x^{\nu-1}\text{exp}(-\lambda x^\nu)$$
(This is the PH parameterization. I had trouble doing the next step with the AFT parameterization.) I plug this into the 'numerator' from the other post, then plug that into Wolfram Mathematica:
FullSimplify[Integrate[(v*L*x^(v - 1)*Exp[-(L*x^v)])*(n*M*y^(n - 1)*Exp[-(M*y^n)]), {x, 0, U}, {y, 0, y}], L > 0 && M > 0 && n > 0 && v > 0]
This gives:
$$\text{exp}(-\lambda_1 x^\nu_1 -\lambda_2 y^\nu_2) * (-1+\text{exp}(-\lambda_1 x^\nu_1))*(-1+\text{exp}(-\lambda_2 y^\nu_2))$$
So I now have the numerator. To get the denominator, I just use the CCDF of the Weibull distribution:
$$P(X<t) \cap P(Y<t) = \text{exp}(-\lambda_1 t^{\nu_1})) * \text{exp}(-\lambda_2 t^{\nu_2}))$$
So I think dividing the first equation by the second will give me what I want. I'd like run some simulations to confirm that I haven't made any errors, but I believe I've arrived at my solution. | Probability one Weibull is less than another, given upper-bound
OK, I have made some progress.
The solution for this general kind of problem is described in this post:
https://math.stackexchange.com/questions/396386/finding-an-expression-for-the-probability-that-o |
50,418 | how to use GLS with correlation structure to compare two temperature time series? | A way to answer your question (I want to be able to say if the shallow site is warmer than the deep one or not.) would be to work on the difference between the two time series SiteB - SiteA (A=deep, B= shallow).
Both time series are stationary. So the means of the time series are not time-dependent. Both time series can be well represented by a simple AR(1) model.
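(The differenced series used below isn't constructed explicitly in this thread; assuming dTMP is the long-format data frame with columns site and temp.avg used in the gls() calls, and that the site levels are "A" and "B", it could be built like this:)
# Assumed construction of the SiteB - SiteA difference series (column and level names are guesses).
tmp_A    <- dTMP$temp.avg[dTMP$site == "A"]  # deep site
tmp_B    <- dTMP$temp.avg[dTMP$site == "B"]  # shallow site
dTMP_BmA <- ts(tmp_B - tmp_A)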
From your data, I found this ARMA model for the time series SiteB - SiteA:
arima(dTMP_BmA, order = c(2,0,1))
Call:
arima(x = dTMP_BmA, order = c(2, 0, 1))
Coefficients:
ar1 ar2 ma1 intercept
1.2600 -0.5131 -0.7269 0.7458
s.e. 0.1694 0.0973 0.1769 0.0553
sigma^2 estimated as 0.2353: log likelihood = -65.62, aic = 139.24
The plot of the autocorrelations of the residuals shows no anomaly.
The "intercept" parameter (0.7458) is in fact the estimated mean of the time series siteB - SiteA, with a standard error of 0.0553.
From this result, we can conclude that there is a significant difference between the mean of SiteB and the mean of SiteA.
50,419 | how to use GLS with correlation structure to compare two temperature time series? | The order-1 autoregressive correlations of the residuals of model m3a for the two time series were pretty high (A: ca. 0.4 and B: ca. 0.6), and both are significant at the 0.05 level. As such, the results from summary() are invalid.
The autocorrelation coefficients for the two time series can be visualized by:
acf2(residuals(m3a,type="normalized")[seq(1,187,2)],max.lag=50) (note: acf2 from pkg astsa)
acf2(residuals(m3a,type="normalized")[seq(2,188,2)],max.lag=50)
Increasing the AR order p and MA order q won't help mitigate the significant first-order autocorrelation coefficients. To reduce the first autocorrelation coefficient, one approach is to incorporate frequency-domain methods. Here is what I do.
dTMP$time = rep(1:94, each = 2)
m3a<-gls(temp.avg~site+
sin(2*pi*time*1/94)+cos(2*pi*time*1/94)+
sin(2*pi*time*2/94)+
sin(2*pi*time*3/94)+cos(2*pi*time*3/94)+
sin(2*pi*time*4/94)+cos(2*pi*time*4/94)+
sin(2*pi*time*5/94)+cos(2*pi*time*5/94)+
sin(2*pi*time*6/94)+cos(2*pi*time*6/94)+
sin(2*pi*time*7/94)+cos(2*pi*time*7/94)+
sin(2*pi*time*8/94)+
cos(2*pi*time*9/94)+
cos(2*pi*time*10/94)+
sin(2*pi*time*11/94)+
# sin(2*pi*time*12/94)+cos(2*pi*time*12/94)+
sin(2*pi*time*13/94)+cos(2*pi*time*13/94)+
# sin(2*pi*time*14/94)+cos(2*pi*time*14/94)+
cos(2*pi*time*15/94)+
sin(2*pi*time*18/94)+cos(2*pi*time*18/94)+
sin(2*pi*time*23/94)+cos(2*pi*time*23/94)+
# sin(2*pi*time*29/94)+cos(2*pi*time*29/94)+
sin(2*pi*time*34/94),
# sin(2*pi*time*39/94)+cos(2*pi*time*39/94),
data=dTMP,
correlation=corARMA(form=~1|site,p=1,q=0))
Check autocorrelation of the new model
acf2(residuals(m3a,type="normalized")[seq(1,187,2)],max.lag=50)
acf2(residuals(m3a,type="normalized")[seq(2,188,2)],max.lag=50)
The first order autocorrelation coefficients for site A and B are reduced to about 0.1 and 0.2, respectively. These are much better.
If you check the results by summary(m3a), you will see the significant effect of site (the p = 0) | how to use GLS with correlation structure to compare two temperature time series? | The autoregressive correlations of order 1 of the residuals of model m3a of the two time series were pretty high (A: ca. 0.4 and B: ca. 0.6), and both are significant at 0.05 level. As such, the resul | how to use GLS with correlation structure to compare two temperature time series?
The autoregressive correlations of order 1 of the residuals of model m3a of the two time series were pretty high (A: ca. 0.4 and B: ca. 0.6), and both are significant at 0.05 level. As such, the results from summary() are invalid.
The autocorrelation coefficients for the two ts can be visualized by:
acf2(residuals(m3a,type="normalized")[seq(1,187,2)],max.lag=50) (note: acf2 from pkg astsa)
acf2(residuals(m3a,type="normalized")[seq(2,188,2)],max.lag=50)
Increasing the AR order p and MA order q won't help mitigate the significant first-order autocorrelation coefficients. To reduce the first-order autocorrelation coefficient, one approach is to incorporate frequency-domain methods. Here is what I do.
dTMP$time = rep(1:94, each = 2)
m3a<-gls(temp.avg~site+
sin(2*pi*time*1/94)+cos(2*pi*time*1/94)+
sin(2*pi*time*2/94)+
sin(2*pi*time*3/94)+cos(2*pi*time*3/94)+
sin(2*pi*time*4/94)+cos(2*pi*time*4/94)+
sin(2*pi*time*5/94)+cos(2*pi*time*5/94)+
sin(2*pi*time*6/94)+cos(2*pi*time*6/94)+
sin(2*pi*time*7/94)+cos(2*pi*time*7/94)+
sin(2*pi*time*8/94)+
cos(2*pi*time*9/94)+
cos(2*pi*time*10/94)+
sin(2*pi*time*11/94)+
# sin(2*pi*time*12/94)+cos(2*pi*time*12/94)+
sin(2*pi*time*13/94)+cos(2*pi*time*13/94)+
# sin(2*pi*time*14/94)+cos(2*pi*time*14/94)+
cos(2*pi*time*15/94)+
sin(2*pi*time*18/94)+cos(2*pi*time*18/94)+
sin(2*pi*time*23/94)+cos(2*pi*time*23/94)+
# sin(2*pi*time*29/94)+cos(2*pi*time*29/94)+
sin(2*pi*time*34/94),
# sin(2*pi*time*39/94)+cos(2*pi*time*39/94),
data=dTMP,
correlation=corARMA(form=~1|site,p=1,q=0))
Check autocorrelation of the new model
acf2(residuals(m3a,type="normalized")[seq(1,187,2)],max.lag=50)
acf2(residuals(m3a,type="normalized")[seq(2,188,2)],max.lag=50)
The first order autocorrelation coefficients for site A and B are reduced to about 0.1 and 0.2, respectively. These are much better.
If you check the results by summary(m3a), you will see the significant effect of site (the p = 0) | how to use GLS with correlation structure to compare two temperature time series?
The autoregressive correlations of order 1 of the residuals of model m3a of the two time series were pretty high (A: ca. 0.4 and B: ca. 0.6), and both are significant at 0.05 level. As such, the resul |
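A hedged alternative to typing every harmonic by hand: the sin/cos columns can be generated programmatically and passed to gls() through a constructed formula. The frequencies below are illustrative (both sine and cosine are kept for each one, unlike the hand-pruned model above), and dTMP, temp.avg and site are assumed to be the objects from the answer.
library(nlme)
dTMP$time <- rep(1:94, each = 2)
harmonics <- c(1:11, 13, 15, 18, 23, 34)          # assumed set of retained frequencies
fourier <- do.call(cbind, lapply(harmonics, function(k)
  cbind(sin(2 * pi * dTMP$time * k / 94), cos(2 * pi * dTMP$time * k / 94))))
colnames(fourier) <- paste0(rep(c("sin", "cos"), length(harmonics)), rep(harmonics, each = 2))
dTMP2 <- cbind(dTMP, fourier)
form <- as.formula(paste("temp.avg ~ site +", paste(colnames(fourier), collapse = " + ")))
m3b <- gls(form, data = dTMP2, correlation = corARMA(form = ~ 1 | site, p = 1, q = 0))
summary(m3b)      # inspect the site coefficient, then re-check the residual ACF as above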
50,420 | Why noisy data will benefit Bayesian? | Adding noise reduces the quality of Bayesian results as it does for Frequentist and Likelihoodist methods. It will also slow down the model. This can be seen with a simple, degenerate example.
Consider a case of data consisting of five points (1,1), (2,2), (3,3), (4,4) and (5,5). The slope is 1 and the intercept is zero. There is 100% certainty as to the parameters, if the model is valid. The posterior will be the Dirac delta function. Now adding noise necessarily produces an ordinary posterior with less certainty. Furthermore, anything which spreads the uncertainty increases computation time.
Where increases in variability do improve Bayesian methods is when it identifies signal rather than noise. Imagine a training set that only had green and brown eyed individuals. How would it handle its first blue-eyed person outside the training set? By having a blue-eyed person in the data set, this increase in natural variability improves the degree to which the model matches reality. This will speed up processing speed. It will narrow the variability. | Why noisy data will benefit Bayesian? | Adding noise reduces the quality of Bayesian results as it does for Frequentist and Likelihoodist methods. It will also slow down the model. This can be seen with a simple, degenerate example.
Consi | Why noisy data will benefit Bayesian?
Adding noise reduces the quality of Bayesian results as it does for Frequentist and Likelihoodist methods. It will also slow down the model. This can be seen with a simple, degenerate example.
Consider a case of data consisting of five points (1,1), (2,2), (3,3), (4,4) and (5,5). The slope is 1 and the intercept is zero. There is 100% certainty as to the parameters, if the model is valid. The posterior will be the Dirac delta function. Now adding noise necessarily produces an ordinary posterior with less certainty. Furthermore, anything which spreads the uncertainty increases computation time.
Where increases in variability do improve Bayesian methods is when it identifies signal rather than noise. Imagine a training set that only had green and brown eyed individuals. How would it handle its first blue-eyed person outside the training set? By having a blue-eyed person in the data set, this increase in natural variability improves the degree to which the model matches reality. This will speed up processing speed. It will narrow the variability. | Why noisy data will benefit Bayesian?
Adding noise reduces the quality of Bayesian results as it does for Frequentist and Likelihoodist methods. It will also slow down the model. This can be seen with a simple, degenerate example.
Consi |
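A tiny R illustration of the degenerate five-point example in this answer: the noiseless data pin the parameters down exactly, while added noise spreads the estimates (and, in a Bayesian fit, the posterior).
x <- 1:5
y <- x                              # the points (1,1), (2,2), ..., (5,5)
coef(lm(y ~ x))                     # intercept ~ 0, slope ~ 1: a perfect fit

set.seed(1)
y_noisy <- x + rnorm(5, sd = 0.5)   # same x, noisy y
coef(lm(y_noisy ~ x))               # estimates now deviate from (0, 1)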
50,421 | confidence intervals in linear regression | I am from a different domain, and use somewhat different language, but maybe this will help.
Imagine doing an experiment. $x$ is a set of given values, an "independent variable". Not random. For each of these values you measure a dependent variable, $y$. Presumably, $y$ depends on $x$ in a deterministic (non-random) way, but your measurements are also affected by some noise, $\epsilon$. Therefore, $y$ is a random variable.
When you calculate coefficients of linear regression, $\hat{\beta}_0$ and $\hat{\beta}_1$ (which are estimates of "true", non-random $\beta_0$ and $\beta_1$), their values depend on $y$, and therefore they are also random. If you repeat your experiment once again, for the same $x$ values, you will get (slightly, or not slightly) different $y$, and then will calculate different $\hat{\beta}_0$ and $\hat{\beta}_1$.
Finally, note that $\hat{\beta}_0$ and $\hat{\beta}_1$ are expressed in terms of $y$ linearly. Therefore, variances of $\hat{\beta}_0$ and $\hat{\beta}_1$ are proportional to $var(y) = var(\epsilon)$. | confidence intervals in linear regression | I am from a different domain, and use somewhat different language, but maybe this will help.
Imagine doing an experiment. $x$ is a set of given values, an "independent variable". Not random. For each | confidence intervals in linear regression
I am from a different domain, and use somewhat different language, but maybe this will help.
Imagine doing an experiment. $x$ is a set of given values, an "independent variable". Not random. For each of these values you measure a dependent variable, $y$. Presumably, $y$ depends on $x$ in a deterministic (non-random) way, but your measurements are also affected by some noise, $\epsilon$. Therefore, $y$ is a random variable.
When you calculate coefficients of linear regression, $\hat{\beta}_0$ and $\hat{\beta}_1$ (which are estimates of "true", non-random $\beta_0$ and $\beta_1$), their values depend on $y$, and therefore they are also random. If you repeat your experiment once again, for the same $x$ values, you will get (slightly, or not slightly) different $y$, and then will calculate different $\hat{\beta}_0$ and $\hat{\beta}_1$.
Finally, note that $\hat{\beta}_0$ and $\hat{\beta}_1$ are expressed in terms of $y$ linearly. Therefore, variances of $\hat{\beta}_0$ and $\hat{\beta}_1$ are proportional to $var(y) = var(\epsilon)$. | confidence intervals in linear regression
I am from a different domain, and use somewhat different language, but maybe this will help.
Imagine doing an experiment. $x$ is a set of given values, an "independent variable". Not random. For each |
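A short simulation of the point made in this answer, keeping the design x fixed and repeating the "experiment" many times; the spread of the estimated coefficients is driven by var(epsilon), matching the usual formula.
set.seed(42)
x <- seq(0, 1, length.out = 20)      # fixed, non-random design
beta0 <- 2; beta1 <- 3; sigma <- 0.5

one_experiment <- function() {
  y <- beta0 + beta1 * x + rnorm(length(x), sd = sigma)   # new noise, same x
  coef(lm(y ~ x))
}
est <- t(replicate(5000, one_experiment()))
apply(est, 2, var)                   # empirical variances of the two estimates
X <- cbind(1, x)
diag(sigma^2 * solve(t(X) %*% X))    # theoretical variances, proportional to var(epsilon)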
50,422 | confidence intervals in linear regression | Error is the only random variable. The X's are assumed to fixed but if you assume linearity you can generalize to other values of X. Of course extrapolation is very risky and rarely justified. | confidence intervals in linear regression | Error is the only random variable. The X's are assumed to fixed but if you assume linearity you can generalize to other values of X. Of course extrapolation is very risky and rarely justified. | confidence intervals in linear regression
Error is the only random variable. The X's are assumed to fixed but if you assume linearity you can generalize to other values of X. Of course extrapolation is very risky and rarely justified. | confidence intervals in linear regression
Error is the only random variable. The X's are assumed to be fixed, but if you assume linearity you can generalize to other values of X. Of course, extrapolation is very risky and rarely justified.
50,423 | Real world examples of the sleeping beauty paradox | My candidate for a real-world analogue: "How likely is it that there is intelligent life elsewhere in the universe?"
To simplify things, assume that God picked the fundamental physical constants at random. Assume that there was a 50% chance of picking values which would result in a universe hostile to life, where intelligence would only evolve on one planet (corresponding to Sleeping Beauty being woken up only once), and a 50% chance of picking values which would result in a universe friendly to life, where intelligence would evolve millions of times (corresponding to Sleeping Beauty being woken up multiple times.)
Then the question of whether the evolution of intelligence on our own planet should increase our probability that values friendly to life were picked corresponds to the question of whether Sleeping Beauty should consider her own waking up to be evidence that she is woken up multiple times.
To make my description of this scenario correspond better to reality, replace the random variable "which of these two hypothetical sets of physical constant values were picked" with the random variable "how probable is the evolution of intelligent life on a random planet given the laws of physics."
One objection to my argument would be: Sleeping Beauty knew going in that she would be woken up, but we didn't know that we would evolve until after we did. My reply: if we modify the Sleeping Beauty problem setup so that she doesn't initially know she's part of the experiment, and the experiment is only explained to her each time she is woken up, I don't think that fundamentally alters the paradox.
You could make the case that the question of how likely it is that we live in a computer simulation similarly corresponds to the Sleeping Beauty question. | Real world examples of the sleeping beauty paradox | My candidate for a real-world analogue: "How likely is it that there is intelligent life elsewhere in the universe?"
To simplify things, assume that God picked the fundamental physical constants at ra | Real world examples of the sleeping beauty paradox
My candidate for a real-world analogue: "How likely is it that there is intelligent life elsewhere in the universe?"
To simplify things, assume that God picked the fundamental physical constants at random. Assume that there was a 50% chance of picking values which would result in a universe hostile to life, where intelligence would only evolve on one planet (corresponding to Sleeping Beauty being woken up only once), and a 50% chance of picking values which would result in a universe friendly to life, where intelligence would evolve millions of times (corresponding to Sleeping Beauty being woken up multiple times.)
Then the question of whether the evolution of intelligence on our own planet should increase our probability that values friendly to life were picked corresponds to the question of whether Sleeping Beauty should consider her own waking up to be evidence that she is woken up multiple times.
To make my description of this scenario correspond better to reality, replace the random variable "which of these two hypothetical sets of physical constant values were picked" with the random variable "how probable is the evolution of intelligent life on a random planet given the laws of physics."
One objection to my argument would be: Sleeping Beauty knew going in that she would be woken up, but we didn't know that we would evolve until after we did. My reply: if we modify the Sleeping Beauty problem setup so that she doesn't initially know she's part of the experiment, and the experiment is only explained to her each time she is woken up, I don't think that fundamentally alters the paradox.
You could make the case that the question of how likely it is that we live in a computer simulation similarly corresponds to the Sleeping Beauty question. | Real world examples of the sleeping beauty paradox
My candidate for a real-world analogue: "How likely is it that there is intelligent life elsewhere in the universe?"
To simplify things, assume that God picked the fundamental physical constants at ra |
50,424 | Bernoulli NB vs MultiNomial NB, How to choose among different NB algorithms? | The variant of Naive Bayes you use depends on the data. If your data consists of counts, the multinomial distribution may be an appropriate distribution for the likelihood, and thus multinomial Naive Bayes is appropriate.
Likewise, if your data points come from distribution $X$, use the likelihood for $X$ for Naive Bayes. Thus, it becomes $X$ Naive Bayes. | Bernoulli NB vs MultiNomial NB, How to choose among different NB algorithms? | The variant of Naive Bayes you use depends on the data. If your data consists of counts, the multinomial distribution may be an appropriate distribution for the likelihood, and thus multinomial Naive | Bernoulli NB vs MultiNomial NB, How to choose among different NB algorithms?
The variant of Naive Bayes you use depends on the data. If your data consists of counts, the multinomial distribution may be an appropriate distribution for the likelihood, and thus multinomial Naive Bayes is appropriate.
Likewise, if your data points come from distribution $X$, use the likelihood for $X$ for Naive Bayes. Thus, it becomes $X$ Naive Bayes. | Bernoulli NB vs MultiNomial NB, How to choose among different NB algorithms?
The variant of Naive Bayes you use depends on the data. If your data consists of counts, the multinomial distribution may be an appropriate distribution for the likelihood, and thus multinomial Naive |
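To make the "match the likelihood to the data" rule concrete, here is a small hand-rolled sketch (not the API of any particular package): with word counts the class-conditional likelihood is multinomial, with presence/absence features it is a product of Bernoullis, and the per-class parameters below are made up purely for illustration.
counts  <- c(2, 0, 3)                      # count features for three "words"
present <- as.integer(counts > 0)          # the same document as binary features

theta_multi <- list(A = c(0.5, 0.3, 0.2), B = c(0.2, 0.2, 0.6))   # multinomial word probabilities
theta_bern  <- list(A = c(0.8, 0.5, 0.4), B = c(0.4, 0.3, 0.9))   # Bernoulli presence probabilities
prior <- c(A = 0.5, B = 0.5)

ll_multi <- sapply(theta_multi, function(p) dmultinom(counts, prob = p, log = TRUE))
ll_bern  <- sapply(theta_bern,  function(p) sum(dbinom(present, 1, p, log = TRUE)))

log(prior) + ll_multi   # unnormalized log-posteriors under multinomial NB
log(prior) + ll_bern    # unnormalized log-posteriors under Bernoulli NB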
50,425 | Bernoulli NB vs MultiNomial NB, How to choose among different NB algorithms? | As Ryan Rosario states, which model you choose depends on the kind of data you have. You may wish to read this paper by McCallum and Nigam from 1998 on the difference between the multinomial and bernoulli naive Bayes models:
"A Comparison of Event Models for Naive Bayes Text Classification"
http://www.cs.cmu.edu/~knigam/papers/multinomial-aaaiws98.pdf
More generally, reading through an introduction to probability text book will help your understanding immensely:
https://www.amazon.com/First-Course-Probability-9th/dp/032179477X | Bernoulli NB vs MultiNomial NB, How to choose among different NB algorithms? | As Ryan Rosario states, which model you choose depends on the kind of data you have. You may wish to read this paper by McCallum and Nigam from 1998 on the difference between the multinomial and berno | Bernoulli NB vs MultiNomial NB, How to choose among different NB algorithms?
As Ryan Rosario states, which model you choose depends on the kind of data you have. You may wish to read this paper by McCallum and Nigam from 1998 on the difference between the multinomial and bernoulli naive Bayes models:
"A Comparison of Event Models for Naive Bayes Text Classification"
http://www.cs.cmu.edu/~knigam/papers/multinomial-aaaiws98.pdf
More generally, reading through an introduction to probability text book will help your understanding immensely:
https://www.amazon.com/First-Course-Probability-9th/dp/032179477X | Bernoulli NB vs MultiNomial NB, How to choose among different NB algorithms?
As Ryan Rosario states, which model you choose depends on the kind of data you have. You may wish to read this paper by McCallum and Nigam from 1998 on the difference between the multinomial and berno |
50,426 | Model/variable selection for time series | If the focus is on time-series and forecasting, then I would only consider rolling CV. When working with time-series it is critical to exclude any innovative (unknown) process from the fit.
ICs estimate variance by penalizing the model fit (through degrees of freedom or other variables). These formulas were designed when computing power and data were limited and an analytical solution was more efficient. | Model/variable selection for time series | If the focus is on time-series and forecasting, then I would only consider rolling CV. When working with time-series it is critical to exclude any innovative (unknown) process from the fit.
ICs estima | Model/variable selection for time series
If the focus is on time-series and forecasting, then I would only consider rolling CV. When working with time-series it is critical to exclude any innovative (unknown) process from the fit.
ICs estimate variance by penalizing the model fit (through degrees of freedom or other variables). These formulas were designed when computing power and data were limited and an analytical solution was more efficient. | Model/variable selection for time series
If the focus is on time-series and forecasting, then I would only consider rolling CV. When working with time-series it is critical to exclude any innovative (unknown) process from the fit.
ICs estima |
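A minimal sketch of rolling-origin cross-validation in R, assuming the goal is to compare candidate ARIMA orders by one-step-ahead forecast error; each fit uses only the data up to the forecast origin, so no future (unknown) innovations enter the fit.
set.seed(1)
y <- arima.sim(list(ar = 0.6), n = 120)          # stand-in series
orders <- list(c(1, 0, 0), c(2, 0, 1))           # candidate models
origin <- 80                                     # first forecast origin

rolling_rmse <- sapply(orders, function(ord) {
  err <- sapply(origin:(length(y) - 1), function(t) {
    fit <- arima(y[1:t], order = ord)            # fit on data up to time t only
    as.numeric(predict(fit, n.ahead = 1)$pred) - y[t + 1]
  })
  sqrt(mean(err^2))
})
rolling_rmse                                     # prefer the order with the smaller rolling RMSE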
50,427 | Marginal distribution of the difference of two elements of a Dirichlet distributed vector | Analysis
The thread at Construction of Dirichlet distribution with Gamma distribution shows that the Dirichlet distribution with parameters $(\alpha_1, \alpha_2, \ldots, \alpha_{n+1})$ arises as the distribution of the ratios
$$X_i=\frac{Y_i}{Y_1+Y_2+\cdots + Y_{n+1}},$$
$i=1, 2, \ldots, n$ where the $Y_j$ are independently distributed with Gamma$(\alpha_j)$ distributions. This permits two simplifications, because (a) we may choose the order of the $\alpha_i,$ reducing the question to the difference $X_2 - X_1$ and (b) since the sum $Y_3 + \cdots +Y_{n+1}$ has a Gamma$(\alpha_3+\cdots+\alpha_{n+1})$ distribution, we only have to consider the case $n=2.$
Solution
As usual for computing marginal distributions, change the variables in the Dirichlet distribution from $(X_1,X_2)$ to $(X,X+Y)$ where $Y$ represents the difference $X_2-X_1.$ This transformation has unit Jacobian, so the integrand remains otherwise unchanged. The (unnormalized) integrand is proportional to
$$X^{\alpha_1-1} (X + Y)^{\alpha_2-1} (1 - X - (X+Y))^{\alpha_3-1}$$
and we have to integrate out $X$ to find the marginal distribution of $Y$.
Since none of the three terms can be negative, the integral breaks into two parts: one from $x=0$ to $x=(1-y)/2$ when $y \ge 0$ and the other from $x=-y$ to $x=(1-y)/2$ when $y \lt 0.$ These are well-known integrals--Mathematica or Maple or even tables of integrals should provide answers for you. The results are
$$f_Y(y) = \frac{\Gamma (\alpha_1+\alpha_2+\alpha_3)2^{-\alpha_1} }{\Gamma (\alpha_2)} y^{\alpha_2-1} (1-y)^{\alpha_1+\alpha_3-1} \,
_2\tilde{F}_1\left(\alpha_1,1-\alpha_2;\alpha_1+\alpha_3;\frac{y-1}{2 y}\right)$$
for $0 \le y \lt 1$ and
$$ \eqalign{f_Y(y) &=
\csc \left(\pi \left(\alpha _1+\alpha _2\right)\right) (1-y)^{\alpha _3-2} \\ &\left(\frac{(y-1) \sin \left(\pi \alpha
_1\right) \Gamma \left(\alpha\right) (-y)^{\alpha _1+\alpha _2}
}{y \Gamma \left(\alpha _3\right)} \,_2\tilde{F}_1\left(\alpha
_1,1-\alpha _3;\alpha _1+\alpha _2;\frac{2 y}{y-1}\right)\\
-\frac{\pi 2^{-\alpha_1-\alpha_2+1} \left(\alpha-1\right) (1-y)^{\alpha_1+\alpha_2}}{\Gamma
\left(\alpha _1\right) \Gamma \left(\alpha _2\right)}
\,_2\tilde{F}_1\left(1-\alpha _2,2-\alpha;-\alpha _1-\alpha _2+2;\frac{2 y}{y-1}\right)\right)
}
$$
for $-1 \lt y \lt 0,$ where $\alpha=\alpha_1+\alpha_2+\alpha_3$ and $\, _2\tilde{F}_1$ is the regularized Hypergeometric function.
For integral values of $\alpha_1+\alpha_2$ and negative values of $y$ you also have to take a limit (because the cosecant blows up). For small integral values of the parameters the function is algebraic (because the Hypergeometric functions that are involved reduce to polynomials). You can compute these by elementary means if you wish.
Verification
Here are histograms of independent simulations of $X_2-X_1$ using 50,000 iterations each for three combinations of $(\alpha_1,\alpha_2, \alpha_3+\cdots+\alpha_{n+1}).$ On each is superimposed the graph of $f_Y.$ All show close agreement. | Marginal distribution of the difference of two elements of a Dirichlet distributed vector | Analysis
The thread at Construction of Dirichlet distribution with Gamma distribution shows that the Dirichlet distribution with parameters $(\alpha_1, \alpha_2, \ldots, \alpha_{n+1})$ arises as the d | Marginal distribution of the difference of two elements of a Dirichlet distributed vector
Analysis
The thread at Construction of Dirichlet distribution with Gamma distribution shows that the Dirichlet distribution with parameters $(\alpha_1, \alpha_2, \ldots, \alpha_{n+1})$ arises as the distribution of the ratios
$$X_i=\frac{Y_i}{Y_1+Y_2+\cdots + Y_{n+1}},$$
$i=1, 2, \ldots, n$ where the $Y_j$ are independently distributed with Gamma$(\alpha_j)$ distributions. This permits two simplifications, because (a) we may choose the order of the $\alpha_i,$ reducing the question to the difference $X_2 - X_1$ and (b) since the sum $Y_3 + \cdots +Y_{n+1}$ has a Gamma$(\alpha_3+\cdots+\alpha_{n+1})$ distribution, we only have to consider the case $n=2.$
Solution
As usual for computing marginal distributions, change the variables in the Dirichlet distribution from $(X_1,X_2)$ to $(X,X+Y)$ where $Y$ represents the difference $X_2-X_1.$ This transformation has unit Jacobian, so the integrand remains otherwise unchanged. The (unnormalized) integrand is proportional to
$$X^{\alpha_1-1} (X + Y)^{\alpha_2-1} (1 - X - (X+Y))^{\alpha_3-1}$$
and we have to integrate out $X$ to find the marginal distribution of $Y$.
Since none of the three terms can be negative, the integral breaks into two parts: one from $x=0$ to $x=(1-y)/2$ when $y \ge 0$ and the other from $x=-y$ to $x=(1-y)/2$ when $y \lt 0.$ These are well-known integrals--Mathematica or Maple or even tables of integrals should provide answers for you. The results are
$$f_Y(y) = \frac{\Gamma (\alpha_1+\alpha_2+\alpha_3)2^{-\alpha_1} }{\Gamma (\alpha_2)} y^{\alpha_2-1} (1-y)^{\alpha_1+\alpha_3-1} \,
_2\tilde{F}_1\left(\alpha_1,1-\alpha_2;\alpha_1+\alpha_3;\frac{y-1}{2 y}\right)$$
for $0 \le y \lt 1$ and
$$ \eqalign{f_Y(y) &=
\csc \left(\pi \left(\alpha _1+\alpha _2\right)\right) (1-y)^{\alpha _3-2} \\ &\left(\frac{(y-1) \sin \left(\pi \alpha
_1\right) \Gamma \left(\alpha\right) (-y)^{\alpha _1+\alpha _2}
}{y \Gamma \left(\alpha _3\right)} \,_2\tilde{F}_1\left(\alpha
_1,1-\alpha _3;\alpha _1+\alpha _2;\frac{2 y}{y-1}\right)\\
-\frac{\pi 2^{-\alpha_1-\alpha_2+1} \left(\alpha-1\right) (1-y)^{\alpha_1+\alpha_2}}{\Gamma
\left(\alpha _1\right) \Gamma \left(\alpha _2\right)}
\,_2\tilde{F}_1\left(1-\alpha _2,2-\alpha;-\alpha _1-\alpha _2+2;\frac{2 y}{y-1}\right)\right)
}
$$
for $-1 \lt y \lt 0,$ where $\alpha=\alpha_1+\alpha_2+\alpha_3$ and $\, _2\tilde{F}_1$ is the regularized Hypergeometric function.
For integral values of $\alpha_1+\alpha_2$ and negative values of $y$ you also have to take a limit (because the cosecant blows up). For small integral values of the parameters the function is algebraic (because the Hypergeometric functions that are involved reduce to polynomials). You can compute these by elementary means if you wish.
Verification
Here are histograms of independent simulations of $X_2-X_1$ using 50,000 iterations each for three combinations of $(\alpha_1,\alpha_2, \alpha_3+\cdots+\alpha_{n+1}).$ On each is superimposed the graph of $f_Y.$ All show close agreement. | Marginal distribution of the difference of two elements of a Dirichlet distributed vector
Analysis
The thread at Construction of Dirichlet distribution with Gamma distribution shows that the Dirichlet distribution with parameters $(\alpha_1, \alpha_2, \ldots, \alpha_{n+1})$ arises as the d |
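The verification step above is easy to reproduce; a short R sketch, using only the Gamma construction quoted in the answer and arbitrary illustrative parameter values:
set.seed(1)
alpha <- c(2, 3, 4)      # alpha_1, alpha_2 and the pooled alpha_3 + ... + alpha_{n+1}
nsim <- 5e4
Y <- matrix(rgamma(3 * nsim, shape = rep(alpha, each = nsim)), ncol = 3)
X <- Y / rowSums(Y)      # (X_1, X_2, X_3) ~ Dirichlet(alpha)
d <- X[, 2] - X[, 1]     # the difference X_2 - X_1
hist(d, breaks = 100, freq = FALSE, main = "Simulated X2 - X1")   # compare with f_Y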
50,428 | Yearly Aggregated Loss Distribution (operational risk) | Since you are new to this, I think it's best to walk through an example. Let's consider the case of a single risk $Z$ (i.e. a certain type of operational risk).
The Loss Distribution Approach can be described as:
$$Z=\sum_{i=1}^{N}X_{i}$$
where $N$ is the number of events (frequency) over one year and $X_{i}$ is the severity of loss $i$. $N$ is modelled as a discrete random variable with probability mass function:
$$\quad\quad\quad p_{k}=\text{Pr}[N=k],\,\,\,k=0,1,2,\ldots$$
$X_{i}$ are iid and modelled with a continuous distribution function $F_{X}(x)$. Now, it is important to note the assumption we make that $N$ and $X_{i}$ are independent for all $i$.
Now, based on your data, you can find suitable distributions to describe the frequency and severity of your losses. The exact method you use to find a suitable distribution will depend on the context, but finding the MLE is usually a good option.
I'll describe an example now. Let's assume we're considering a single (operational) risk $Z$. Let's assume (based on suitable fitting methods) that the distribution of severity of losses are independent and identical and follow:
$$X_{i}\sim \text{LN}(\mu=1,\sigma=2)$$
Similarly, we can say the frequency of losses follows:
$$N\sim \text{Poisson}(\lambda=1)$$
Now, I can only assume (having not watched the linked video) that your goal is to evaluate $E[Z]$, $\text{SD}(Z)$, $\text{VaR}_{q}[Z]$ and $\text{ES}_{q}[Z]$ etc. via Monte Carlo methods. Luckily for us there are closed-form, analytical solutions for the expectation and standard deviation (allowing us to check our simulation results).
To perform the simulations I used MATLAB with $K=10^{6}$ simulations.
%Set vector of number of simulations for loss Z:
K=10^6;
%Set parameters to be used for Lognormal and Poisson random variables:
lambda=1;
mu=1;
sigma=2;
%Initialize annual loss amount vector:
Z_vec=zeros(K,1);
%Iterate for size of annual loss sample:
for k=1:1:K
%Simulate Poisson value:
p_rnd=poissrnd(lambda);
%Initialize loss severity vector, if Poisson>0:
if p_rnd>0
X_vec=zeros(p_rnd,1);
for m=1:1:p_rnd
%Simulate Lognormal value:
X_vec(m,1)=lognrnd(mu,sigma);
end
%Otherwise, set severity vector to zero:
else
X_vec=0;
end
Z_vec(k,1)=sum(X_vec);
end
So the vector Z_vec contains the $10^{6}$ simulations for $Z$. From here it's all very straightforward, calculating the mean, standard deviation and whatever else you are interested in.
From my simulations, I obtained:
$$\begin{align}
E[Z]&=20.1318\\
\text{SD}(Z)&=143.7883
\end{align}$$
The analytical solutions are (simple compound distribution formulae):
$$\begin{align}
E[Z]&=E[N]E[X]=20.0855\\
\text{SD}(Z)&=\big(E[N]\text{Var}(X)+\text{Var}(N)E[X]^{2}\big)^{1/2}=148.4132
\end{align}$$
Similar calculations can be made for any sort of risk measure you would like. Keep in mind this method can be generalized to include more risks (i.e. $Z_{i}$, $i=1,2,\ldots$) and include dependencies between the $Z_{i}$.
In terms of your confusion about the time horizon of losses, the time horizon you set is purely up to you. If you want to consider yearly losses, then partition your 7-year period into $j$ yearly periods. For any given yearly period $j$, the observed frequency of losses, $n_{j}$, will be the count of losses. The $n_{j}$ go towards estimating the frequency distribution $N$. Similarly, the severity of those losses in all the yearly periods go towards estimating the severity distribution $X_{i}$.
Hopefully the following diagram illustrates the point well, where in this example there is a 3-year period split into $j=3$ 1-year periods. Each $n_{k}$, $k=\{1,\ldots,j\}$ contributes to estimating $N$ and there are 90 $X_{i}$ observed over the 3-year period which go toward estimating $X$. | Yearly Aggregated Loss Distribution (operational risk) | Since you are new to this, I think it's best to walk through an example. Let's consider the case of a single risk $Z$ (i.e. a certain type of operational risk).
The Loss Distribution Approach can be d | Yearly Aggregated Loss Distribution (operational risk)
Since you are new to this, I think it's best to walk through an example. Let's consider the case of a single risk $Z$ (i.e. a certain type of operational risk).
The Loss Distribution Approach can be described as:
$$Z=\sum_{i=1}^{N}X_{i}$$
where $N$ is the number of events (frequency) over one year and $X_{i}$ is the severity of loss $i$. $N$ is modelled as a discrete random variable with probability mass function:
$$\quad\quad\quad p_{k}=\text{Pr}[N=k],\,\,\,k=0,1,2,\ldots$$
$X_{i}$ are iid and modelled with a continuous distribution function $F_{X}(x)$. Now, it is important to note the assumption we make that $N$ and $X_{i}$ are independent for all $i$.
Now, based on your data, you can find suitable distributions to describe the frequency and severity of your losses. The exact method you use to find a suitable distribution will depend on the context, but finding the MLE is usually a good option.
I'll describe an example now. Let's assume we're considering a single (operational) risk $Z$. Let's assume (based on suitable fitting methods) that the distribution of severity of losses are independent and identical and follow:
$$X_{i}\sim \text{LN}(\mu=1,\sigma=2)$$
Similarly, we can say the frequency of losses follows:
$$N\sim \text{Poisson}(\lambda=1)$$
Now, I can only assume (having not watched the linked video) that your goal is to evaluate $E[Z]$, $\text{SD}(Z)$, $\text{VaR}_{q}[Z]$ and $\text{ES}_{q}[Z]$ etc. via Monte Carlo methods. Luckily for us there are closed-form, analytical solutions for the expectation and standard deviation (allowing us to check our simulation results).
To perform the simulations I used MATLAB with $K=10^{6}$ simulations.
%Set vector of number of simulations for loss Z:
K=10^6;
%Set parameters to be used for Lognormal and Poisson random variables:
lambda=1;
mu=1;
sigma=2;
%Initialize annual loss amount vector:
Z_vec=zeros(K,1);
%Iterate for size of annual loss sample:
for k=1:1:K
%Simulate Poisson value:
p_rnd=poissrnd(lambda);
%Initialize loss severity vector, if Poisson>0:
if p_rnd>0
X_vec=zeros(p_rnd,1);
for m=1:1:p_rnd
%Simulate Lognormal value:
X_vec(m,1)=lognrnd(mu,sigma);
end
%Otherwise, set severity vector to zero:
else
X_vec=0;
end
Z_vec(k,1)=sum(X_vec);
end
So the vector Z_vec contains the $10^{6}$ simulations for $Z$. From here it's all very straightforward, calculating the mean, standard deviation and whatever else you are interested in.
From my simulations, I obtained:
$$\begin{align}
E[Z]&=20.1318\\
\text{SD}(Z)&=143.7883
\end{align}$$
The analytical solutions are (simple compound distribution formulae):
$$\begin{align}
E[Z]&=E[N]E[X]=20.0855\\
\text{SD}(Z)&=\big(E[N]\text{Var}(X)+\text{Var}(N)E[X]^{2}\big)^{1/2}=148.4132
\end{align}$$
Similar calculations can be made for any sort of risk measure you would like. Keep in mind this method can be generalized to include more risks (i.e. $Z_{i}$, $i=1,2,\ldots$) and include dependencies between the $Z_{i}$.
In terms of your confusion about the time horizon of losses, the time horizon you set is purely up to you. If you want to consider yearly losses, then partition your 7-year period into $j$ yearly periods. For any given yearly period $j$, the observed frequency of losses, $n_{j}$, will be the count of losses. The $n_{j}$ go towards estimating the frequency distribution $N$. Similarly, the severity of those losses in all the yearly periods go towards estimating the severity distribution $X_{i}$.
Hopefully the following diagram illustrates the point well, where in this example there is a 3-year period split into $j=3$ 1-year periods. Each $n_{k}$, $k=\{1,\ldots,j\}$ contributes to estimating $N$ and there are 90 $X_{i}$ observed over the 3-year period which go toward estimating $X$. | Yearly Aggregated Loss Distribution (operational risk)
Since you are new to this, I think it's best to walk through an example. Let's consider the case of a single risk $Z$ (i.e. a certain type of operational risk).
The Loss Distribution Approach can be d |
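For readers without MATLAB, a hedged R translation of the same Monte Carlo (Poisson(1) frequency, LN(mu = 1, sigma = 2) severity), followed by the closed-form compound-distribution moments used as a check in the answer:
set.seed(1)
K <- 1e5; lambda <- 1; mu <- 1; sigma <- 2

Z <- replicate(K, {
  n <- rpois(1, lambda)                                   # number of losses in the year
  if (n == 0) 0 else sum(rlnorm(n, meanlog = mu, sdlog = sigma))
})
c(mean = mean(Z), sd = sd(Z))

EX   <- exp(mu + sigma^2 / 2)                             # E[X] for the lognormal severity
VarX <- (exp(sigma^2) - 1) * exp(2 * mu + sigma^2)        # Var(X)
c(mean = lambda * EX, sd = sqrt(lambda * (VarX + EX^2)))  # compound Poisson moments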
50,429 | Yearly Aggregated Loss Distribution (operational risk) | Your approach is as silly as everyone else's. Look at what the FRB is doing to forecast losses for CCAR Banks, their approach is described in: Dodd-Frank Act Stress Test 2016: Supervisory Stress Test Methodology and Results, June 2016. Read “Operational Risk Model Enhancement” section in Box 1 and “Losses Related to Operational-Risk Events” Section in Appendix B.
They use a forecast combination approach in which they average the outputs of a historical simulation and a panel regression on macroeconomic variables.
The historical simulation is essentially a bootstrapping variation of LDA. It assumes that the losses are compound random variables, such as a compound Poisson. They don't call it LDA anymore, because the term is out of favor in US banking supervision. The main difference from LDA is that instead of modeling the severity they bootstrap it from actual losses in the event database. In other words, each loss in the compound Poisson is a random draw from the actual event losses observed historically.
The regression part is a simple panel regression on a bunch of variables such as firm characteristics and macroeconomy. You can get a flavor of the model from the FRB paper: U.S. Banking Sector Operational Losses and the Macroeconomic Environment | Yearly Aggregated Loss Distribution (operational risk) | Your approach is as silly as everyone else's. Look at what the FRB is doing to forecast losses for CCAR Banks, their approach is described in: Dodd-Frank Act Stress Test 2016: Supervisory Stress Test | Yearly Aggregated Loss Distribution (operational risk)
Your approach is as silly as everyone else's. Look at what the FRB is doing to forecast losses for CCAR Banks, their approach is described in: Dodd-Frank Act Stress Test 2016: Supervisory Stress Test Methodology and Results, June 2016. Read “Operational Risk Model Enhancement” section in Box 1 and “Losses Related to Operational-Risk Events” Section in Appendix B.
They use a forecast combination approach in which they average the outputs of a historical simulation and a panel regression on macroeconomic variables.
The historical simulation is essentially a bootstrapping variation of LDA. It assumes that the losses are compound random variables, such as a compound Poisson. They don't call it LDA anymore, because the term is out of favor in US banking supervision. The main difference from LDA is that instead of modeling the severity they bootstrap it from actual losses in the event database. In other words, each loss in the compound Poisson is a random draw from the actual event losses observed historically.
The regression part is a simple panel regression on a bunch of variables such as firm characteristics and macroeconomy. You can get a flavor of the model from the FRB paper: U.S. Banking Sector Operational Losses and the Macroeconomic Environment | Yearly Aggregated Loss Distribution (operational risk)
Your approach is as silly as everyone else's. Look at what the FRB is doing to forecast losses for CCAR Banks, their approach is described in: Dodd-Frank Act Stress Test 2016: Supervisory Stress Test |
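A minimal sketch of the historical-simulation idea described in this answer, with made-up inputs: observed_losses stands in for the event database and lambda_hat for the average annual event count, and severities are bootstrapped from the observed losses rather than fitted parametrically.
set.seed(1)
observed_losses <- rlnorm(250, meanlog = 10, sdlog = 1.5)   # stand-in for the historical event database
lambda_hat <- 40                                            # assumed average number of events per year
K <- 1e4

annual_loss <- replicate(K, {
  n <- rpois(1, lambda_hat)
  if (n == 0) 0 else sum(sample(observed_losses, n, replace = TRUE))
})
quantile(annual_loss, c(0.50, 0.95, 0.999))                 # e.g. median, 95% and 99.9% annual loss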
50,430 | Yearly Aggregated Loss Distribution (operational risk) | We have seen this question ( or one like this ) before . It involves using daily data to compute aggregated forecasts yielding the probability of making a goal. Look at Predict number of users for a discussion of how Proctor & Gamble phrased the question. You might also look at http://www.autobox.com/cms/index.php/blog for a discussion of how to actually use daily data to form a useful model. I have been one of the developers of AUTOBOX which might be useful to you in showing you an approach. This of course requires the user having daily data for every day . If you have any possible supporting variables they can also be considered for inclusion. In specific it might be interesting to predict/model the amount of losses as it relates to the # of losses. | Yearly Aggregated Loss Distribution (operational risk) | We have seen this question ( or one like this ) before . It involves using daily data to compute aggregated forecasts yielding the probability of making a goal. Look at Predict number of users for a d | Yearly Aggregated Loss Distribution (operational risk)
We have seen this question ( or one like this ) before . It involves using daily data to compute aggregated forecasts yielding the probability of making a goal. Look at Predict number of users for a discussion of how Proctor & Gamble phrased the question. You might also look at http://www.autobox.com/cms/index.php/blog for a discussion of how to actually use daily data to form a useful model. I have been one of the developers of AUTOBOX which might be useful to you in showing you an approach. This of course requires the user having daily data for every day . If you have any possible supporting variables they can also be considered for inclusion. In specific it might be interesting to predict/model the amount of losses as it relates to the # of losses. | Yearly Aggregated Loss Distribution (operational risk)
We have seen this question ( or one like this ) before . It involves using daily data to compute aggregated forecasts yielding the probability of making a goal. Look at Predict number of users for a d |
50,431 | Surface Fit Using Tensor Product of B-Splines | Not vectorizing your response matrix $Y$ is the way to go;
B = ginv(t(C) %*% C) %*% t(C) %*% Y #OLS
pred = C%*%B #Predictions
surface3d(x,x, pred, col = "green") #Plot | Surface Fit Using Tensor Product of B-Splines | Not vectorizing your response matrix $Y$ is the way to go;
B = ginv(t(C) %*% C) %*% t(C) %*% Y #OLS
pred = C%*%B #Predictions
surface3d(x,x, pred, col = "green") #Plot | Surface Fit Using Tensor Product of B-Splines
Not vectorizing your response matrix $Y$ is the way to go;
B = ginv(t(C) %*% C) %*% t(C) %*% Y #OLS
pred = C%*%B #Predictions
surface3d(x,x, pred, col = "green") #Plot | Surface Fit Using Tensor Product of B-Splines
Not vectorizing your response matrix $Y$ is the way to go;
B = ginv(t(C) %*% C) %*% t(C) %*% Y #OLS
pred = C%*%B #Predictions
surface3d(x,x, pred, col = "green") #Plot |
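The snippet above needs MASS for ginv() and rgl for surface3d(). A hedged, self-contained version follows, in which Y is a simulated surface on an x-by-x grid and C is simply a marginal B-spline basis built with splines::bs(); in the original question C is the user's own (tensor-product) design matrix, so this only illustrates the same three lines.
library(MASS)      # ginv()
library(splines)   # bs()
library(rgl)       # surface3d()

x <- seq(0, 1, length.out = 40)
f <- outer(x, x, function(u, v) sin(2 * pi * u) * cos(2 * pi * v))
Y <- f + matrix(rnorm(length(f), sd = 0.1), nrow = length(x))   # noisy responses on the grid

C <- bs(x, df = 10)                        # illustrative B-spline basis evaluated at x

B <- ginv(t(C) %*% C) %*% t(C) %*% Y       # OLS with the response kept as a matrix
pred <- C %*% B                            # fitted surface
surface3d(x, x, pred, col = "green")       # plot (opens an rgl device)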
50,432 | How to randomly generate a positive semidefinite matrix subject to Loewner constraint? | Let $\mathbb S_n$ be the set of $n \times n$ symmetric matrices. Given positive semidefinite matrices $\mathrm A, \mathrm B \in \mathbb S_n$, the following (convex) set
$$\{ \mathrm X \in \mathbb S_n \mid \mathrm A \preceq \mathrm X \preceq \mathrm B \}$$
is a spectrahedron. To sample from spectrahedra, take a look at Narayanan's paper [0] and the references therein.
[0] Hariharan Narayanan, Randomized Interior Point methods for Sampling and Optimization, arXiv:0911.3950. | How to randomly generate a positive semidefinite matrix subject to Loewner constraint? | Let $\mathbb S_n$ be the set of $n \times n$ symmetric matrices. Given positive semidefinite matrices $\mathrm A, \mathrm B \in \mathbb S_n$, the following (convex) set
$$\{ \mathrm X \in \mathbb S_n | How to randomly generate a positive semidefinite matrix subject to Loewner constraint?
Let $\mathbb S_n$ be the set of $n \times n$ symmetric matrices. Given positive semidefinite matrices $\mathrm A, \mathrm B \in \mathbb S_n$, the following (convex) set
$$\{ \mathrm X \in \mathbb S_n \mid \mathrm A \preceq \mathrm X \preceq \mathrm B \}$$
is a spectrahedron. To sample from spectrahedra, take a look at Narayanan's paper [0] and the references therein.
[0] Hariharan Narayanan, Randomized Interior Point methods for Sampling and Optimization, arXiv:0911.3950. | How to randomly generate a positive semidefinite matrix subject to Loewner constraint?
Let $\mathbb S_n$ be the set of $n \times n$ symmetric matrices. Given positive semidefinite matrices $\mathrm A, \mathrm B \in \mathbb S_n$, the following (convex) set
$$\{ \mathrm X \in \mathbb S_n |
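If a uniform draw from the spectrahedron is not required, one cheap construction (my own sketch, not taken from the referenced paper) produces some $\mathrm X$ with $\mathrm A \preceq \mathrm X \preceq \mathrm B$: write $\mathrm M = \mathrm B - \mathrm A \succeq 0$ and set $\mathrm X = \mathrm A + \mathrm M^{1/2} \mathrm S \mathrm M^{1/2}$ for a random $\mathrm S$ with $0 \preceq \mathrm S \preceq \mathrm I$, so that $\mathrm X - \mathrm A$ and $\mathrm B - \mathrm X$ are positive semidefinite by construction.
set.seed(1)
n <- 4
A  <- crossprod(matrix(rnorm(n * n), n))            # arbitrary PSD A
Bm <- A + crossprod(matrix(rnorm(n * n), n))        # B = A + (PSD), so A <= B in the Loewner order

psd_sqrt <- function(M) {
  e <- eigen(M, symmetric = TRUE)
  e$vectors %*% diag(sqrt(pmax(e$values, 0))) %*% t(e$vectors)
}

M_half <- psd_sqrt(Bm - A)
Q <- qr.Q(qr(matrix(rnorm(n * n), n)))              # random orthogonal matrix
S <- Q %*% diag(runif(n)) %*% t(Q)                  # 0 <= S <= I (eigenvalues in (0,1))
X <- A + M_half %*% S %*% M_half

min(eigen(X - A,  symmetric = TRUE)$values)         # >= 0 up to rounding
min(eigen(Bm - X, symmetric = TRUE)$values)         # >= 0 up to rounding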
50,433 | P-value distribution under alternative hypothesis is stochastically smaller than uniform | I assume that this is a self-study question, so I will not give a full explanation, but rather some hints
Assuming you know
What "stochastically smaller" means (see Wikipedia)
And you are able to interpret the difference of two cumulative distributions such as in the figure above (note that the red line is the uniform; maybe it helps to look at the histogram as well)
Then the answer should be obvious.
Side note: excellent that you do these kind of simulations in class, I think this is very instructive. | P-value distribution under alternative hypothesis is stochastically smaller than uniform | I assume that this is a self-study question, so I will not give a full explanation, but rather some hints
Assuming you know
What "stochastically smaller" means (see Wikipedia)
And you are able to in | P-value distribution under alternative hypothesis is stochastically smaller than uniform
I assume that this is a self-study question, so I will not give a full explanation, but rather some hints
Assuming you know
What "stochastically smaller" means (see Wikipedia)
And you are able to interpret the difference of two cumulative distributions such as in the figure above (note that the red line is the uniform; maybe it helps to look at the histogram as well)
Then the answer should be obvious.
Side note: excellent that you do these kind of simulations in class, I think this is very instructive. | P-value distribution under alternative hypothesis is stochastically smaller than uniform
I assume that this is a self-study question, so I will not give a full explanation, but rather some hints
Assuming you know
What "stochastically smaller" means (see Wikipedia)
And you are able to in |
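A small simulation in the spirit of the exercise, assuming a one-sample t-test with a true nonzero mean: the empirical CDF of the p-values lies above the uniform CDF, which is what "stochastically smaller than uniform" looks like.
set.seed(1)
pvals <- replicate(10000, t.test(rnorm(20, mean = 0.4))$p.value)  # data generated under H1
plot(ecdf(pvals), main = "ECDF of p-values under H1")
abline(0, 1, col = "red")      # the uniform CDF; the ECDF sits above it
mean(pvals <= 0.05)            # empirical power at the 5% level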
50,434 | Confidence intervals of bounded variable | This is a later answer but perhaps may be useful to someone. I have an R package on github (mlisi) with a set of convenient functions, including one that calculate boostrapped confidence intervals using the bias-corrected and accelerated method (Efron, 1987).
> set.seed(10)
> data = runif(1000, min=0, max=1)
> library(mlisi)
> bootMeanCI(data, nsim=10^4)
2.797862% 97.7708%
0.4874827 0.5240060
Although the BCa method is the default, you can also use the percentile method by setting the argument 'method'
> bootMeanCI(data, nsim=10^4, method="percentile")
2.5% 97.5%
0.4871504 0.5236511
You can install the package from github using devtools
library(devtools)
install_github("mattelisi/mlisi") | Confidence intervals of bounded variable | This is a later answer but perhaps may be useful to someone. I have an R package on github (mlisi) with a set of convenient functions, including one that calculate boostrapped confidence intervals usi | Confidence intervals of bounded variable
This is a later answer but perhaps may be useful to someone. I have an R package on github (mlisi) with a set of convenient functions, including one that calculates bootstrapped confidence intervals using the bias-corrected and accelerated method (Efron, 1987).
> set.seed(10)
> data = runif(1000, min=0, max=1)
> library(mlisi)
> bootMeanCI(data, nsim=10^4)
2.797862% 97.7708%
0.4874827 0.5240060
Although the BCa method is the default, you can also use the percentile method by setting the argument 'method'
> bootMeanCI(data, nsim=10^4, method="percentile")
2.5% 97.5%
0.4871504 0.5236511
You can install the package from github using devtools
library(devtools)
install_github("mattelisi/mlisi") | Confidence intervals of bounded variable
This is a later answer but perhaps may be useful to someone. I have an R package on github (mlisi) with a set of convenient functions, including one that calculate boostrapped confidence intervals usi |
50,435 | Confidence intervals of bounded variable | Your best best here would be to use bootstrapped CIs instead of parametric CIs. Here is a contrived example to show when parametric CIs would give impossible results but bootstrap CIs do not:
> # Simulate Bounded Data
> set.seed(10)
> n <- 5
> data <- rnorm(n, mean = 1, sd = 0.5)
> data[data > 1] <- 1
>
> # Sample Mean
> est <- mean(data)
>
> # Parametric CI
> p_lci <- mean(data) - 1.96 * sd(data) / sqrt(n)
> p_uci <- mean(data) + 1.96 * sd(data) / sqrt(n)
>
> # Bootstrapped CI
> nboot <- 2000
> resample_dist <- rep(NA, length = nboot)
> for (i in 1:nboot) {
+ resample_i <- sample(data, size = n, replace = TRUE)
+ resample_dist[[i]] <- mean(resample_i)
+ }
> b_lci <- quantile(resample_dist, probs = 0.025)
> b_uci <- quantile(resample_dist, probs = 0.975)
>
> # Display Results
> sprintf("Parametric: %.3f [%.3f, %.3f]", est, p_lci, p_uci)
#> [1] "Parametric: 0.785 [0.530, 1.039]"
> sprintf("Bootstrapped: %.3f [%.3f, %.3f]", est, b_lci, b_uci)
#> [1] "Bootstrapped: 0.785 [0.529, 0.982]" | Confidence intervals of bounded variable | Your best best here would be to use bootstrapped CIs instead of parametric CIs. Here is a contrived example to show when parametric CIs would give impossible results but bootstrap CIs do not:
> # Simu | Confidence intervals of bounded variable
Your best bet here would be to use bootstrapped CIs instead of parametric CIs. Here is a contrived example to show when parametric CIs would give impossible results but bootstrap CIs do not:
> # Simulate Bounded Data
> set.seed(10)
> n <- 5
> data <- rnorm(n, mean = 1, sd = 0.5)
> data[data > 1] <- 1
>
> # Sample Mean
> est <- mean(data)
>
> # Parametric CI
> p_lci <- mean(data) - 1.96 * sd(data) / sqrt(n)
> p_uci <- mean(data) + 1.96 * sd(data) / sqrt(n)
>
> # Bootstrapped CI
> nboot <- 2000
> resample_dist <- rep(NA, length = nboot)
> for (i in 1:nboot) {
+ resample_i <- sample(data, size = n, replace = TRUE)
+ resample_dist[[i]] <- mean(resample_i)
+ }
> b_lci <- quantile(resample_dist, probs = 0.025)
> b_uci <- quantile(resample_dist, probs = 0.975)
>
> # Display Results
> sprintf("Parametric: %.3f [%.3f, %.3f]", est, p_lci, p_uci)
#> [1] "Parametric: 0.785 [0.530, 1.039]"
> sprintf("Bootstrapped: %.3f [%.3f, %.3f]", est, b_lci, b_uci)
#> [1] "Bootstrapped: 0.785 [0.529, 0.982]" | Confidence intervals of bounded variable
Your best bet here would be to use bootstrapped CIs instead of parametric CIs. Here is a contrived example to show when parametric CIs would give impossible results but bootstrap CIs do not:
> # Simu |
50,436 | Similarity probabilities in SNE vs t-SNE | I think the paper defines the joint distribution (not the conditional distribution!) as
$$p_{ij} = \frac{\exp(-||x_{i} - x_{j}||^{2}/2\sigma^{2})}{\sum_{k \neq l}{\exp(-||x_{k} - x_{l}||^{2}/2\sigma^{2})}},$$
but they do not use it and instead define $$p_{ij}=\frac{p_{j|i}+p_{i|j}}{2n},$$ where $n$ is the number of data points.
As mentioned in the paper the original SNE and tSNE differ in two respects:
The cost function used by t-SNE differs from the one used by SNE in two ways: (1) it uses a
symmetrized version of the SNE cost function with simpler gradients that was briefly introduced by
Cook et al. (2007) and (2) it uses a Student-t distribution rather than a Gaussian to compute the similarity
between two points in the low-dimensional space. t-SNE employs a heavy-tailed distribution
in the low-dimensional space to alleviate both the crowding problem and the optimization problems
of SNE.
Update based on the question edit: The denominator in both cases is just the normalization that ensures $\sum_{j} p_{j|i} = 1$ and $\sum_{i,j} p_{ij} = 1$, the basic requirement for both to be distributions.
Also since there is one Gaussian here, we take sigma as it's standard deviation. In the first case there were i Gaussian, and we could have taken a common standard deviation, but instead we chose to make sigma dependent on the density of neighbors around a point. If a point has a large number of neighbors around it within distance x, the conditional distribution should drop faster, as compared to conditional distribution for points in sparser regions. | Similarity probabilities in SNE vs t-SNE | I think the paper defines the joint distribution (not the conditional distribution!) as
$$p_{ij} = \frac{\exp(-||x_{i} - x_{j}||/2\sigma^{2})}{\sum_{k \neq l}{\exp(-||x_{k} - x_{l}||/2\sigma^{2})}},$ | Similarity probabilities in SNE vs t-SNE
I think the paper defines the joint distribution (not the conditional distribution!) as
$$p_{ij} = \frac{\exp(-||x_{i} - x_{j}||^{2}/2\sigma^{2})}{\sum_{k \neq l}{\exp(-||x_{k} - x_{l}||^{2}/2\sigma^{2})}},$$
but they do not use it and instead define $$p_{ij}=\frac{p_{j|i}+p_{i|j}}{2n},$$ where $n$ is the number of data points.
As mentioned in the paper the original SNE and tSNE differ in two respects:
The cost function used by t-SNE differs from the one used by SNE in two ways: (1) it uses a
symmetrized version of the SNE cost function with simpler gradients that was briefly introduced by
Cook et al. (2007) and (2) it uses a Student-t distribution rather than a Gaussian to compute the similarity
between two points in the low-dimensional space. t-SNE employs a heavy-tailed distribution
in the low-dimensional space to alleviate both the crowding problem and the optimization problems
of SNE.
Update based on the question edit: The denominator in both cases is just the normalization that ensures $\sum_{j} p_{j|i} = 1$ and $\sum_{i,j} p_{ij} = 1$, the basic requirement for both to be distributions.
Also since there is one Gaussian here, we take sigma as it's standard deviation. In the first case there were i Gaussian, and we could have taken a common standard deviation, but instead we chose to make sigma dependent on the density of neighbors around a point. If a point has a large number of neighbors around it within distance x, the conditional distribution should drop faster, as compared to conditional distribution for points in sparser regions. | Similarity probabilities in SNE vs t-SNE
I think the paper defines the joint distribution (not the conditional distribution!) as
$$p_{ij} = \frac{\exp(-||x_{i} - x_{j}||/2\sigma^{2})}{\sum_{k \neq l}{\exp(-||x_{k} - x_{l}||/2\sigma^{2})}},$ |
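A small R sketch of the quantities discussed in this answer, using a single fixed sigma for every point instead of the per-point, perplexity-calibrated sigmas of the paper:
set.seed(1)
X <- matrix(rnorm(20 * 3), ncol = 3)            # 20 points in 3 dimensions
n <- nrow(X)
D2 <- as.matrix(dist(X))^2                      # squared Euclidean distances
sigma <- 1
W <- exp(-D2 / (2 * sigma^2)); diag(W) <- 0     # Gaussian affinities, no self-similarity

P_cond  <- W / rowSums(W)                       # p_{j|i}: every row sums to 1
P_joint <- (P_cond + t(P_cond)) / (2 * n)       # symmetrized p_{ij}; sums to 1 over all pairs
sum(P_joint)                                    # check: 1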
50,437 | If missing data process is known and it is MNAR, is it possible to get an unbiased estimate of parameter? | This is an interesting question.
First I will show that $r>0$. If $r=0$, then there is no observed data, and the likelihood function is no longer concave, thus this statistical problem is not well defined. Given $r>0$, $E(1/r)$ should be finite.
Let $p=\mathrm{Pr}(y_i<c) = 1-\mathrm{exp}(-c/\theta)$. We have $$E(1/r) = \sum_{r=1}^n \binom{n}{r} p^r (1-p)^{n-r}/r = n p (1-p)^{n-1} F(1, 1, 1-n;2, 2;p/(p-1)),$$
where $F(\cdot)$ is the generalized hypergeometric function. $E(\hat{\theta})$ can be computed by plugging $E(1/r)$ into the equation in the question.
The following numerical computation shows that $E(\hat{\theta}) = \theta$, i.e., $\hat\theta$ that considers the MNAR (missing not at random) mechanism is unbiased. Note that genhypergeo() used to calculate the generalized hypergeometric function has numerical error, but the above summation can be computed exactly.
library(hypergeo)
theta_hat = function(theta, n = 100) {
c = .5*theta # arbitrary c can be used
p = 1 - exp(-c/theta)
theta*(1-exp(-c/theta)*(c/theta+1))/(1-exp(-c/theta)) + (n *c) *
(genhypergeo(U=c(1,1,1-n), L=c(2,2), z=p/(p-1)) * n * p * (1-p)^(n-1)) - c
}
theta = 1:10
plot(theta, sapply(theta, theta_hat), ylab='theta_hat')
abline(a=0, b=1) | If missing data process is known and it is MNAR, is it possible to get an unbiased estimate of param | This is an interesting question.
First I will show that $r>0$. If $r=0$, then there is no observed data, and the likelihood function is no longer concave, thus this statistical problem is not well de | If missing data process is known and it is MNAR, is it possible to get an unbiased estimate of parameter?
This is an interesting question.
First I will show that $r>0$. If $r=0$, then there is no observed data, and the likelihood function is no longer concave, thus this statistical problem is not well defined. Given $r>0$, $E(1/r)$ should be finite.
Let $p=\mathrm{Pr}(y_i<c) = 1-\mathrm{exp}(-c/\theta)$. We have $$E(1/r) = \sum_{r=1}^n \binom{n}{r} p^r (1-p)^{n-r}/r = n p (1-p)^{n-1} F(1, 1, 1-n;2, 2;p/(p-1)),$$
where $F(\cdot)$ is the generalized hypergeometric function. $E(\hat{\theta})$ can be computed by plugging $E(1/r)$ into the equation in the question.
The following numerical computation shows that $E(\hat{\theta}) = \theta$, i.e., $\hat\theta$ that considers the MNAR (missing not at random) mechanism is unbiased. Note that genhypergeo() used to calculate the generalized hypergeometric function has numerical error, but the above summation can be computed exactly.
library(hypergeo)
theta_hat = function(theta, n = 100) {
c = .5*theta # arbitrary c can be used
p = 1 - exp(-c/theta)
theta*(1-exp(-c/theta)*(c/theta+1))/(1-exp(-c/theta)) + (n *c) *
(genhypergeo(U=c(1,1,1-n), L=c(2,2), z=p/(p-1)) * n * p * (1-p)^(n-1)) - c
}
theta = 1:10
plot(theta, sapply(theta, theta_hat), ylab='theta_hat')
abline(a=0, b=1) | If missing data process is known and it is MNAR, is it possible to get an unbiased estimate of param
This is an interesting question.
First I will show that $r>0$. If $r=0$, then there is no observed data, and the likelihood function is no longer concave, thus this statistical problem is not well de |
50,438 | Expected value of $\dfrac 1 {I(y_1<c) + I(y_2<c)}$, where $y_1$ and $y_2$ are i.i.d. random variables with exponential distribution | This random variable has a positive probability to be infinite, therefore its expectation is $+\infty$. | Expected value of $\dfrac 1 {I(y_1<c) + I(y_2<c)}$, where $y_1$ and $y_2$ are i.i.d. random variable | This random variable has a positive probability to be infinite, therefore its expectation is $+\infty$. | Expected value of $\dfrac 1 {I(y_1<c) + I(y_2<c)}$, where $y_1$ and $y_2$ are i.i.d. random variables with exponential distribution
This random variable has a positive probability to be infinite, therefore its expectation is $+\infty$. | Expected value of $\dfrac 1 {I(y_1<c) + I(y_2<c)}$, where $y_1$ and $y_2$ are i.i.d. random variable
This random variable has a positive probability to be infinite, therefore its expectation is $+\infty$. |
50,439 | Relationship between binomial regression link function and goodness-of-fit tests [now with link to R code] | I've been able to prove both effects shown here.
Let the model matrix be $X$, an $N \times (p+1)$ matrix whose first column is the intercept column (all ones) and whose rows are the $1 \times (p+1)$ vectors $x_k^T$. The fitted value from the regression is $p_k = g(x_k^T \hat\theta)$, with the link function $g(\eta)$.
Pearson test incompatibility with identity link
First I'll consider the collision between the Pearson test and the identity link.
According to Osius and Rojek (citing McCullagh and Nelder) the expected variance of the Pearson statistic is
$$
\hat{\sigma}^2 = \sum_k \left[ \frac{1}{p_k(1-p_k)} - 4 \right] - c^T I^{-1} c
$$
where $I$ is the information matrix at $\hat\theta$
$$
I = \sum_k\frac{g'(g^{-1}(p_k))^2}{p_k(1-p_k)} x_k x_k^T
$$
and $c$ is equal to
$$
c = \sum_k\frac{(1-2p_k)}{p_k(1-p_k)} g'(g^{-1}(p_k)) x_k
$$
Going further with helpful matrix notation, the $N \times (p+1)$ matrix $X$ can be written $\left[ x_k^T \right]$, i.e. a column vector of row vectors. Also define an $N \times N$ matrix $B$ and an $N \times 1$ column vector $C$:
\begin{align}
B &= \mathrm{diag}\left(\frac{g'^2_k}{p_k(1-p_k)}\right) \\
C &= \left[\frac{(1-2 p_k) g'_k}{p_k(1-p_k)}\right]
\end{align}
where $g'_k = g'(g^{-1}(p_k))$.
Thus
\begin{align}
\hat{\sigma}^2 &= C^T B^{-1} C - C^T X (X^T B X)^{-1} X^T C \\
&= C^T \left[ B^{-1} - X (X^T B X)^{-1} X^T \right] C
\end{align}
Defining $X'= B^{1/2} X$ and $C' = B^{-1/2} C$, this can be rewritten as
$$
\hat{\sigma}^2 = C'^T \left[ I - X' (X'^T X')^{-1} X'^T \right] C'
$$
This equation is a quadratic form involving the hat matrix (i.e., the projection matrix) based on $X'$:
$$
H' = X' (X'^T X')^{-1} X'^T
$$
and the vector $C'$.
The matrices $X'$ and $C'$ can be written
\begin{align}
X' = B^{1/2} X &= \left[ \begin{array}{c} \frac{g'_k x_k^T}{\sqrt{p_k(1-p_k)}} \end{array} \right] \\
C' = B^{-1/2} C &= \left[ \frac{1-2p_k}{\sqrt{p_k(1-p_k)}}\right]
\end{align}
Now, when a non-identity link is used, $g'_k \neq 1$, it is not constant across observations, and $p_k$ is not linearly related to $x_k$.
However, using the identity link where $g'_k=1$, $C'$ becomes a vector in the column space of $X'$. Explicitly: Since in this case $p_k = x_k^T \hat\theta$, and the first element of $x_k$ is 1 for the intercept, $1 - 2 p_k = x_k^T \hat\theta'$ where
$\hat\theta' = \left[1-2\hat\theta_0,-2\hat\theta_1,-2\hat\theta_2,\ldots \right]$, and thus
$C' = X' \hat\theta'$.
Since $C'$ is in the column space of $X'$, $H' C' = C'$. Therefore, the expected variance is
$$
\hat{\sigma}^2 = C'^T \left[ I - H' \right] C' = 0
$$
This proves that there is a collision between the Pearson test and the identity link.
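A quick numerical check of this collision (a sketch in R; the covariate values and $\hat\theta$ below are made up, chosen so that every $p_k$ is a valid probability):
set.seed(1)
N <- 50
X <- cbind(1, runif(N, 0.1, 0.4))            # intercept plus one covariate
theta <- c(0.2, 0.8)
p <- as.vector(X %*% theta)                  # identity link: p_k = x_k' theta
B <- diag(1 / (p * (1 - p)))                 # g'_k = 1, so B = diag(1 / (p_k (1 - p_k)))
Xp <- sqrt(B) %*% X                          # X' = B^(1/2) X
Cp <- (1 - 2 * p) / sqrt(p * (1 - p))        # C' = B^(-1/2) C
H <- Xp %*% solve(t(Xp) %*% Xp) %*% t(Xp)    # hat matrix H'
max(abs(H %*% Cp - Cp))                      # ~ 0: C' lies in the column space of X'
drop(t(Cp) %*% (diag(N) - H) %*% Cp)         # expected variance ~ 0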
Deviance test incompatibility with logit link
Things are very similar for the deviance statistic (as it is a part of the same power-divergence family as the Pearson statistic). Everything above still holds, adding the subscript 2 and with the replacement that
$$
C_2 = \left[-2\log \left(\frac{p_k}{1-p_k} \right) g'_k\right]
$$
and thus
$$
C'_2 = \left[-2\log \left(\frac{p_k}{1-p_k} \right) \sqrt{p_k(1-p_k)}\right]
$$
Now $C'_2$ isn't in the column space of $X'_2$ for the identity link $g'=1$ nor indeed for most links.
However, with the logit link, $g'_k = p_k(1-p_k)$ and $\log \left( \frac{p_k}{1-p_k} \right) = x_k^T \hat\theta$, and thus
\begin{align}
X'_2 &= \left[ \sqrt{p_k(1-p_k)} x_k^T \right] \\
C'_2 &= \left[-2\sqrt{p_k(1-p_k)} x_k^T \hat\theta\right]
\end{align}
It immediately follows that $C'_2 = -2 X'_2 \hat\theta$, that $H'_2 C'_2 = C'_2$, and therefore $\hat{\sigma}^2_2 = 0$.
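An analogous numerical check for the logit case (again a sketch with made-up values; recall that in the notation above $p_k = g(x_k^T \hat\theta)$ with $g$ the logistic function, so $\log(p_k/(1-p_k)) = x_k^T \hat\theta$):
set.seed(1)
N <- 50
X <- cbind(1, runif(N, -1, 1))
theta <- c(0.5, 1)
p <- plogis(as.vector(X %*% theta))               # logit link
Xp2 <- sqrt(p * (1 - p)) * X                      # X'_2: rows sqrt(p_k (1 - p_k)) x_k'
Cp2 <- -2 * log(p / (1 - p)) * sqrt(p * (1 - p))  # C'_2
H2 <- Xp2 %*% solve(t(Xp2) %*% Xp2) %*% t(Xp2)
max(abs(H2 %*% Cp2 - Cp2))                        # ~ 0: C'_2 lies in the column space of X'_2
drop(t(Cp2) %*% (diag(N) - H2) %*% Cp2)           # expected variance ~ 0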
The above proves that the Pearson test does not work with the identity link and that the deviance test does not work with the logit link. The power divergence family is a continuous family of tests indexed by $\lambda$, of which the Pearson statistic ($\lambda = 1$) and the deviance statistic ($\lambda \to 0$) are special cases. It seems possible that for each $\lambda$ there is a corresponding link function for which the expected variance (according to a literal use of the formula provided in the literature) is null.
General case
In the general case, for the statistic $SD_\lambda$, the vector $C$ is given by
$$
C_\lambda = \frac{2}{\lambda(\lambda+1)} \left[ \left( p_k^{-\lambda } - (1-p_k)^{-\lambda} \right) g'_k \right]
$$
This is derived from the summand in the definition of $SD_\lambda$, taking the difference between the $\lambda=1$ and $\lambda=0$ cases, and including a $g'$ factor. You can check that $\lambda = 1$ and $\lambda \to 0$ give the Pearson and deviance cases above. With a general link $g$,
\begin{align}
X'_\lambda &= \left[ \begin{array}{c} \frac{g'_k x_k^T}{\sqrt{p_k(1-p_k)}} \end{array} \right] \\
C'_\lambda &= \frac{2}{\lambda(\lambda+1)} \left[ \left( p_k^{-\lambda } - (1-p_k)^{-\lambda} \right) \sqrt{p_k(1-p_k)} \right]
\end{align}
In order for $C'_\lambda$ to be in the column space of $X'_\lambda$, there must be some vector $v$ such that
$$
g'_k x_k^T v = \left( p_k^{-\lambda } - (1-p_k)^{-\lambda} \right) p_k (1-p_k)
$$
Letting $v = \alpha[1,0,...] + \beta \theta$, we have $x_k^T v = \alpha + \beta\, g^{-1}(p_k)$.
Since the derivative of $h = g^{-1}$ is $h'(p) = 1/g'(g^{-1}(p))$, this is satisfied by a function $h$ obeying the differential equation
$$
\frac{\alpha+\beta h(p)}{h'(p)} = \frac{2}{\lambda(\lambda+1)} \left( p^{-\lambda } - (1-p)^{-\lambda} \right) p (1-p)
$$
Any solution to this differential equation will specify an (inverse) link function $h(p)$ incompatible with the $SD_\lambda$ test.
For example, for the Pearson test ($\lambda = 1$), the entire family of such functions is
$$
h_1(p) = -\frac{\alpha}{\beta}+\gamma (1-2p)^{-\beta/2}
$$
which includes the identity for $\alpha=1$, $\beta=-2$, and $\gamma = -1/2$ (indeed $h_1(p) = \tfrac{1}{2} - \tfrac{1}{2}(1-2p) = p$).
For the deviance test, this d.e. in the limit $\lambda \to 0$ is
$$
\frac{\alpha+\beta h(p)}{h'(p)} = -2 \log \left(\frac{p}{1-p} \right) p (1-p)
$$
The family of solutions is
$$
h_0(p) = -\frac{\alpha}{\beta}+\gamma \log \left(\frac{p}{1-p} \right)^{-\beta/2}
$$
which includes the logit for $\alpha=0$, $\beta=-2$, and $\gamma=1$ (indeed $h_0(p) = \log\frac{p}{1-p}$).
The differential equation is difficult for Mathematica to solve for arbitrary $\lambda > 0$, even with $\alpha=0$ and $\beta=1$. Trying out specific values like $\lambda=2/3$ (a choice suggested by Cressie and Reed), the solution is quite horrendous, involving the exponent of a term involving logs, powers, and two separate hypergeometric functions, so the inability to find a solution for general $\lambda$ is understandable. | Relationship between binomial regression link function and goodness-of-fit tests [now with link to R | I've been able to prove both effects shown here.
Let the model matrix be $X$, an $N \times (p+1)$ matrix whose first column is the intercept column (all ones) and whose $1 \times (p+1)$ columns are $x | Relationship between binomial regression link function and goodness-of-fit tests [now with link to R code]
I've been able to prove both effects shown here.
Let the model matrix be $X$, an $N \times (p+1)$ matrix whose first column is the intercept column (all ones) and whose $1 \times (p+1)$ columns are $x_k^T$. The fitted value from the regression is $p_k = g(x_k^T \hat\theta)$, with the link function $g(\eta)$.
Pearson test incompatibility with identity link
First I'll consider the collision between the Pearson test and the identity link.
According to Osius and Rojek (citing McCullagh and Nelder) the expected variance of the Pearson statistic is
$$
\hat{\sigma}^2 = \sum_k \left[ \frac{1}{p_k(1-p_k)} - 4 \right] - c^T I^{-1} c
$$
where $I$ is the information matrix at $\hat\theta$
$$
I = \sum_k\frac{g'(g^{-1}(p_k))^2}{p_k(1-p_k)} x_k x_k^T
$$
and $c$ is equal to
$$
c = \sum_k\frac{(1-2p_k)}{p_k(1-p_k)} g'(g^{-1}(p_k)) x_k
$$
Going further with helpful matrix notation, the $N \times (p+1)$ matrix $X$ can be written $\left[ x_k^T \right]$, i.e. a column vector of row vectors. Also define an $N \times N$ matrix $B$ and an $N \times 1$ column vector $C$:
\begin{align}
B &= \mathrm{diag}\left(\frac{g'^2_k}{p_k(1-p_k)}\right) \\
C &= \left[\frac{(1-2 p_k) g'_k}{p_k(1-p_k)}\right]
\end{align}
where $g'_k = g'(g^{-1}(p_k))$.
Thus
\begin{align}
\hat{\sigma}^2 &= C^T B^{-1} C - C^T X (X^T B X)^{-1} X^T C \\
&= C^T \left[ B^{-1} - X (X^T B X)^{-1} X^T \right] C
\end{align}
Defining $X'= B^{1/2} X$ and $C' = B^{-1/2} C$, this can be rewritten as
$$
\hat{\sigma}^2 = C'^T \left[ I - X' (X'^T X')^{-1} X'^T \right] C'
$$
This equation is of the form of quadratic form involving the hat matrix (e.g. projection matrix) based on $X'$:
$$
H' = X' (X'^T X')^{-1} X'^T
$$
and the vector $C'$.
The matrices $X'$ and $C'$ can be written
\begin{align}
X' = B^{1/2} X &= \left[ \begin{array}{c} \frac{g'_k x_k^T}{\sqrt{p_k(1-p_k)}} \end{array} \right] \\
C' = B^{-1/2} C &= \left[ \frac{1-2p_k}{\sqrt{p_k(1-p_k)}}\right]
\end{align}
Now, when a non-identity link is used $g'_k \neq 1$ and is not constant and $p_k$ is not linearly related to $x_k$.
However, using the identity link where $g'_k=1$, $C'$ becomes a vector in the column space of $X'$. Explicitly: Since in this case $p_k = x_k^T \hat\theta$, and the first element of $x_k$ is 1 for the intercept, $1 - 2 p_k = x_k^T \hat\theta'$ where
$\hat\theta' = \left[1-2\hat\theta_0,-2\hat\theta_1,-2\hat\theta_2,\ldots \right]$, and thus
$C' = X' \hat\theta'$.
Since $C'$ is in the column space of $X'$, $H' C' = C'$. Therefore, the expected variance is
$$
\hat{\sigma}^2 = C'^T \left[ I - H' \right] C' = 0
$$
This proves that there is a collision between the Pearson test and the identity link.
Deviance test incompatibility with logit link
Things are very similar for the deviance statistic (as it is a part of the same power-divergence family as the Pearson statistic). Everything above still holds, adding the subscript 2 and with the replacement that
$$
C_2 = \left[-2\log \left(\frac{p_k}{1-p_k} \right) g'_k\right]
$$
and thus
$$
C'_2 = \left[-2\log \left(\frac{p_k}{1-p_k} \right) \sqrt{p_k(1-p_k)}\right]
$$
Now $C'_2$ isn't in the column space of $X'_2$ for the identity link $g'=1$ nor indeed for most links.
However, with the logit link, $g'_k = p_k(1-p_k)$ and $\log \left( \frac{p_k}{1-p_k} \right) = x_k^T \hat\theta$, and thus
\begin{align}
X'_2 &= \left[ \sqrt{p_k(1-p_k)} x_k^T \right] \\
C'_2 &= \left[-2\sqrt{p_k(1-p_k)} x_k^T \hat\theta\right]
\end{align}
It immediately follows that $C'_2 = -2 X'_2 \hat\theta$, that $H'_2 C'_2 = C'_2$, and therefore $\hat{\sigma}^2_2 = 0$.
The above proves that the Pearson test does not work with the identity link and that the deviance test does not work with the logit link. The power divergence family is a continuous family of tests indexed by $\lambda$, of which the Pearson statistic ($\lambda = 1$) and the deviance statistic ($\lambda \to 0$) are special cases. It seems possible that for each $\lambda$ there is a corresponding link function for which the expected variance (according to a literal use of the formula provided in the literature) is null.
General case
In the general case, for the statistic $SD_\lambda$, the vector $C$ is given by
$$
C_\lambda = \frac{2}{\lambda(\lambda+1)} \left[ \left( p_k^{-\lambda } - (1-p_k)^{-\lambda} \right) g'_k \right]
$$
This is derived from the summand in the definition of $SD_\lambda$, taking the difference between the $\lambda=1$ and $\lambda=0$ cases, and including a $g'$ factor. You can check that $\lambda = 1$ and $\lambda \to 0$ give the Pearson and deviance cases above. With a general link $g$,
\begin{align}
X'_\lambda &= \left[ \begin{array}{c} \frac{g'_k x_k^T}{\sqrt{p_k(1-p_k)}} \end{array} \right] \\
C'_\lambda &= \frac{2}{\lambda(\lambda+1)} \left[ \left( p_k^{-\lambda } - (1-p_k)^{-\lambda} \right) \sqrt{p_k(1-p_k)} \right]
\end{align}
In order for $C'_\lambda$ to be in the column space of $X'_\lambda$, there must be some vector $v$ such that
$$
g'_k x_k^T v = \left( p_k^{-\lambda } - (1-p_k)^{-\lambda} \right) p_k (1-p_k)
$$
Letting $v = \alpha[1,0,...] + \beta \theta$, $x_k^T v = v_0 + g^{-1}(p_k)$.
Since the derivative of $h = g^{-1}$ is $h'(p) = 1/g'(g^{-1}(p))$, this is satisfied by a function $h$ obeying the differential equation
$$
\frac{\alpha+\beta h(p)}{h'(p)} = \frac{2}{\lambda(\lambda+1)} \left( p_k^{-\lambda } - (1-p_k)^{-\lambda} \right) p_k (1-p_k)
$$
Any solution to this differential equation will specify an (inverse) link function $h(p)$ incompatible with the $SD_\lambda$ test.
For example, for the Pearson test ($\lambda = 1$), the entire family of such functions is
$$
h_1(p) = -\frac{\alpha}{\beta}+\gamma (1-2p)^{-\beta/2}
$$
which includes the identity for $\alpha=1$, $\beta=-2$, and $\gamma = -1/2$.
For the deviance test, this d.e. in the limit $\lambda \to 0$ is
$$
\frac{\alpha+\beta h(p)}{h'(p)} = -2 \log \left(\frac{p}{1-p} \right) p_k (1-p_k)
$$
The family of solutions is
$$
h_0(p) = -\frac{\alpha}{\beta}+\gamma \log \left(\frac{p}{1-p} \right)^{-\beta/2}
$$
which includes the logit for $\alpha=0$, $\beta=-2$, and $\gamma=1$.
The differential equation is difficult for Mathematica to solve for arbitrary $\lambda > 0$, even with $\alpha=0$ and $\beta=1$. Trying out specific values like $\lambda=2/3$ (a choice suggested by Cressie and Reed), the solution is quite horrendous, involving the exponent of a term involving logs, powers, and two separate hypergeometric functions, so the inability to find a solution for general $\lambda$ is understandable. | Relationship between binomial regression link function and goodness-of-fit tests [now with link to R
I've been able to prove both effects shown here.
Let the model matrix be $X$, an $N \times (p+1)$ matrix whose first column is the intercept column (all ones) and whose $1 \times (p+1)$ columns are $x |
50,440 | Unsupervised outlier detection in 2D space | Your task seems to be rather a clustering than an outlier detection task.
In the following, I use this popular data set of User locations (Joensuu).
Running OPTICS with the parameters
-dbc.in /tmp/MopsiLocations2012-Joensuu.txt
-algorithm clustering.optics.OPTICSXi -opticsxi.xi 0.05
-algorithm.distancefunction geo.LngLatDistanceFunction
-optics.epsilon 5000.0 -optics.minpts 50
yields the following (hierarchical) clustering. You can see there are three larger clusters (corresponding to Joensuu, Lieska, and Savijärvi; note that the plot has latitude and longitude 'the wrong way'), and some noise (violet here) that is not density-reachable with 5km distance and 50 points. These are your outliers.
You can tell there are some subclusters in both cities. For example one corresponding to the Prisma Joensuu shopping mall. To see more detail, it is helpful to further reduce epsilon, maybe to just 500 meters. | Unsupervised outlier detection in 2D space | Your task seems to be rather a clustering than an outlier detection task.
In the following, I use this popular data set of User locations (Joensuu).
Running OPTICS with the parameters
-dbc.in /tmp/Mop | Unsupervised outlier detection in 2D space
Your task seems to be rather a clustering than an outlier detection task.
In the following, I use this popular data set of User locations (Joensuu).
Running OPTICS with the parameters
-dbc.in /tmp/MopsiLocations2012-Joensuu.txt
-algorithm clustering.optics.OPTICSXi -opticsxi.xi 0.05
-algorithm.distancefunction geo.LngLatDistanceFunction
-optics.epsilon 5000.0 -optics.minpts 50
yields the following (hierarchical) clustering. You can see there are three larger clusters (corresponding to Joensuu, Lieska, and Savijärvi; note that the plot has latitude and longitude 'the wrong way'), and some noise (violet here) that is not density-reachable with 5km distance and 50 points. These are your outliers.
You can tell there are some subclusters in both cities. For example one corresponding to the Prisma Joensuu shopping mall. To see more detail, it is helpful to further reduce epsilon, maybe to just 500 meters. | Unsupervised outlier detection in 2D space
Your task seems to be rather a clustering than an outlier detection task.
In the following, I use this popular data set of User locations (Joensuu).
Running OPTICS with the parameters
-dbc.in /tmp/Mop |
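For readers working in R rather than ELKI, a rough equivalent of the OPTICS/Xi analysis above can be sketched with the dbscan package. Note this is only a sketch: it assumes the file holds two coordinate columns and it uses Euclidean distance on the raw coordinates instead of the geodetic LngLatDistanceFunction, so the epsilon/scale interpretation differs.
library(dbscan)
xy  <- as.matrix(read.table("MopsiLocations2012-Joensuu.txt"))  # assumed: two coordinate columns
res <- optics(xy, minPts = 50)
plot(res)                                  # reachability plot; valleys correspond to clusters
cl  <- extractXi(res, xi = 0.05)           # Xi cluster extraction, analogous to OPTICSXi
plot(xy, col = cl$cluster + 1L, pch = 20)  # cluster 0 = noise, i.e. the outlier points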
50,441 | Unsupervised outlier detection in 2D space | To answer Edit 2 of this old question -- one way would be to compute the Mahalanobis distance for each point to the center of the cluster, then delete those above a certain cutoff distance. | Unsupervised outlier detection in 2D space | To answer Edit 2 of this old question -- one way would be to compute the Mahalanobis distance for each point to the center of the cluster, then delete those above a certain cutoff distance. | Unsupervised outlier detection in 2D space
To answer Edit 2 of this old question -- one way would be to compute the Mahalanobis distance for each point to the center of the cluster, then delete those above a certain cutoff distance. | Unsupervised outlier detection in 2D space
To answer Edit 2 of this old question -- one way would be to compute the Mahalanobis distance for each point to the center of the cluster, then delete those above a certain cutoff distance. |
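A minimal sketch of the Mahalanobis idea above in R (xy is a hypothetical two-column matrix of points; the 97.5% chi-squared quantile with 2 degrees of freedom is one common choice of cutoff):
md2 <- mahalanobis(xy, center = colMeans(xy), cov = cov(xy))  # squared Mahalanobis distances
outliers <- md2 > qchisq(0.975, df = 2)                       # flag points beyond the cutoff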
50,442 | Is it good practice to perform model parameter tuning on a random subsampling of a large dataset? | This question is really broad. Depending on the data and model, it can be a good practice or a bad one.
The overall idea is to think about the "complexity of data and model". We may need to review Bias and Variance trade-off, i.e., when under-fitting and over-fitting will happen and how to detect it.
How to know if a learning curve from SVM model suffers from bias or variance?
To your question about tuning on samples: In general, the more complex the data is, the harder it is, with a limited sample size, to get "representative" samples.
If the data is "really complex" and the samples are not "representative", using samples to tune parameters is a bad practice. The way to fix this is to try a larger sample and use more complex models (such as a neural network).
You can see my answer is really unclear in many parts; this is because it is hard to say how complex the data and model are, and how many samples are needed to be "representative". | Is it good practice to perform model parameter tuning on a random subsampling of a large dataset? | This question is really broad. Depending on the data and model, it can be a good practice or a bad one.
The overall idea is to think about the "complexity of data and model". We may need to review Bia | Is it good practice to perform model parameter tuning on a random subsampling of a large dataset?
This question is really broad. Depending on the data and model, it can be a good practice or a bad one.
The overall idea is to think about the "complexity of data and model". We may need to review Bias and Variance trade-off, i.e., when under-fitting and over-fitting will happen and how to detect it.
How to know if a learning curve from SVM model suffers from bias or variance?
To your question about tuning on samples: In general, the more complex the data is, the harder it is, with a limited sample size, to get "representative" samples.
If the data is "really complex" and the samples are not "representative", using samples to tune parameters is a bad practice. The way to fix this is to try a larger sample and use more complex models (such as a neural network).
You can see my answer is really unclear in many parts; this is because it is hard to say how complex the data and model are, and how many samples are needed to be "representative". | Is it good practice to perform model parameter tuning on a random subsampling of a large dataset?
This question is really broad. Depends on the data and model, it can be a good practice and can be bad.
The overall idea is to think about the "complexity of data and model". We may need to review Bia |
50,443 | Inequality regarding expectation of function of a random variable | I don't know how to answer this in general but here's something. Maybe this will give you or someone else some ideas if nothing else.
Let us assume that $X$ belongs to the one-parameter exponential family with natural parameter $\theta$, so that
$$
f(x; \theta) = \exp \left( x \theta - \kappa(\theta) + c(x) \right)
$$
for some functions $\kappa$ and $c$. The expectations $E(e^{-aX})$ that you're considering are moment generating functions evaluated at $t = -a$ so let's consider the MGF $M_{X_\theta}(t)$ of $X_\theta$, where I'm subscripting with $\theta$ to emphasize the dependence on $\theta$. Since we are only varying $\theta$ I'm going to just write $M_\theta$ instead of $M_{X_\theta}$. It can be shown that
$$
M_\theta(t) = \exp \left( \kappa(t + \theta) - \kappa(\theta) \right).
$$
Since $e^a \geq e^b \implies a \geq b$ we can compare $\log M_{\theta_1}(t) = \kappa(t + \theta_1) - \kappa(\theta_1)$ with $\log M_{\theta_2}(t) = \kappa(t + \theta_2) - \kappa(\theta_2)$.
Note that
$$
\frac{\log M_{\theta_1}(t)}{\log M_{\theta_2}(t)} = \frac{\kappa(t + \theta_1) - \kappa(\theta_1)}{\kappa(t + \theta_2) - \kappa(\theta_2)} = \frac{{1 \over t}}{{1 \over t}} \times \frac{\kappa(t + \theta_1) - \kappa(\theta_1)}{\kappa(t + \theta_2) - \kappa(\theta_2)} \approx \frac{\kappa'(\theta_1)}{\kappa'(\theta_2)}
$$
if $t$ is small.
We know that $E(X_\theta) = \kappa'(\theta)$, and if we make the assumption that $E(X_\theta)$ is monotonically decreasing in $\theta$ then
$$
\theta_1 \geq \theta_2 \implies \frac{\kappa'(\theta_1)}{\kappa'(\theta_2)} = \frac{E(X_{\theta_1})}{E(X_{\theta_2})} \leq 1.
$$
So this suggests that for this particular family of distributions when $t$ is small we have that $M_{\theta_1}(t) \leq M_{\theta_2}(t)$.
None of this makes sense if $E(e^{tX})$ is not finite, so we want to restrict ourselves to distributions where the MGF converges. Here it says that "[e]very distribution possessing a moment-generating function is a member of a natural exponential family", so it seems that this result actually applies to a significant chunk of the one-parameter distributions that we could care about.
Let us assume that $X$ belongs to the one-parameter exponential fa | Inequality regarding expectation of function of a random variable
I don't know how to answer this in general but here's something. Maybe this will give you or someone else some ideas if nothing else.
Let us assume that $X$ belongs to the one-parameter exponential family with natural parameter $\theta$, so that
$$
f(x; \theta) = \exp \left( x \theta - \kappa(\theta) + c(x) \right)
$$
for some functions $\kappa$ and $c$. The expectations $E(e^{-aX})$ that you're considering are moment generating functions evaluated at $t = -a$ so let's consider the MGF $M_{X_\theta}(t)$ of $X_\theta$, where I'm subscripting with $\theta$ to emphasize the dependence on $\theta$. Since we are only varying $\theta$ I'm going to just write $M_\theta$ instead of $M_{X_\theta}$. It can be shown that
$$
M_\theta(t) = \exp \left( \kappa(t + \theta) - \kappa(\theta) \right).
$$
Since $e^a \geq e^b \implies a \geq b$ we can compare $\log M_{\theta_1}(t) = \kappa(t + \theta_1) - \kappa(\theta_1)$ with $\log M_{\theta_2}(t) = \kappa(t + \theta_2) - \kappa(\theta_2)$.
Note that
$$
\frac{\log M_{\theta_1}(t)}{\log M_{\theta_2}(t)} = \frac{\kappa(t + \theta_1) - \kappa(\theta_1)}{\kappa(t + \theta_2) - \kappa(\theta_2)} = \frac{{1 \over t}}{{1 \over t}} \times \frac{\kappa(t + \theta_1) - \kappa(\theta_1)}{\kappa(t + \theta_2) - \kappa(\theta_2)} \approx \frac{\kappa'(\theta_1)}{\kappa'(\theta_2)}
$$
if $t$ is small.
We know that $E(X_\theta) = \kappa'(\theta)$, and if we make the assumption that $E(X_\theta)$ is monotonically decreasing in $\theta$ then
$$
\theta_1 \geq \theta_2 \implies \frac{\kappa'(\theta_1)}{\kappa'(\theta_2)} = \frac{E(X_1)}{E(X_2)} \leq 1.
$$
So this suggests that for this particular family of distributions when $t$ is small we have that $M_{\theta_1}(t) \leq M_{\theta_2}(t)$.
None of this makes sense if $E(e^{tX})$ is not finite so we want to restrict ourselves to distributions where the MGF converges. Here is says that "[e]very distribution possessing a moment-generating function is a member of a natural exponential family" so it seems that this result actually applies to a significant chunk of the 1 parameter distributions that we could care about. | Inequality regarding expectation of function of a random variable
I don't know how to answer this in general but here's something. Maybe this will give you or someone else some ideas if nothing else.
Let us assume that $X$ belongs to the one-parameter exponential fa |
50,444 | Interpreting correlations between two time-series | You seem to have looked at spurious results by looking at correlations of absolute values rather than correlation of changes.
If so, then see these two links for an explanation (ignore otherwise): quant.stackexchange.com/questions/489/correlation-between-prices-or-returns & stats.stackexchange.com/a/133171/114856.
I write "seem" as you did not provide your code and I cannot reproduce your numbers.
#Attempt to reproduce; running() below is from the gtools package
library(gtools)
var_1 <- ts(c(25.1,21.8,15.6,28.0,25.8,26.2,29.9,30.6,28.3,22.1,20.2,20.5,18.4,12.0,8.1,8.6,8.2,9.17,8.8,9.7,10.4))
var_2 <- ts(c(-13.1,-7.5,0.1,-3.4,-6.0,-4.6,-0.1,4.8,4.3,-1.1,-6.5,-10.0,-9.2,-7.8,-7.6,-7.1,-11.4,-14.2,-19.6,-22.9,-23.5))
running(var_1, var_2, fun=cor, width=5, by=1, allow.fewer=TRUE, align=c("right"), simplify=TRUE)
#Same thing but on changes (use non-log approach as neg values)
chg_1 <- diff(var_1)/var_1[-length(var_1)]
chg_2 <- diff(var_2)/var_2[-length(var_2)]
running(chg_1, chg_2, fun=cor, width=5, by=1, allow.fewer=TRUE, align=c("right"), simplify=TRUE) | Interpreting correlations between two time-series | You seem to have looked at spurious results by looking at correlations of absolute values rather than correlation of changes.
If so, then see these two links for an explanation (ignore otherwise): qu | Interpreting correlations between two time-series
You seem to have looked at spurious results by looking at correlations of absolute values rather than correlation of changes.
If so, then see these two links for an explanation (ignore otherwise): quant.stackexchange.com/questions/489/correlation-between-prices-or-returns & stats.stackexchange.com/a/133171/114856.
I write "seem" as you did not provide your code and I cannot reproduce your numbers.
#Attempt to reproduce
var_1 <- ts(c(25.1,21.8,15.6,28.0,25.8,26.2,29.9,30.6,28.3,22.1,20.2,20.5,18.4,12.0,8.1,8.6,8.2,9.17,8.8,9.7,10.4))
var_2 <- ts(c(-13.1,-7.5,0.1,-3.4,-6.0,-4.6,-0.1,4.8,4.3,-1.1,-6.5,-10.0,-9.2,-7.8,-7.6,-7.1,-11.4,-14.2,-19.6,-22.9,-23.5))
running(var_1, var_2, fun=cor, width=5, by=1, allow.fewer=TRUE, align=c("right"), simplify=TRUE)
#Same thing but on changes (use non-log approach as neg values)
chg_1 <- diff(var_1)/var_1[-length(var_1)]
chg_2 <- diff(var_2)/var_2[-length(var_2)]
running(chg_1, chg_2, fun=cor, width=5, by=1, allow.fewer=TRUE, align=c("right"), simplify=TRUE) | Interpreting correlations between two time-series
You seem to have looked at spurious results by looking at correlations of absolute values rather than correlation of changes.
If so, then see these two links for an explanation (ignore otherwise): qu |
50,445 | Time series forecasting using statistical tools | Apparently the only information you have is the numbers of days and the number of news articles published on the feed.
I think what you are really asking is "Is this news feed worth my effort to poll?" and the desired answer is "Yes or No".
Therefore, you are actually wanting to perform a logistic regression. The result of logistic regression is a probability and in your case, the probability of whether you should poll the news feed. After you make your model, you then need to decide your threshold for action: Perhaps for Feed A (which is really important), you want to poll it if the probability is > 75% but for Feed B (maybe it is not so important) you may decide to have a higher threshold, maybe poll it only if the probability of a new feed is > 90%.
In your case, you have one additional component - time dependent data that you think are involved in this whole process.
I would suggest creating moving window logistic regression (like this example for linear regression). You of course would have to tune the number of days you want to incorporate on your model based on some modeling you would do ahead of time, and of course you would have to evaluate your model periodically! | Time series forecasting using statistical tools | Apparently the only information you have is the numbers of days and the number of news articles published on the feed.
I think what you are really asking is "Is this news feed worth my effort to poll | Time series forecasting using statistical tools
Apparently the only information you have is the numbers of days and the number of news articles published on the feed.
I think what you are really asking is "Is this news feed worth my effort to poll?" and the desired answer is "Yes or No".
Therefore, you are actually wanting to perform a logistic regression. The result of logistic regression is a probability and in your case, the probability of whether you should poll the news feed. After you make your model, you then need to decide your threshold for action: Perhaps for Feed A (which is really important), you want to poll it if the probability is > 75% but for Feed B (maybe it is not so important) you may decide to have a higher threshold, maybe poll it only if the probability of a new feed is > 90%.
In your case, you have one additional component - time dependent data that you think are involved in this whole process.
I would suggest creating moving window logistic regression (like this example for linear regression). You of course would have to tune the number of days you want to incorporate on your model based on some modeling you would do ahead of time, and of course you would have to evaluate your model periodically! | Time series forecasting using statistical tools
Apparently the only information you have is the numbers of days and the number of news articles published on the feed.
I think what you are really asking is "Is this news feed worth my effort to poll |
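A minimal sketch of the moving-window logistic regression suggested above (the data frame feed and its columns new_article, weekday and articles_lag1 are hypothetical placeholders):
window <- 30                                   # days in each training window
probs  <- rep(NA_real_, nrow(feed))
for (i in (window + 1):nrow(feed)) {
  train    <- feed[(i - window):(i - 1), ]
  fit      <- glm(new_article ~ weekday + articles_lag1, data = train, family = binomial)
  probs[i] <- predict(fit, newdata = feed[i, ], type = "response")
}
poll <- probs > 0.75                           # poll when the predicted probability exceeds the threshold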
50,446 | Time series forecasting using statistical tools | Please don't get upset, I'm a newbie. My idea is even simpler than yours.
So you have to poll in a smart way, that is, when the probability that there are new articles on the feed is higher.
In my opinion the point is: "What is the probability of getting new articles from $feed_n$ if I poll it today?" If the probability is small, we don't waste time polling.
I'm thinking of a Poisson distribution. In the beginning you have to poll each feed at the same frequency; once you have collected enough data you can start using it. You can also update the feed models each month using the data you collected.
Most likely there are correlations between your feeds, but as a simple start, treating the feeds as independent is a good option. | Time series forecasting using statistical tools | Please don't get upset, I'm a newbie. My idea is even simpler than yours.
So you have to poll in a smart way, that is when the probability there are news articles on the feed is higher.
In my opinion | Time series forecasting using statistical tools
Please don't get upset, I'm a newbie. My idea is even simpler than yours.
So you have to poll in a smart way, that is when the probability there are news articles on the feed is higher.
In my opinion the point is: "What is the probability to get new articles from $feed_n$ if today I poll it ?", if the probability is little, we don't waste time polling.
I'm thinking Poisson distribution. In the begin you have to poll each feed using the same frequency, once you get enough data you can start using it.You can also update the feeds models each month using the data you collected.
Most likely there are correlations between your feeds, but as simple start thinking feeds indipendent is a good option. | Time series forecasting using statistical tools
Please don't get upset, I'm a newbie. My idea is even simpler than yours.
So you have to poll in a smart way, that is when the probability there are news articles on the feed is higher.
In my opinion |
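A minimal sketch of this Poisson idea in R (articles_per_day is a hypothetical vector of daily article counts observed for one feed):
lambda_hat <- mean(articles_per_day)   # estimated daily article rate for the feed
p_new      <- 1 - exp(-lambda_hat)     # P(at least one new article today) under a Poisson model
poll_today <- p_new > 0.75             # poll only if this probability is high enough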
50,447 | Does the property of equivariance to translation of convolution layers help to learn translation-invariant features? [duplicate] | What causes convolutional neural networks to be somewhat translation invariant is the max pooling. Each neuron has a receptive field in the original image. For example, if you have two convolutional layers with stride 1 and one 2x2 max pooling step in between,
That is, input image --> C3x3/1 --> M2x2/2 --> C3x3/1 --> output feature map,
then each neuron in the output feature map sees 8x8 patches in the original image, i.e. has a 8x8 receptive field. That neuron gets excited by stuff that happens anywhere in this 8x8 region (ignoring border effects) because the spatial information was lost in the max pooling step. If you add more max pooling steps to the network you will increase this receptive field.
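The 8x8 figure can be reproduced with the usual receptive-field recurrence r_out = r_in + (k - 1) * j_in and j_out = j_in * s, applied layer by layer (a small sketch in R):
layers <- data.frame(k = c(3, 2, 3), s = c(1, 2, 1))  # C3x3/1, M2x2/2, C3x3/1
r <- 1; j <- 1                                        # receptive field and jump at the input
for (i in seq_len(nrow(layers))) {
  r <- r + (layers$k[i] - 1) * j
  j <- j * layers$s[i]
}
r                                                     # 8, i.e. an 8x8 receptive field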
Typically, in the last few layers, densely connected layers are used, which combine the information from the different receptive fields. There, different regions of the image are connected with different weights, so it does matter where the information came from.
For example, in face recognition software you might want to abstract the information a bit through max pooling, but not too much, because the information about how the different image components (eyes, nose, etc.) are spatially related is important.
Or, expanding on the example you gave. Imagine you were to train a network with images of cats and dogs in which the animals only ever appear in the upper left corner. Furthermore, you design the network such that the receptive field of your last feature map before the fully connected layer is a quarter of the input image. Then the classifier would not be able to recognise a cat or a dog in the lower right corner. The weights in the fully connected layer connecting to that part of the image would never have learned anything.
Lastly, you can make your network so deep that the receptive fields of the last layer before the fully connected layers, covers the whole image. In that case, anything in the input image can excite any neuron in the last feature map. | Does the property of equivariance to translation of convolution layers help to learn translation-inv | What causes convolutional neural networks to be somewhat translation invariant is the max pooling. Each neuron has a receptive field in the original image. For example, if you have two convolutional l | Does the property of equivariance to translation of convolution layers help to learn translation-invariant features? [duplicate]
What causes convolutional neural networks to be somewhat translation invariant is the max pooling. Each neuron has a receptive field in the original image. For example, if you have two convolutional layers with stride 1 and one 2x2 max pooling step in between,
That is, input image --> C3x3/1 --> M2x2/2 --> C3x3/1 --> output feature map,
then each neuron in the output feature map sees 8x8 patches in the original image, i.e. has a 8x8 receptive field. That neuron gets excited by stuff that happens anywhere in this 8x8 region (ignoring border effects) because the spatial information was lost in the max pooling step. If you add more max pooling steps to the network you will increase this receptive field.
Typically, in the last few layers, densely connected layers are used, which combine the information from the different receptive fields. There, different regions of the image are connected with different weights, so it does matter where the information came from.
For example, in a face recognition software you might want to abstract the information a bit through max pooling, but not too much, because the information how the different image components (eyes, nose etc.) are spatially related is important.
Or, expanding on the example you gave. Imagine you were to train a network with images of cats and dogs in which the animals only ever appear in the upper left corner. Furthermore, you design the network such that the receptive field of your last feature map before the fully connected layer is a quarter of the input image. Then the classifier would not be able to recognise a cat or a dog in the lower right corner. The weights in the fully connected layer connecting to that part of the image would never have learned anything.
Lastly, you can make your network so deep that the receptive fields of the last layer before the fully connected layers, covers the whole image. In that case, anything in the input image can excite any neuron in the last feature map. | Does the property of equivariance to translation of convolution layers help to learn translation-inv
What causes convolutional neural networks to be somewhat translation invariant is the max pooling. Each neuron has a receptive field in the original image. For example, if you have two convolutional l |
50,448 | Does the property of equivariance to translation of convolution layers help to learn translation-invariant features? [duplicate] | I think the equivariance property does carry across consecutive convolutional layers if you have a chain of convolutions with nothing in between. But in practice you have a ReLU or a pooling layer in between, so the equivariance property doesn't hold across layers.
As for pooling, I think it only helps with small translations in the input, keeping the output fairly constant (in the case of max-pooling, for example) and allowing the layer above to better learn that representation. I don't think pooling helps with large translations, like the one where a cat is moved from one corner to the other. Convolution, rather, would help with that, since it will make that signal more obvious to the later layers by giving a translated and proportionate increase in the output for both pictures.
I think the equivariance property does carry over consecutive convolutional layers if you had a chain of ConvNet without anything in between. But in practice, you have a relu or a pool layer and so that equivariant property doesn't hold across layers.
For Pooling, I think it only helps with small translations in the input, keeping the output fairly constant (in the case of max-pool for example) and allowing the layer above to better learn that representation. I don't think Pooling helps with large translation like the one where a cat is moved from one corner to the other. Convolution rather would help with that, since it will make that signal more obvious to the later layers by giving a translated and proportionate increase in the output, for both pictures. | Does the property of equivariance to translation of convolution layers help to learn translation-inv
I think the equivariance property does carry over consecutive convolutional layers if you had a chain of ConvNet without anything in between. But in practice, you have a relu or a pool layer and so th |
50,449 | Complexity of a random forest with respect to maximum depth | For smaller data sets, as simulated below, the process should be linear. As pointed out by @EngrStudent, it may be an issue of L1, L2 and RAM clock speed: as model complexity increases, the random forest algorithm probably cannot compute the entire tree (or a sub-branch of the tree) within the L1 and/or L2 cache.
I tried to run a similar test with R randomForest, where in fact it seems to be linear. I cannot choose maxdepth in randomForest but only max terminal nodes (maxnodes), but that's effectively the same.
max terminal nodes = $2^{(maxdepth-1)}$.
Notice I plot maxnodes (1,2,4,8,16,32,64) on a log scale, so the corresponding depths (0,1,2,3,4,5,6) are spaced linearly along the x axis. Time consumption appears to increase linearly with depth.
library(randomForest)
library(ggplot2)
set.seed(1)
#make some data
vars=10
obs = 4000
X = data.frame(replicate(vars,rnorm(obs)))
y = with(X, X1+sin(X2*2*pi)+X3*X4)
#wrapper function to time a model
time_model = function(model_function,...) {
  this_time = system.time({this_model_obj = do.call(model_function,list(...))})
  this_time['elapsed']
}
#generate jobs to simulate, jobs are sets of parameters (pars)
fixed_pars = alist(model_function=randomForest,x=X,y=y) #unevaluated to save memory
iter_pars = list(maxnodes=c(1,2,4,8,16,32,64),ntree = c(10,25,50),rep=c(1:5))
iter_pars_matrix = do.call(expand.grid,iter_pars)
#combine fixed and iterative pars and shape as list of jobs
job_list = apply(iter_pars_matrix,1,c,fixed_pars)
#do jobs and collect results in a data.frame
times = sapply(job_list,function(aJob) do.call(time_model,aJob))
r_df = data.frame(times,iter_pars_matrix)
#plot the results
ggplot(r_df, aes (x = maxnodes,y = times,colour = factor(ntree))) +
geom_point() + scale_x_log10() | Complexity of a random forest with respect to maximum depth | For smaller data sets as simulated below the process should be linear. As pointed out by @EngrStudent, it may be an issue of L1, L2 and RAM clock speed. As model complexity increases the random forest | Complexity of a random forest with respect to maximum depth
For smaller data sets as simulated below the process should be linear. As pointed out by @EngrStudent, it may be an issue of L1, L2 and RAM clock speed. As model complexity increases the random forest algorithm probably cannot compute the entire tree(...or sub branch of tree) in L1 and/or L2 cache.
I tried to run a similar test with R randomForest, where in fact it seems to be linear. I cannot choose maxdepth in randomForest but only max terminal nodes (maxnodes), but that's effectively the same.
max terminal nodes = $2^{(maxdepth-1)}$.
Notice I plot maxnodes (1,2,4,8,16,32,64) by a log scale, and then depth (0,1,2,3,4,5,6) is plotted linearly by x axis. Time consumption appear to increase linearly with depth.
library(randomForest)
library(ggplot2)
set.seed(1)
#make some data
vars=10
obs = 4000
X = data.frame(replicate(vars,rnorm(obs)))
y = with(X, X1+sin(X2*2*pi)+X3*X4)
#wrapper function to time a model
time_model = function(model_function,...) {
this_time = system.time({this_model_obj = do.call(model_function,list(...))})
this_time['elapsed']
}
#generate jobs to simulate, jobs are sets of parameters (pars)
fixed_pars = alist(model_function=randomForest,x=X,y=y) #unevaluated to save memory
iter_pars = list(maxnodes=c(1,2,4,8,16,32,64),ntree = c(10,25,50),rep=c(1:5))
iter_pars_matrix = do.call(expand.grid,iter_pars)
#combine fixed and iterative pars and shape as list of jobs
job_list = apply(iter_pars_matrix,1,c,fixed_pars)
#do jobs and collect results in a data.frame
times = sapply(job_list,function(aJob) do.call(time_model,aJob))
r_df = data.frame(times,iter_pars_matrix)
#plot the results
ggplot(r_df, aes (x = maxnodes,y = times,colour = factor(ntree))) +
geom_point() + scale_x_log10() | Complexity of a random forest with respect to maximum depth
For smaller data sets as simulated below the process should be linear. As pointed out by @EngrStudent, it may be an issue of L1, L2 and RAM clock speed. As model complexity increases the random forest |
50,450 | How to calculate a multiple correlation with non-negative constraints on the linear model's parameters? | If I understand this right, you can estimate a multiple regression model with non-negativity restrictions on the coefficients (in R, this can be done with, for instance, the CRAN package nnls), and then use the R-squared from that fit. There might well be some similar functions in Python. | How to calculate a multiple correlation with non-negative constraints on the linear model's paramete | If I understand this right, you can estimate a multiple regression model with non-negativity restrictions on the coefficients (in R, this can be done with, for instance, the CRAN package nnls), and the | How to calculate a multiple correlation with non-negative constraints on the linear model's parameters?
If I understand this right, you can estimate a multiple regression model with non-negativity restrictions on the coefficients (in R, this can be done with, for instance, the CRAN package nnls), and then use the R-squared from that fit. There might well be some similar functions in Python. | How to calculate a multiple correlation with non-negative constraints on the linear model's paramete
If I understand this right, you can estimate a multiple regression model with non-negativity restrictions on the coefficients (in R, this can be done with, for instance, the CRAN package nnls), and the |
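A minimal sketch of that approach with the nnls package (X and y are hypothetical; note that including a column of ones for the intercept also constrains the intercept to be non-negative):
library(nnls)
fit    <- nnls(A = cbind(1, X), b = y)        # non-negative least squares fit
ss_res <- fit$deviance                        # residual sum of squares of the constrained fit
r2     <- 1 - ss_res / sum((y - mean(y))^2)   # R-squared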
50,451 | How to calculate a multiple correlation with non-negative constraints on the linear model's parameters? | Core Answer
Echoing the answer by Kjetil, you could approach this using non-negative least squares followed by calculating the $R^2$ for the fitted model.
In Python you can use scipy.optimize.nnls.
Example 1
Here is an example usage adapted from the documentation:
import numpy as np
from scipy.optimize import nnls
# Make up some data
m, n = 100, 2
X = np.random.normal(size=m*n).reshape(m, n)
theta = np.arange(2) + 1
y = X @ theta
# Fit model (nnls returns the coefficients and the 2-norm of the residual)
params, res_norm = nnls(X, y)
# Compute R^2 from the residual sum of squares
r_squared = 1 - res_norm**2 / np.sum((y - y.mean())**2)
Note that above I did not include an intercept, and as a consequence of that the linear model we trained could have a negative $R^2$ (although not on the idealized data we used in this case).
Example 2
This second example includes an intercept, which is achieved by including a column of ones and an additional parameter.
import numpy as np
from scipy.optimize import nnls
# Make up some data
m, n = 100, 2
X = np.random.normal(size=m*n).reshape(m, n)
X = np.concatenate((X, np.ones(m).reshape(-1, 1)), axis=1)
theta = np.arange(3) + 1
y = X @ theta
# Fit model (nnls returns the coefficients and the 2-norm of the residual)
params, res_norm = nnls(X, y)
# Compute R^2 from the residual sum of squares
r_squared = 1 - res_norm**2 / np.sum((y - y.mean())**2)
Echoing the answer by Kjetil, you could approach this using non-negative least squares followed by calculating the $R^2$ for the fitted model.
In Python you can use scipy.optimize.nnls.
Ex | How to calculate a multiple correlation with non-negative constraints on the linear model's parameters?
Core Answer
Echoing the answer by Kjetil, you could approach this using non-negative least squares followed by calculating the $R^2$ for the fitted model.
In Python you can use scipy.optimize.nnls.
Example 1
Here is an example usage adapted from the documentation:
import numpy as np
from scipy.optimize import nnls
# Make up some data
m, n = 100, 2
X = np.random.normal(size=m*n).reshape(m, n)
theta = np.arange(2) + 1
y = X @ theta
# Fit model (nnls returns the coefficients and the 2-norm of the residual)
params, res_norm = nnls(X, y)
# Compute R^2 from the residual sum of squares
r_squared = 1 - res_norm**2 / np.sum((y - y.mean())**2)
Note that above I did not include an intercept, and as a consequence of that the linear model we trained could have a negative $R^2$ (although not on the idealized data we used in this case).
Example 2
This second example includes an intercept, which is achieved by including a column of ones and an additional parameter.
import numpy as np
from scipy.optimize import nnls
# Make up some data
m, n = 100, 2
X = np.random.normal(size=m*n).reshape(m, n)
X = np.concatenate((X, np.ones(m).reshape(-1, 1)), axis=1)
theta = np.arange(3) + 1
y = X @ theta
# Fit model (nnls returns the coefficients and the 2-norm of the residual)
params, res_norm = nnls(X, y)
# Compute R^2 from the residual sum of squares
r_squared = 1 - res_norm**2 / np.sum((y - y.mean())**2)
Core Answer
Echoing the answer by Kjetil, you could approach this using non-negative least squares followed by calculating the $R^2$ for the fitted model.
In Python you can use scipy.optimize.nnls.
Ex |
50,452 | ADF vs. DF what is the difference between augmented and the standard Dickey-Fuller test? | For future reference, the book referenced in Richard Hardy's comment has the answer.
The book says, and I quote:
The unit root tests described above are valid if the time series $y_t$ is well characterized by an AR(1) with white noise errors. Many financial time series, however, have a more complicated dynamic structure than is captured by a simple AR(1) model. Said and Dickey (1984) augmented the basic autoregressive unit root test to accommodate general ARMA(p, q) models with unknown orders, and their test is referred to as the augmented Dickey-Fuller (ADF) test.
The "test described above" is the standard Dickey-Fuller test.
The above quote is on p. 120 of the book (p. 140 of the PDF). | ADF vs. DF what is the difference between augmented and the standard Dickey-Fuller test?
The book says, and I quote:
The unit root test described above are valid if the time series $y_t$ is well characteri | ADF vs. DF what is the difference between augmented and the standard Dickey-Fuller test?
For future reference, the book referenced in Richard Hardy comment has the answer.
The book says, and I quote:
The unit root test described above are valid if the time series $y_t$ is well characterized by an AR(1) with white noise errors. Many financial time series, however, have a more complicated dynamic structure that is captured by a simple AR(1) model. Said and Dickey (1984) augmented the basic autoregressive unit root test to accommodate general ARMA(p, q) models with unknown orders and their test is referred to as augmented Dickey-Fuller (ADF) test.
The "test described above" is the standard Dickey-Fuller test.
Above quote is available on p. 120 of the book (or 140 of the pdf) | ADF vs. DF what is the difference between augmented and the standard Dickey-Fuller test?
For future reference, the book referenced in Richard Hardy comment has the answer.
The book says, and I quote:
The unit root test described above are valid if the time series $y_t$ is well characteri |
50,453 | How to calculate likelihood for a mixture model with missing data? | A possible completion for your model (as far as I understand it without proper mathematical notations) is the hierarchical structure
Generate index $\iota$ taking value $i$ with probability $\pi_i$
Generate positive integer $m$ from a fixed distribution, e.g., a shifted Poisson $1+\mathcal{P}(1)$
Generate $m$ iid values from $\text{N}(\mu_\iota,\sigma^2)$
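A minimal simulation sketch of these three steps (the values of $\pi$, $\mu$ and $\sigma$ below are arbitrary):
simulate_group <- function(pi, mu, sigma) {
  iota <- sample(seq_along(pi), 1, prob = pi)  # 1. pick a component index with probability pi_i
  m    <- 1 + rpois(1, 1)                      # 2. group size from the shifted Poisson 1 + P(1)
  rnorm(m, mean = mu[iota], sd = sigma)        # 3. m iid draws from N(mu_iota, sigma^2)
}
simulate_group(pi = c(0.3, 0.7), mu = c(-2, 2), sigma = 1)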
If this model is acceptable, it is straightforward to write the EM algorithm for this extension of a standard mixture model. (Note that I picked a shifted Poisson $1+\mathcal{P}(1)$ as an arbitrary choice since it does not matter for inference.) | How to calculate likelihood for a mixture model with missing data? | A possible completion for your model (as far as I understand it without proper mathematical notations) is the hierarchical structure
Generate index $\iota$ taking value $i$ with probability $\pi_i$
G | How to calculate likelihood for a mixture model with missing data?
A possible completion for your model (as far as I understand it without proper mathematical notations) is the hierarchical structure
Generate index $\iota$ taking value $i$ with probability $\pi_i$
Generate positive integer $m$ from a fixed distribution, e.g., a shifted Poisson $1+\mathcal{P}(1)$
Generate $m$ iid values from $\text{N}(\mu_\iota,\sigma^2)$
If this model is acceptable, it is straightforward to write the EM algorithm for this extension of a standard mixture model. (Note that I picked a shifted Poisson $1+\mathcal{P}(1)$ as an arbitrary choice since it does not matter for inference.) | How to calculate likelihood for a mixture model with missing data?
A possible completion for your model (as far as I understand it without proper mathematical notations) is the hierarchical structure
Generate index $\iota$ taking value $i$ with probability $\pi_i$
G |
50,454 | Does LSTM Eliminate Need for Input Lags? | I believe it is actually pretty clear what you mean by the term input lags, but I will state explicitly.
When doing a regression problem with an LSTM, an input signal $ \mathbf{x} \in \mathbb{R}^{n \times t \times c_1 } $ is used to predict another signal $ \mathbf{y} \in \mathbb{R}^{n \times t \times c_2} $. For simplicity I consider $ c_1 = 1, c_2 = 1 $, and I will take one time series $ x[t] $, so it is possible to talk about it in discrete-time-series terms.
An input delay is then the choice of $ \tau \in \mathbb{Z}^{0+} $, which transforms the signal as $ x_{delayed}[t] = x[t - \tau] $, and for $ t < 0 $ we define $ x[t] = 0 $, so the signal is zero-padded at the beginning; but this is merely a choice of signal processing. With similar reasoning it is possible to define the concept of an output lag too.
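A tiny sketch of this zero-padded delay:
delay <- function(x, tau) c(rep(0, tau), head(x, length(x) - tau))  # x_delayed[t] = x[t - tau]
delay(1:5, 2)   # 0 0 1 2 3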
In the case of LSTMs, input lags are typically less of a concern than output lags, in my experience.
This could be checked by considering the problem of training an LSTM to predict the delayed version of itself. Consider the LSTM model equations,
$$
f_t = \sigma_g(W_{f} x_t + U_{f} h_{t-1} + b_f) \\
i_t = \sigma_g(W_{i} x_t + U_{i} h_{t-1} + b_i) \\
o_t = \sigma_g(W_{o} x_t + U_{o} h_{t-1} + b_o) \\
c_t = f_t \circ c_{t-1} + i_t \circ \sigma_c(W_{c} x_t + U_{c} h_{t-1} + b_c) \\
h_t = o_t \circ \sigma_h(c_t)
$$
The goal is then for the algorithm to learn $ h_t = x_{t-\tau} $. We cannot explicitly learn that, but it is possible to learn weights such that $ c_t = f(x_{t-\tau})$. Considering a mapping of $h_2 = x_1$, we could set $ o_2 = 1 , f_2 = 1, i_1 = 1, i_2 = 0, U_c = 0$, and $ W_c, b_c $ could be chosen to constrain the input values in the approximately linear regime of the sigmoid, so $ h_2 = \sigma_h ( \sigma_c ( x_1 )) \approx x_1 $. An additional regression layer might help to scale back the values from the linear regime of the sigmoid to the original scale. So in terms of the model equations, the parameters exist to circumvent the mapping, learnability is a more involved question to answer, depending on the actual optimisation used.
For output lags, this doesn't eliminate the need, however, because an LSTM is a causal model. A BLSTM, as mentioned above, is acausal, so it might be used to circumvent this problem; however, this comes at the cost of sacrificing the causality of your model, i.e. real-time signal processing becomes infeasible. | Does LSTM Eliminate Need for Input Lags? | I believe it is actually pretty clear what you mean by the term input lags, but I will state explicitly.
When doing a regression problem with an LSTM, a input signal $ \mathbf{x} \in \mathbb{R}^{n \ti | Does LSTM Eliminate Need for Input Lags?
I believe it is actually pretty clear what you mean by the term input lags, but I will state explicitly.
When doing a regression problem with an LSTM, a input signal $ \mathbf{x} \in \mathbb{R}^{n \times t \times c_1 } $ is used to predict another signal $ \mathbf{y} \in \mathbb{R}^{n \times t \times c_2} $. For simplicity I consider $ c_1 = 1, c_2 = 1 $, and I will take one time series $ x[t] $ , so it is possible to talk about in discrete time series terms.
An input delay is then the choice of $ \tau \in \mathbb{Z}^{0+} $, which will transform the signal like $ x_{delayed}[t] = x[t - \tau] $, and for $ t < 0 $, we define $ x[t] = 0 $, so the signal is zero-padded at the beginning, but this is merely a choice of signal processing. With similiar reasoning it is possible to define the concept of output lag too.
In case of LSTMs input lags is typically less concern than output lags in my experience.
This could be checked by considering the problem of training an LSTM to predict the delayed version of itself. Consider the LSTM model equations,
$$
f_t = \sigma_g(W_{f} x_t + U_{f} h_{t-1} + b_f) \\
i_t = \sigma_g(W_{i} x_t + U_{i} h_{t-1} + b_i) \\
o_t = \sigma_g(W_{o} x_t + U_{o} h_{t-1} + b_o) \\
c_t = f_t \circ c_{t-1} + i_t \circ \sigma_c(W_{c} x_t + U_{c} h_{t-1} + b_c) \\
h_t = o_t \circ \sigma_h(c_t)
$$
The goal is then for the algorithm to learn $ h_t = x_{t-\tau} $. We cannot explicitly learn that, but it is possible to learn weights such that $ c_t = f(x_{t-\tau})$. Considering a mapping of $h_2 = x_1$, we could set $ o_2 = 1 , f_2 = 1, i_1 = 1, i_2 = 0, U_c = 0$, and $ W_c, b_c $ could be chosen to constrain the input values in the approximately linear regime of the sigmoid, so $ h_2 = \sigma_h ( \sigma_c ( x_1 )) \approx x_1 $. An additional regression layer might help to scale back the values from the linear regime of the sigmoid to the original scale. So in terms of the model equations, the parameters exist to circumvent the mapping, learnability is a more involved question to answer, depending on the actual optimisation used.
For output lags, this doesn't eliminate the need however, because an LSTM is a causal model. BLSTM as mentioned above, is acausal, so it might be used to circumvent this problem, however this comes at the cost of sacrificing causality of your model, i.e. real-time signal processing becomes unfeasible. | Does LSTM Eliminate Need for Input Lags?
I believe it is actually pretty clear what you mean by the term input lags, but I will state explicitly.
When doing a regression problem with an LSTM, a input signal $ \mathbf{x} \in \mathbb{R}^{n \ti |
50,455 | Does LSTM Eliminate Need for Input Lags? | No, it doesn't eliminate that need. Sometimes people use a bi-directional LSTM to get information from both sides of a sample before making a prediction. In that case, you wouldn't have to do an input lag. | Does LSTM Eliminate Need for Input Lags? | No, it doesn't eliminate that need. Sometimes people use a bi-directional LSTM to get information from both sides of a sample before making a prediction. In that case, you wouldn't have to do an input | Does LSTM Eliminate Need for Input Lags?
No, it doesn't eliminate that need. Sometimes people use a bi-directional LSTM to get information from both sides of a sample before making a prediction. In that case, you wouldn't have to do an input lag. | Does LSTM Eliminate Need for Input Lags?
No, it doesn't eliminate that need. Sometimes people use a bi-directional LSTM to get information from both sides of a sample before making a prediction. In that case, you wouldn't have to do an input |
50,456 | Neural networks bounded output | A trick for bounded output range is to scale the target values between (0,1) and use sigmoid output + binary cross-entropy loss.
This is often used for image data, where all the pixel values are between (0,255).
Say $a=wh+b$ is the activation of the last layer,
for sigmoid output + binary cross-entropy loss
$$E(a,t')=-\left[t'\log\sigma(a)+(1-t')\log(1-\sigma(a))\right],\quad \frac{\partial E}{\partial a}=\sigma(a)-t'$$
where $t'$ is the scaled target value. The derivative wrt $a$ is just prediction - target, which is somewhat similar to the derivative of using unbounded activations + MSE. | Neural networks bounded output | A trick for bounded output range is to scale the target values between (0,1) and use sigmoid output + binary cross-entropy loss.
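As a quick numeric sketch of the above in R (the names here are illustrative only; a is the last-layer activation and t_scaled the target after scaling into (0,1)):
sigmoid  <- function(a) 1 / (1 + exp(-a))
bce_loss <- function(a, t_scaled) {
  p <- sigmoid(a)
  -(t_scaled * log(p) + (1 - t_scaled) * log(1 - p))    # binary cross-entropy
}
bce_grad <- function(a, t_scaled) sigmoid(a) - t_scaled  # prediction minus target
t_scaled <- 200 / 255   # e.g. a pixel value scaled from 0..255 into (0,1)
a <- 0.3
c(loss = bce_loss(a, t_scaled), grad = bce_grad(a, t_scaled))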
This is often used for image data, where all the pixel values are betwe | Neural networks bounded output
A trick for bounded output range is to scale the target values between (0,1) and use sigmoid output + binary cross-entropy loss.
This is often used for image data, where all the pixel values are between (0,255).
Say $a=wh+b$ is the activation of the last layer,
for sigmoid output + binary cross-entropy loss
$$E(a,t')=t'\log\sigma(a)+(1-t')\log(1-\sigma(a)),\quad \frac{\partial E}{\partial a}=\sigma(a)-t'$$
where $t'$ is the scaled target value. The derivative wrt $a$ is just prediction - target, which is somewhat similar to the derivative of using unbounded activations + MSE. | Neural networks bounded output
A trick for bounded output range is to scale the target values between (0,1) and use sigmoid output + binary cross-entropy loss.
This is often used for image data, where all the pixel values are betwe |
50,457 | Standardizing skewed distributions for visualisation alongside others | It may help if you can provide examples of what you've got and what you're going for.
For unknown distributions, it's hard to beat box plots. Not that they're perfect, but they are well known by technical audiences and able to show skewness and outliers. Here's an example with 20 mostly skewed distributions of 1000 data points each, standardized to mean = 0 and std = 1.
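A minimal R sketch of that kind of display (the skewed samples here are simulated log-normals; nothing is assumed about your actual data):
set.seed(1)
samples <- lapply(1:20, function(i) rlnorm(1000, meanlog = 0, sdlog = runif(1, 0.3, 1)))
standardized <- lapply(samples, function(x) as.numeric(scale(x)))  # mean 0, sd 1 per group
boxplot(standardized, xlab = "group", ylab = "standardized value")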
It sounds like your distributions may be log normal (like 11-14 in the plot). If you find that or another distribution that fits well, you could fit the parameters and plot only those. | Standardizing skewed distributions for visualisation alongside others | It may help if you can provide examples of what you've got and what you're going for.
For unknown distributions, it's hard to beat box plots. Not that they're perfect, but they are well known by techn | Standardizing skewed distributions for visualisation alongside others
It may help if you can provide examples of what you've got and what you're going for.
For unknown distributions, it's hard to beat box plots. Not that they're perfect, but they are well known by technical audiences and able to show skewness and outliers. Here's an example with 20 mostly skewed distributions of 1000 data points each, standardized to mean = 0 and std = 1.
It sounds like your distributions may be log normal (like 11-14 in the plot). If you find that or another distribution that fits well, you could fit the parameters and plot only those. | Standardizing skewed distributions for visualisation alongside others
It may help if you can provide examples of what you've got and what you're going for.
For unknown distributions, it's hard to beat box plots. Not that they're perfect, but they are well known by techn |
50,458 | How can I approximate the median with a linear function? | We are looking to find $N$ constrained $w_i$ with $\sum_{i=1}^N w_i=1$ which minimize
$$E\left[\left(\sum_{i=1}^N w_i X_i - \text{median}(X)\right)^{\!2\ }\right]$$
Equivalently, we are looking to find $N-1$ unconstrained $w_i$ which minimize
$$E\left[\left(\sum_{i=1}^{N-1} w_i(X_i-X_N) + X_N - \text{median}(X)\right)^{\!2\ }\right]$$
Taking the derivative with respect to $w_j$ gives
$$E\left[2\left(\sum_{i=1}^{N-1} w_i(X_i-X_N) + X_N - \text{median}(X)\right)(X_j-X_N)\right]=0$$
Or equivalently
$$\sum_{i=1}^{N-1}E\Big[(X_i-X_N)(X_j-X_N)\Big]w_i=
E\Big[(\text{median}(X)-X_N)(X_j-X_N)\Big]$$
We can put these equations in the matrix form $M w = C$ (where $M$ is square and $M$, $w$ and $C$ all have $N-1$ rows), and solve them as $w = M^{-1}C$.
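When closed forms are not available, $M$ and $C$ can be approximated by simulation and the system solved numerically. A rough R sketch of that idea (the particular $\mu$ and $\Sigma$ below are arbitrary illustrations):
library(MASS)
set.seed(1)
N <- 4
mu <- c(1, 2, 3, 4)
Sigma <- diag(N)
X <- mvrnorm(1e5, mu, Sigma)
med <- apply(X, 1, median)
D <- X[, 1:(N - 1)] - X[, N]            # columns X_i - X_N, i = 1..N-1
M <- crossprod(D) / nrow(X)             # Monte Carlo estimate of E[(X_i - X_N)(X_j - X_N)]
C <- colMeans(D * (med - X[, N]))       # Monte Carlo estimate of E[(median(X) - X_N)(X_j - X_N)]
w_partial <- solve(M, C)
w <- c(w_partial, 1 - sum(w_partial))   # recover w_N from the sum-to-one constraint
w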
If $X\sim N(\mu,\Sigma)$, then calculating $M$ is tedious, but each component has a formula in terms of $\mu$ and $\Sigma$. More numerical effort is required for calculating $M^{-1}$, and for calculating $C$: I think there are no closed formulas for the components of $C$ even in the simple case when $X$ is normal. | How can I approximate the median with a linear function? | We are looking to find $N$ constrained $w_i$ with $\sum_{i=1}^n w_i=1$ which minimize
$$E\left[\left(\sum_{i=1}^N w_i X_i - \text{median}(X)\right)^{\!2\ }\right]$$
Equivalently, we are looking to fin | How can I approximate the median with a linear function?
We are looking to find $N$ constrained $w_i$ with $\sum_{i=1}^n w_i=1$ which minimize
$$E\left[\left(\sum_{i=1}^N w_i X_i - \text{median}(X)\right)^{\!2\ }\right]$$
Equivalently, we are looking to find $N-1$ unconstrained $w_i$ which minimize
$$E\left[\left(\sum_{i=1}^{N-1} w_i(X_i-X_N) + X_N - \text{median}(X)\right)^{\!2\ }\right]$$
Taking the derivative with respect to $w_j$ gives
$$E\left[2\left(\sum_{i=1}^{N-1} w_i(X_i-X_N) + X_N - \text{median}(X)\right)(X_j-X_N)\right]=0$$
Or equivalently
$$\sum_{i=1}^{N-1}E\Big[(X_i-X_N)(X_j-X_N)\Big]w_i=
E\Big[(\text{median}(X)-X_N)(X_j-X_N)\Big]$$
We can put these equations in the matrix form $M w = C$ (where $M$ is square and $M$, $w$ and $C$ all have $N-1$ rows), and solve them as $w = M^{-1}C$.
If $X\sim N(\mu,\Sigma)$, then calculating $M$ is tedious, but each component has a formula in terms of $\mu$ and $\Sigma$. More numerical effort is required for calculating $M^{-1}$, and for calculating $C$: I think there are no closed formulas for the components of $C$ even in the simple case when $X$ is normal. | How can I approximate the median with a linear function?
We are looking to find $N$ constrained $w_i$ with $\sum_{i=1}^n w_i=1$ which minimize
$$E\left[\left(\sum_{i=1}^N w_i X_i - \text{median}(X)\right)^{\!2\ }\right]$$
Equivalently, we are looking to fin |
50,459 | How can I approximate the median with a linear function? | This is work towards an answer, too long for a comment:
One precise version of this question is: What vector of weights $w$ makes $w\cdot X$ the best estimate of the median of $X$, where $X$ is a normally distributed $n$-dimensional vector with mean $\mu$ and covariance matrix $\Sigma$?
This is different from asking for the vector $w$ for estimating the mean. E.g. suppose $X_1$, $X_2$, $X_3$ are independent and $N(10,1)$, $N(100,1)$ and $N(1000,1)$ respectively. Then the $w$ for estimating (and for exactly calculating) the mean of $X$ is
$(\frac13, \frac13, \frac13)$. Meanwhile, the $w$ for estimating the median is close to or exactly $(0,1,0)$.
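That claim is easy to check by simulation in R (a quick sketch using the distributions above):
set.seed(1)
X <- cbind(rnorm(1e5, 10), rnorm(1e5, 100), rnorm(1e5, 1000))
med <- apply(X, 1, median)        # the row-wise median is essentially X2 here
mean((rowMeans(X) - med)^2)       # large squared error for w = (1/3, 1/3, 1/3)
mean((X[, 2] - med)^2)            # essentially zero for w = (0, 1, 0)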
The original question is more general than the normal case, but the normal case seems challenging already. | How can I approximate the median with a linear function? | This is work towards an answer, too long for a comment:
One precise version of this question is: What vector of weights $w$ makes $w\cdot X$ the best estimate of the median of $X$, where $X$ is a norm | How can I approximate the median with a linear function?
This is work towards an answer, too long for a comment:
One precise version of this question is: What vector of weights $w$ makes $w\cdot X$ the best estimate of the median of $X$, where $X$ is a normally distributed $n$-dimensional vector with mean $\mu$ and covariance matrix $\Sigma$?
This is different from asking for the vector $w$ for estimating the mean. E.g. suppose $X_1$, $X_2$, $X_3$ are independent and $N(10,1)$, $N(100,1)$ and $N(1000,1)$ respectively. Then the $w$ for estimating (and for exactly calculating) the mean of $X$ is
$(\frac13, \frac13, \frac13)$. Meanwhile, the $w$ for estimating the median is close to or exactly $(0,1,0)$.
The original question is more general than the normal case, but the normal case seems challenging already. | How can I approximate the median with a linear function?
This is work towards an answer, too long for a comment:
One precise version of this question is: What vector of weights $w$ makes $w\cdot X$ the best estimate of the median of $X$, where $X$ is a norm |
50,460 | gbm could make prediction out of thin air? | This is due to the default argument, bag.fraction=0.5. At this setting, for each tree a random 50% of the data is used for fitting. In your code, pred is the mean response for the 50% of rows that were chosen for the 100th tree. If you set bag.fraction=1 the mean prediction equals the mean response:
bm <- gbm(y~., data=df, distribution="gaussian", n.trees=100, cv.folds=10, bag.fraction=1) | gbm could make prediction out of thin air? | This is due to the default argument, bag.fraction=0.5. At this setting, for each tree a random 50% of the data is used for fitting. In your code, pred is the mean response for the 50% of rows that wer | gbm could make prediction out of thin air?
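With bag.fraction=1 the claim can be checked directly on the df and y from the question (a quick sketch; exact numbers depend on the simulated data):
pred <- predict(bm, newdata = df, n.trees = 100)
mean(pred)    # with bag.fraction = 1 this matches ...
mean(df$y)    # ... the mean of the response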
This is due to the default argument, bag.fraction=0.5. At this setting, for each tree a random 50% of the data is used for fitting. In your code, pred is the mean response for the 50% of rows that were chosen for the 100th tree. If you set bag.fraction=1 the mean prediction equals the mean response:
bm <- gbm(y~., data=df, distribution="gaussian", n.trees=100, cv.folds=10, bag.fraction=1) | gbm could make prediction out of thin air?
This is due to the default argument, bag.fraction=0.5. At this setting, for each tree a random 50% of the data is used for fitting. In your code, pred is the mean response for the 50% of rows that wer |
50,461 | How to classify temporal disease data | I'd encourage you to start with simple classification techniques. Typically a logistic regression. Very simple, runs fast, many examples in R online.
A simple decision tree is also an option (don't go for random forests if you don't have to).
Start with these, as they are simple to understand and have good implementations in R.
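For instance, a baseline logistic regression in R is essentially a one-liner (a sketch only; the data frame patients and its columns are purely hypothetical placeholders):
fit <- glm(disease ~ age + sex + biomarker, data = patients, family = binomial)
summary(fit)
predict(fit, newdata = patients, type = "response")   # predicted probabilities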
That will give you the capacity to classify a new patient. If you then want to predict changes of states and things like that, then it gets more complicated and other techniques are required - but only with properly formulated questions. | How to classify temporal disease data | I'd encourage you to start with simple classification techniques. Typically a logistic regression. Very simple, runs fast, many examples in R online.
A simple decision tree is also an option (don't g | How to classify temporal disease data
I'd encourage you to start with simple classification techniques. Typically a logistic regression. Very simple, runs fast, many examples in R online.
A simple decision tree is also an option (don't go for random forests if you don't have to).
Start with these, as they are simple to understand and have good implementations in R.
That will give you the capacity to classify a new patient. If you then want to predict changes of states and things like that, then it gets more complicated and other techniques are required - but only with properly formulated questions. | How to classify temporal disease data
I'd encourage you to start with simple classification techniques. Typically a logistic regression. Very simple, runs fast, many examples in R online.
A simple decision tree is also an option (don't g |
50,462 | How to classify temporal disease data |
Python scikit-learn allows for analysis like the one you are looking for: scikit-learn RBF SVM | How to classify temporal disease data | Want to improve this post? Provide detailed answers to this question, including citations and an explanation of why your answer is correct. Answers without enough detail may be edited or deleted.
| How to classify temporal disease data
Want to improve this post? Provide detailed answers to this question, including citations and an explanation of why your answer is correct. Answers without enough detail may be edited or deleted.
Python scikit-learn allows for analysis like the one you are looking for: scikit-learn RBF SVM | How to classify temporal disease data
Want to improve this post? Provide detailed answers to this question, including citations and an explanation of why your answer is correct. Answers without enough detail may be edited or deleted.
|
50,463 | Looking for a stochastic curve fitting method | You can use the geom_smooth() function of the ggplot2 library in R.
Here is an example:
x = seq(0, 100, by=0.5)
y = sqrt(x)
y = y + rnorm(n = length(y),mean = 0,sd = 3)
df = data.frame(cbind(x,y))
require(ggplot2)
ggplot(data = df, aes(x = x,y = y)) + geom_smooth()
This is the output:
geom_smooth calls a curve fitting method called loess, which does local regression. With the default options, for every x, it considers a neighborhood containing 75% of the data points and fits a quadratic using weighted least squares. It assumes the errors are normally distributed and computes confidence intervals as described in pages 44-46 of http://www.netlib.org/a/cloess.pdf
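If you want the fitted values and standard errors outside of ggplot2, the same kind of fit can be produced directly with loess() and predict() (a small sketch reusing the x, y and df objects defined above):
fit <- loess(y ~ x, data = df)                  # defaults: span = 0.75, degree = 2
pr  <- predict(fit, se = TRUE)
plot(df$x, df$y)
lines(df$x, pr$fit, col = "blue")
lines(df$x, pr$fit + 2 * pr$se.fit, lty = 2)    # rough ~95% band
lines(df$x, pr$fit - 2 * pr$se.fit, lty = 2)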
Here is the documentation for loess:
https://stat.ethz.ch/R-manual/R-devel/library/stats/html/loess.html | Looking for a stochastic curve fitting method | You can use the geom_smooth() function of the ggplot2 library in R.
Here is an example:
x = seq(0, 100, by=0.5)
y = sqrt(x)
y = y + rnorm(n = length(y),mean = 0,sd = 3)
df = data.frame(cbind(x,y))
re | Looking for a stochastic curve fitting method
You can use the geom_smooth() function of the ggplot2 library in R.
Here is an example:
x = seq(0, 100, by=0.5)
y = sqrt(x)
y = y + rnorm(n = length(y),mean = 0,sd = 3)
df = data.frame(cbind(x,y))
require(ggplot2)
ggplot(data = df, aes(x = x,y = y)) + geom_smooth()
This is the output:
geom_smooth calls a curve fitting method called loess, which does local regression. With the default options, for every x, it considers a neighborhood containing 75% of the data points and fits a quadratic using weighted least squares. It assumes the errors are normally distributed and computes confidence intervals as described in pages 44-46 of http://www.netlib.org/a/cloess.pdf
Here is the documentation for loess:
https://stat.ethz.ch/R-manual/R-devel/library/stats/html/loess.html | Looking for a stochastic curve fitting method
You can use the geom_smooth() function of the ggplot2 library in R.
Here is an example:
x = seq(0, 100, by=0.5)
y = sqrt(x)
y = y + rnorm(n = length(y),mean = 0,sd = 3)
df = data.frame(cbind(x,y))
re |
50,464 | Variance-Covariance Matrix for $l_1$ regularized binomial logistic regression | (This answer is more of a comment than a full answer, but I'm posting it here since I don't have enough rep to comment.)
This is a very hard question to give a good answer to. Even in the non-penalized case, the covariance estimate for the parameters is based on a normal approximation. When you start penalizing, you also enter the realm of "post-selection inference", which is an active area of research. The work on post-selection inference for GLMs (including logistic regression) is in its infancy, but see [1] for a recent reference on it. I believe the method described in this paper is implemented in the selectiveInference R package [2].
Even ignoring the mathematical difficulty, there are philosophical difficulties inherent in your question. Covariance matrices of estimators are tied to coverage under repeated sampling in the frequentist framework. If you had a new sample, you're not guaranteed to select the same variables, so can we even define "coverage" in a sensible way? There are many different (valid) ways to define coverage, each of which gives rise to a different school of thought about how post-selection inference should be defined and performed.
[1] J. Taylor, R. Tibshirani. "Post-selection inference for L1-penalized likelihood models." Canadian Journal of Statistics (to appear).
http://doi.org/10.1002/cjs.11313
[2] https://cran.r-project.org/package=selectiveInference | Variance-Covariance Matrix for $l_1$ regularized binomial logistic regression | (This answer is more of a comment than a full answer, but I'm posting it here since I don't have enough rep to comment.)
This is a very hard question to give an good answer to. Even in the non-penaliz | Variance-Covariance Matrix for $l_1$ regularized binomial logistic regression
(This answer is more of a comment than a full answer, but I'm posting it here since I don't have enough rep to comment.)
This is a very hard question to give an good answer to. Even in the non-penalized case, the covariance estimate for the parameters is based on a normal approximation. When you start penalizing, you also enter the realm of "post-selection inference" which is an active area of research. The work on post-selection inference for GLMs (including logistic regression) is in its infancy, but see [1] for a recent reference on it. I believe the method described in this paper is implemented in the selectiveInference R package [2].
Even ignoring the mathematical difficulty, there are philosophical difficulties inherent in your question. Covariance matrices of estimators are tied to coverage under repeated sampling in the frequentist framework. If you had a new sample, you're not guaranteed to select the same variables, so can we even define "coverage" in a sensible way? There are many different (valid) ways to define coverage, each of which gives rise to a different school of thought about how post-selection inference should be defined and performed.
[1] J. Taylor, R. Tibshirani. "Post-selection inference for L1-penalized likelihood models." Canadian Journal of Statistics (to appear).
http://doi.org/10.1002/cjs.11313
[2] https://cran.r-project.org/package=selectiveInference | Variance-Covariance Matrix for $l_1$ regularized binomial logistic regression
(This answer is more of a comment than a full answer, but I'm posting it here since I don't have enough rep to comment.)
This is a very hard question to give an good answer to. Even in the non-penaliz |
50,465 | Interpreting QLIKE and MSE Loss function (Patton 2011) | Just noticed this question is over two years old, but this answer might prove useful for future readers.
Check Figure 3 of the paper you referenced in the comments, or Figure 1 of the version of the paper that was published in the JoE. This figure demonstrates the shape of several different loss functions, including both MSE and QLIKE.
Importantly, you'll note that MSE and QLIKE have a very different shape, so you can expect the results from using the two loss functions to be quite different. Your assertion in the comments that MSE is more sensitive to outliers is mostly correct, except for the far left tail of the loss function, where QLIKE is actually more sensitive to outliers.
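For reference, both losses are simple to compute; a hedged R sketch (here sigma2 stands for a volatility proxy and h for the forecast, following the usual statement of the losses in Patton (2011)):
mse_loss   <- function(sigma2, h) (sigma2 - h)^2
qlike_loss <- function(sigma2, h) sigma2 / h - log(sigma2 / h) - 1
qlike_loss(1, 0.5); qlike_loss(1, 2)   # QLIKE penalizes an under-prediction more than an over-prediction
mse_loss(1, 0.5);   mse_loss(1, 2)     # MSE treats both errors the same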
More generally, what you're running up against is one of the core problems of this area of the literature. Volatility processes in asset returns tend to be dominated by a small number of very large observations. This means that loss functions such as the MSE are not good at rejecting null hypotheses, since the analysis is typically dominated by a small number of large observations that get emphasized by the non-robust nature of the MSE loss function.
QLIKE partially solves this problem by being robust to extreme observations in the right tail, but unfortunately it is not particularly robust in the left tail. Further, QLIKE is not a symmetric loss function (in fact MSE is the only symmetric loss function in the class discussed by Patton), so it penalizes positive and negative forecast errors differently. This means that if you are comparing two forecast procedures, one of which on average produces positively biased forecasts, and the other on average produces negatively biased forecasts of the same magnitude, then using QLIKE will massively favour the forecast with positive bias. So unless you have some reason to particularly prefer bias of one kind over another, QLIKE must be regarded as an imperfect solution to the problem.
So what is the solution? There may not be one, unless you're willing to make some further structural assumptions about the true data-generating process behind your model. | Interpreting QLIKE and MSE Loss function (Patton 2011) | Just noticed this question is over two years old, but this answer might prove useful for future readers.
Check Figure 3 of the paper you referenced in the comments, or Figure 1 of the version of the p | Interpreting QLIKE and MSE Loss function (Patton 2011)
Just noticed this question is over two years old, but this answer might prove useful for future readers.
Check Figure 3 of the paper you referenced in the comments, or Figure 1 of the version of the paper that was published in the JoE. This figure demonstrates the shape of several different loss functions, including both MSE and QLIKE.
Importantly, you'll note that MSE and QLIKE have a very different shape, so you can expect the results from using the two loss functions to be quite different. Your assertion in the comments that MSE is more sensitive to outliers is mostly correct, except for the far left tail of the loss function, where QLIKE is actually more sensitive to outliers.
More generally, what you're running up against is one of the core problems of this area of the literature. Volatility processes in asset returns tends to be dominated by a small number of very large observations. This means that loss functions such as the MSE are not good at rejecting null hypotheses, since the analysis is typically dominated by a small number of large observations that get emphasized by the non-robust nature of the MSE loss function.
QLIKE partially solves this problem by being robust to extreme observations in the right tail, but unfortunately it is not particularly robust in the left tail. Further, since QLIKE is not a symmetric loss function (in fact MSE is the only symmetric loss function in the class discussed by Patton), so it penalizes positive and negative loss differently. This means that if you are comparing two forecast procedures, one of which on average produces positively biased forecasts, and the other on average produces negative biased forecasts where the bias is of the same magnitude, then using QLIKE will massively favour the forecast with positive bias. So unless you have some reason to particularly prefer bias of one kind over another, QLIKE must be regarded as an imperfect solution to the problem.
So what is the solution? There may not be one, unless you're willing to make some further structural assumptions about the true data-generating process behind your model. | Interpreting QLIKE and MSE Loss function (Patton 2011)
Just noticed this question is over two years old, but this answer might prove useful for future readers.
Check Figure 3 of the paper you referenced in the comments, or Figure 1 of the version of the p |
50,466 | How to make sense of non-linear data transformations? What conclusions drawn can you apply to original data? | This question is similar to: Interpretation of log transformed predictor. I recommend looking at the answer by jthetzel (profile: https://stats.stackexchange.com/users/2981/jthetzel), who summarized the effects of multiple well-known transformations and their meanings (and posted great links).
It should be noted that most transformations quickly become difficult to understand once you leave the basic transformations (i.e. $log(x)$ or $e^x$), and be careful when trying to make conclusions when you have used a transformation. Some effects of transformations on data are mentioned here: http://pareonline.net/getvn.asp?v=8&n=6, which briefly mentions things like changes in the properties of the data, statistical procedure and conclusion issues. It would be wise to consult with a trained mathematician/statistician to determine the effects and implications of a transformation. | How to make sense of non-linear data transformations? What conclusions drawn can you apply to origin | This question is a similar to: Interpretation of log transformed predictor. I recommend looking at the answer by jthetzel (profile: https://stats.stackexchange.com/users/2981/jthetzel) who summarized | How to make sense of non-linear data transformations? What conclusions drawn can you apply to original data?
This question is a similar to: Interpretation of log transformed predictor. I recommend looking at the answer by jthetzel (profile: https://stats.stackexchange.com/users/2981/jthetzel) who summarized the effects of multiple well known transformations and their meanings (and posted great links).
It should be noted that most transformations quickly become difficult to understand once you leave the basic transformations (i.e. $log(x)$ or $e^x$), and be careful when trying to make conclusions when you have used a transformation. Some effects of transformations on data are mentioned here: http://pareonline.net/getvn.asp?v=8&n=6, which briefly mentions things like changes in the properties of the data, statistical procedure and conclusion issues. It would be wise to consult with a trained mathematician/statistician to determine the effects and implications of a transformation. | How to make sense of non-linear data transformations? What conclusions drawn can you apply to origin
This question is a similar to: Interpretation of log transformed predictor. I recommend looking at the answer by jthetzel (profile: https://stats.stackexchange.com/users/2981/jthetzel) who summarized |
50,467 | Sensitivity Analysis for Missing Not at Random (MNAR) data | I'm currently dealing with that same problem too.
I have a data set with 70 covariates and a lot of them have missing values. Most of them are definitely MNAR.
One great paper I found is this one.
http://journals.lww.com/epidem/Fulltext/2011/03000/Sensitivity_Analysis_When_Data_Are_Missing.25.aspx
They also perform a sensitivity analysis with SensMice and have great examples.
Did you make any progress in your analysis yet?
I am still struggling with the interpretation of a sensitivity analysis with mice.
I mean it's a lot of work to do a sensitivity analysis for one variable, and I have 70 variables....
Can I just ask one thing, for clarification?
You do a sensitivity analysis because you want to see what happens if you slowly drift from the MAR assumption to MNAR, right?
So you change, step by step, the parameters of the assumed distribution...
But in the end you won't be able to find the "best" parameters; you just know at what values you must be careful when you interpret the parameters of your prediction model (since you used the MAR assumption).
In the end you still use MICE under MAR assumption....
I've been struggling with this topic for a while now, and I'm really not sure if I understood it right... Maybe you can help me out? | Sensitivity Analysis for Missing Not at Random (MNAR) data | I'm currently dealing with that same problem too.
I have a data set with 70 kovariables and a lot of them have missing values. Most of them are definitely MNAR.
One great paper i found is this one.
ht | Sensitivity Analysis for Missing Not at Random (MNAR) data
I'm currently dealing with that same problem too.
I have a data set with 70 kovariables and a lot of them have missing values. Most of them are definitely MNAR.
One great paper i found is this one.
http://journals.lww.com/epidem/Fulltext/2011/03000/Sensitivity_Analysis_When_Data_Are_Missing.25.aspx
they also perform a sensitivy analalysis witn SensMice and have great examples.
Did you made any progress in your analysis yet?
I am still struggling with the interpretation of a sensitivity analysis with mice.
I mean it's a lot of work to do a sensitivity analysis for one variable, and i have 70 variables....
Can i just ask one thing just for clarification?
You do a sensitivity analysis because you want to see what happens if you slowly drift from the MAR assumption to MNAR right?
So you change step by step the parameters from the assumpted distribution...
But in the end you won't be able to find the "best" parameters, you just know at what value you must be careful when you interprate the parameters of your prediction model ( since you used MAR assumption).
In the end you still use MICE under MAR assumption....
I'm struggling with this topic for a while now, and I'm really not sure if i understood it right....Maybe you can help me out? | Sensitivity Analysis for Missing Not at Random (MNAR) data
I'm currently dealing with that same problem too.
I have a data set with 70 kovariables and a lot of them have missing values. Most of them are definitely MNAR.
One great paper i found is this one.
ht |
50,468 | Help Deriving Variance Function - Binomial GLM | I think I figured it out. It's important to mention that the discussion in Faraway was in the context of IRWLS.
First of all, we can use either the variance of the response or the variance function in our IRWLS implementation. It just represents a scale change: $Var(Y)=V(\mu)a(\phi)$ where $a(\phi)$ is just some constant. So I think Faraway is actually using the Variance of Y.
Second, Faraway was using R, which actually takes the sample proportion, $\bar{Y}=Y/n$, when it fits the model:
If a binomial glm model was specified by giving a two-column response, the weights returned by prior.weights are the total numbers of cases (factored by the supplied case weights) and the component y of the result is the proportion of successes.
So even though $Y\sim Bin(n,\mu)$, the response $\bar{Y}$ has variance $Var(\bar{Y})=\frac{1}{n^2}Var(Y)=\frac{\mu(1-\mu)}{n}$
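The equivalence is easy to see on the R side by fitting the same model both ways (a small sketch with made-up counts):
n  <- c(20, 25, 30, 35)
y  <- c(5, 9, 14, 20)
x  <- 1:4
fit1 <- glm(cbind(y, n - y) ~ x, family = binomial)          # two-column response
fit2 <- glm(I(y / n) ~ x, family = binomial, weights = n)    # proportions with prior weights
cbind(coef(fit1), coef(fit2))                                # identical coefficients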
So until I'm told otherwise, I will assume that his writing $V(\mu)$ above was either an error or done to gloss over something. It's the variance of the response. | Help Deriving Variance Function - Binomial GLM | I think I figured it out. It's important to mention that the discussion in Faraway was in the context of IRWLS.
First of all, we can use either the variance of the response or the variance function | Help Deriving Variance Function - Binomial GLM
I think I figured it out. It's important to mention that the discussion in Faraway was in the context of IRWLS.
First of all, we can use either the variance of the response or the variance function in our IRWLS implementation. It just represents a scale change: $Var(Y)=V(\mu)a(\phi)$ where $a(\phi)$ is just some constant. So I think Faraway is actually using the Variance of Y.
Second, Faraway was using R, which actually takes the sample proportion, $\bar{Y}=Y/n$, when it fits the model:
If a binomial glm model was specified by giving a two-column response, the weights returned by prior.weights are the total numbers of cases (factored by the supplied case weights) and the component y of the result is the proportion of successes.
So even though $Y\sim Bin(n,\mu)$, the response $\bar{Y}$ has variance $Var(\bar{Y})=\frac{1}{n^2}Var(Y)=\frac{\mu(1-\mu)}{n}$
So until I'm told otherwise, I will assume that his writing $V(\mu)$ above was either an error or done to gloss over something. It's the variance of the response. | Help Deriving Variance Function - Binomial GLM
I think I figured it out. It's important to mention that the discussion in Faraway was in the context of IRWLS.
First of all, we can use either the variance of the response or the variance function |
50,469 | Help Deriving Variance Function - Binomial GLM | There is nothing wrong in your derivation; however, the variance function for the exponential family written in your form is
$$V(\mu)=a(\phi)b^{''}(\theta)$$
Which is exactly $\frac{\mu(1-\mu)}{n}$. | Help Deriving Variance Function - Binomial GLM | There is nothing wrong in your derivation, however, the variance function for exponential family written in your form is
$$V(\mu)=a(\phi)b^{''}(\theta)$$
Which is exactly $\frac{\mu(1-\mu)}{n}$. | Help Deriving Variance Function - Binomial GLM
There is nothing wrong in your derivation, however, the variance function for exponential family written in your form is
$$V(\mu)=a(\phi)b^{''}(\theta)$$
Which is exactly $\frac{\mu(1-\mu)}{n}$. | Help Deriving Variance Function - Binomial GLM
There is nothing wrong in your derivation, however, the variance function for exponential family written in your form is
$$V(\mu)=a(\phi)b^{''}(\theta)$$
Which is exactly $\frac{\mu(1-\mu)}{n}$. |
50,470 | Does JAGS have an R front end like brms for Stan? [closed] | Based on your last comment ("I'm hoping for a runtime translation of R-formula syntax into JAGS model specification"), I think runjags::template.jags does what you want (at least partly). It automatically generates a complete JAGS model (and data) representation of a (G)L(M)M based on lme4-style syntax and a data frame supplied by the user. For example:
library('runjags')
# Use an example from glmer:
library('lme4')
fitdata <- cbpp
fitdata$Resp <- cbind(fitdata$incidence, fitdata$size - fitdata$incidence)
# As in ?glmer:
gm1a <- glmer(Resp ~ period + (1 | herd), fitdata, binomial)
# Create (and display) the JAGS code:
mf <- template.jags(Resp ~ period + (1 | herd), fitdata, n.chains=2, family='binomial')
cat(readLines(mf),sep='\n')
r <- run.jags(mf, burnin=5000, sample=10000)
r
summary(gm1a)
There are two obvious things missing: random slopes are not yet supported (but that is something I am looking to add), and non-linear models are not directly supported (but a linear model could be generated and then edited by the user). Note that it's not possible to include arbitrary R functions in the JAGS code, so these will have to be re-written in JAGS (or in C++ as a JAGS module).
To repeat my earlier comment, the motivation is to help the user write their own code rather than doing all the work without the user having to think about or understand anything. To clarify: I see the benefit of helping a reasonably knowledgable user to quickly generate code which otherwise might be tedious to write (particularly if they struggle with BUGS syntax despite understanding the theory of MCMC), but I am uncomfortable with the idea of truly novice users using the automatically generated code without understanding what is happening (i.e. as a totally black box). But perhaps I am being overly cautious ... I would be very interested to hear others' opinions (privately by email to the maintainer of the runjags package if preferred) as I have not yet decided how far to develop these functions, and would certainly take on board useful comments and suggestions. | Does JAGS have an R front end like brms for Stan? [closed] | Based on your last comment ("I'm hoping for a runtime translation of R-formula syntax into JAGS model specification"), I think runjags::template.jags does what you want (at least partly). It automati | Does JAGS have an R front end like brms for Stan? [closed]
Based on your last comment ("I'm hoping for a runtime translation of R-formula syntax into JAGS model specification"), I think runjags::template.jags does what you want (at least partly). It automatically generates a complete JAGS model (and data) representation of a (G)L(M)M based on lme4-style syntax and a data frame supplied by the user. For example:
library('runjags')
# Use an example from glmer:
library('lme4')
fitdata <- cbpp
fitdata$Resp <- cbind(fitdata$incidence, fitdata$size - fitdata$incidence)
# As in ?glmer:
gm1a <- glmer(Resp ~ period + (1 | herd), fitdata, binomial)
# Create (and display) the JAGS code:
mf <- template.jags(Resp ~ period + (1 | herd), fitdata, n.chains=2, family='binomial')
cat(readLines(mf),sep='\n')
r <- run.jags(mf, burnin=5000, sample=10000)
r
summary(gm1a)
There are two obvious things missing: random slopes are not yet supported (but that is something I am looking to add), and non-linear models are not directly supported (but a linear model could be generated and then edited by the user). Note that it's not possible to include arbitrary R functions in the JAGS code, so these will have to be re-written in JAGS (or in C++ as a JAGS module).
To repeat my earlier comment, the motivation is to help the user write their own code rather than doing all the work without the user having to think about or understand anything. To clarify: I see the benefit of helping a reasonably knowledgable user to quickly generate code which otherwise might be tedious to write (particularly if they struggle with BUGS syntax despite understanding the theory of MCMC), but I am uncomfortable with the idea of truly novice users using the automatically generated code without understanding what is happening (i.e. as a totally black box). But perhaps I am being overly cautious ... I would be very interested to hear others' opinions (privately by email to the maintainer of the runjags package if preferred) as I have not yet decided how far to develop these functions, and would certainly take on board useful comments and suggestions. | Does JAGS have an R front end like brms for Stan? [closed]
Based on your last comment ("I'm hoping for a runtime translation of R-formula syntax into JAGS model specification"), I think runjags::template.jags does what you want (at least partly). It automati |
50,471 | Hausman Test interpretation is based on the p-value? - R output | Yes. See the following taken from a Princeton slide:
To decide between fixed or random effects you can run a Hausman test
where the null hypothesis is that the preferred model is random
effects vs. the alternative the fixed effects (see Green, 2008,
chapter 9). It basically tests whether the unique errors (ui) are
correlated with the regressors, the null hypothesis is they are not.
Run a fixed effects model and save the estimates, then run a random
model and save the estimates, then perform the test. If the p-value is
significant (for example <0.05) then use fixed effects, if not use
random effects.
see: https://dss.princeton.edu/training/Panel101R.pdf | Hausman Test interpretation is based on the p-value? - R output | Yes. See the following taken from a Princeton slide:
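In R this comparison is usually run with the plm package; a sketch with placeholder names (paneldata, y, x1, x2, id and year are hypothetical and should be replaced by your own panel data and indices):
library(plm)
fixed  <- plm(y ~ x1 + x2, data = paneldata, index = c("id", "year"), model = "within")
random <- plm(y ~ x1 + x2, data = paneldata, index = c("id", "year"), model = "random")
phtest(fixed, random)   # small p-value -> prefer the fixed effects model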
To decide between fixed or random effects you can run a Hausman test
where the null hypothesis is that the preferred model is random
effects v | Hausman Test interpretation is based on the p-value? - R output
Yes. See the following taken from a Princeton slide:
To decide between fixed or random effects you can run a Hausman test
where the null hypothesis is that the preferred model is random
effects vs. the alternative the fixed effects (see Green, 2008,
chapter 9). It basically tests whether the unique errors (ui) are
correlated with the regressors, the null hypothesis is they are not.
Run a fixed effects model and save the estimates, then run a random
model and save the estimates, then perform the test. If the p-value is
significant (for example <0.05) then use fixed effects, if not use
random effects.
see: https://dss.princeton.edu/training/Panel101R.pdf | Hausman Test interpretation is based on the p-value? - R output
Yes. See the following taken from a Princeton slide:
To decide between fixed or random effects you can run a Hausman test
where the null hypothesis is that the preferred model is random
effects v |
50,472 | Why neural and convolutional neural network detect edges first? | The convolution operation has a close relationship to the frequency domain. See the Convolution Theorem for details.
What makes an edge an edge? Sudden changes / high-frequency changes in the value. Intuitively, this is why convolution can detect edges.
For example, think about the following 1D toy data.
000000000000111111111111
For the homogeneous part the frequency is 0. | Why neural and convolutional neural network detect edges first? | Convolution operation have close relationship to the frequency domain. See Convolution Theorem for details.
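A quick R sketch of that intuition: convolving the toy signal with a simple difference kernel responds only at the jump (the first entry is NA because it has no left neighbour):
x <- c(rep(0, 12), rep(1, 12))
k <- c(1, -1)                                            # a minimal "edge detector"
stats::filter(x, k, method = "convolution", sides = 1)   # zero everywhere except at the step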
What makes edge an edge? Sudden changes / high frequency changes on the value. Intuitively t | Why neural and convolutional neural network detect edges first?
Convolution operation have close relationship to the frequency domain. See Convolution Theorem for details.
What makes edge an edge? Sudden changes / high frequency changes on the value. Intuitively this is why convolution can detect edges.
For example, Think about the following 1D toy data.
000000000000111111111111
For the homogeneous part the frequency is 0. | Why neural and convolutional neural network detect edges first?
Convolution operation have close relationship to the frequency domain. See Convolution Theorem for details.
What makes edge an edge? Sudden changes / high frequency changes on the value. Intuitively t |
50,473 | Why neural and convolutional neural network detect edges first? | One thing that makes neural networks so interesting is that every subset of layers can be thought of as a neural network itself. So, after the first layer transforms its input, the second-through-last layers can be thought of as a network of their own.
So, in an optimized network, the goal of the first layer is to transform the input so that the later layers can classify it as well as possible. This means turning the input into something that's easier to work with. Hence, edges. Edges are complex pixel patterns that can be linearly combined in later layers to form even more complex features, that are even easier for the rest of the network (which itself is a sequence of networks) to classify with. | Why neural and convolutional neural network detect edges first? | One thing that makes neural networks so interesting is that every subset of layers can be thought of as a neural network itself. So, after the first layer transforms its input, the second-through-last | Why neural and convolutional neural network detect edges first?
One thing that makes neural networks so interesting is that every subset of layers can be thought of as a neural network itself. So, after the first layer transforms its input, the second-through-last layers can be thought of as a network of their own.
So, in an optimized network, the goal of the first layer is to transform the input so that the later layers can classify it as well as possible. This means turning the input into something that's easier to work with. Hence, edges. Edges are complex pixel patterns that can be linearly combined in later layers to form even more complex features, that are even easier for the rest of the network (which itself is a sequence of networks) to classify with. | Why neural and convolutional neural network detect edges first?
One thing that makes neural networks so interesting is that every subset of layers can be thought of as a neural network itself. So, after the first layer transforms its input, the second-through-last |
50,474 | Guidelines to improve a convolutional neural network? | Would anybody be aware of how to tackle the problem of finding good parameters (including architecture) for a CNN apart from trying?
No. Hence, they are optimized by 'graduate student descent' :)
A standard-ish architecture for MNIST is LeNet-5, and closely related variants, e.g. Karpathy's convnetjs implementation, which uses the following layers:
type:'input', out_sx:24, out_sy:24, out_depth:1
type:'conv', sx:5, filters:8, stride:1, pad:2, activation:'relu'
type:'pool', sx:2, stride:2
type:'conv', sx:5, filters:16, stride:1, pad:2, activation:'relu'
type:'pool', sx:3, stride:3
type:'softmax', num_classes:10
(and I think it augments the data by cutting random 24x24 patches out of the original 28x28 images, which is a key part of obtaining higher accuracies on this (tiny) dataset).
There are meta-learning techniques, which is an open research area. For example "Neural architecture search with reinforcement learning", by Barret Zoph and Quoc Le, 2016, uses reinforcement learning to try different architectures, find out what works well. It does this in an automated way, without human intervention. Of course this needs a ton of GPU power... | Guidelines to improve a convolutional neural network? | Would anybody be aware of how to tackle the problem of finding good parameters (including architecture) for a CNN apart from trying?
No. Hence, they are optimized by 'graduate student descent' :)
A s | Guidelines to improve a convolutional neural network?
Would anybody be aware of how to tackle the problem of finding good parameters (including architecture) for a CNN apart from trying?
No. Hence, they are optimized by 'graduate student descent' :)
A standard-ish architecture for mnist is lenet-5, and closely related variants, eg Karpathy's convnetjs implementation, which uses the following layers:
type:'input', out_sx:24, out_sy:24, out_depth:1
type:'conv', sx:5, filters:8, stride:1, pad:2, activation:'relu'
type:'pool', sx:2, stride:2
type:'conv', sx:5, filters:16, stride:1, pad:2, activation:'relu'
type:'pool', sx:3, stride:3
type:'softmax', num_classes:10
(and I think it augments the data by cutting random 24x24 patches out of the original 28x28 images, which is a key part of obtaining higher accuracies on this (tiny) dataset).
There are meta-learning techniques, which is an open research area. For example "Neural architecture search with reinforcement learning", by Barret Zoph and Quoc Le, 2016, uses reinforcement learning to try different architectures, find out what works well. It does this in an automated way, without human intervention. Of course this needs a ton of GPU power... | Guidelines to improve a convolutional neural network?
Would anybody be aware of how to tackle the problem of finding good parameters (including architecture) for a CNN apart from trying?
No. Hence, they are optimized by 'graduate student descent' :)
A s |
50,475 | Guidelines to improve a convolutional neural network? | You might decorrelate your data first using PCA, and then clamp the objects to your input nodes (i.e., input the PCs from PCA into the CNN). Did you select any features, or use everything? (don't know if the features were pre-selected and users are expected to use everything?). | Guidelines to improve a convolutional neural network? | You might decorrelate your data first using PCA, and then clamp the objects to your input nodes (i.e., input the PCs from PCA into the CNN). Did you select any features, or use everything? (don't kn | Guidelines to improve a convolutional neural network?
You might decorrelate your data first using PCA, and then clamp the objects to your input nodes (i.e., input the PCs from PCA into the CNN). Did you select any features, or use everything? (don't know if the features were pre-selected and users are expected to use everything?). | Guidelines to improve a convolutional neural network?
You might decorrelate your data first using PCA, and then clamp the objects to your input nodes (i.e., input the PCs from PCA into the CNN). Did you select any features, or use everything? (don't kn |
50,476 | Guidelines to improve a convolutional neural network? | I would suggest you try another data set. MNIST data has been "over tuned" on test data set!!
You can try to run a test for "human accuracy" on this data, and you may not get over 95% accuracy. BTW, I tried, and there are many digits that are not quite recognizable. Here is an example; it could be a 3 or a 5.
In sum, today's NN tools are really good: with only a few changes you can get very good results, especially on classic data sets. My suggestion would be not to push the limits of the model on MNIST data but to try other data sets. | Guidelines to improve a convolutional neural network? | I would suggest you try another data set. MNIST data has been "over tuned" on test data set!!
You can try to run a test for "human accuracy" on this data, and you may not get over 95% accuracy. BTW, I | Guidelines to improve a convolutional neural network?
I would suggest you try another data set. MNIST data has been "over tuned" on test data set!!
You can try to run a test for "human accuracy" on this data, and you may not get over 95% accuracy. BTW, I tried, there are many digits are not quite recognizable. Here is an example, it can be 3 or 5.
In sum, today's NN tools are really good, that only needs few changes you may get very good results especially on classic data set. My suggestions to you would not be push the limit of the model on MINIST data but try more other data sets. | Guidelines to improve a convolutional neural network?
I would suggest you try another data set. MNIST data has been "over tuned" on test data set!!
You can try to run a test for "human accuracy" on this data, and you may not get over 95% accuracy. BTW, I |
50,477 | How do I test difference in Percentages? | Use a hypergeometric test to see whether each region's proportion is significantly greater or significantly less than the national proportion.
The hypergeometric test treats the population as a bag with 21568 stones, 10820 of which are white. Each region is then treated as a random sample from that bag. For example the North East is like grabbing 1919 of those stones and getting 1032 white ones. You can calculate how unlikely that sample is in R:
> phyper(1032-1, 10820, 21568-10820, 1919, lower.tail = F)
[1] 0.0004970996
The 0.0004970996 says there's a very small chance that randomly drawing from the national votes 1919 times will give you 1032 yeses. I.e. this tests whether the proportion of North East yeses is significantly different from the population average, and it is (p = 0.0004).
However, you're also interested in whether the regional proportion is significantly less than the national average! To calculate that you can use the lower.tail=T option:
> phyper(1031, 10820, 10748, 1919, lower.tail = T)
[1] 0.9995029
So the North East's proportion looks significantly greater than the national average (p = 0.0004) and not significantly less (p = 0.9995).
To answer your question you could repeat this process of testing both tails for each region. Since this means conducting 26 hypothesis tests, you could expect that one or two might be significant just by random chance if you're using a threshold like p<0.05. For that reason I'd finish the analysis off with a multiple comparison correction. | How do I test difference in Percentages? | Use a hypergeometric test to see whether each region's proportion is significantly greater or significantly less than the national proportion.
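The correction itself is one line in R once the per-region p-values are collected (a sketch; pvals stands for the hypothetical vector of 26 test results):
p.adjust(pvals, method = "holm")   # or method = "BH" for false discovery rate control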
The hypergeometric test treats the population as a bag w | How do I test difference in Percentages?
Use a hypergeometric test to see whether each region's proportion is significantly greater or significantly less than the national proportion.
The hypergeometric test treats the population as a bag with 21568 stones, 10820 of which are white. Each region is then treated as a random sample from that bag. For example the North East is like grabbing 1919 of those stones and getting 1032 white ones. You can calculate how unlikely that sample is in R:
> phyper(1032-1, 10820, 21568-10820, 1919, lower.tail = F)
[1] 0.0004970996
The 0.0004970996 says there's there's a very small chance that randomly drawing from the national votes 1919 times will give you 1032 yeses. I.e. this tests whether the proportion of North East yeses is significantly different than the population average, and it is (p = 0.0004).
However, you're also interested in whether the regional proportion is significantly less than the national average! To calculate that you can use the lower.tail=T option:
> phyper(1031, 10820, 10748, 1919, lower.tail = T)
[1] 0.9995029
So the North East's proportion looks significantly greater than the national average (p = 0.0004) and not significantly less (p = 0.9995).
To answer your question you could repeat this process of testing both tails for each region. Since this means conducting 26 hypothesis tests, you could expect that one or two might be significant just by random chance if you're using a threshold like p<0.05. For that reason I'd finish the analysis off with a multiple comparison correction. | How do I test difference in Percentages?
Use a hypergeometric test to see whether each region's proportion is significantly greater or significantly less than the national proportion.
The hypergeometric test treats the population as a bag w |
50,478 | PCA vs FA vs ICA for dimensionality reduction in questionaire data | I was curious about your question, because I had never even heard of Independent Component Analysis (ICA), but I use factor analysis all the time. So looking up ICA, I found that one of the key assumptions was that "the values in each source signal have non-Gaussian distributions" (Wikipedia). This doesn't seem like a very helpful assumption if we're trying to discern or confirm a latent construct -- like a personality trait, if we're assuming that our item-responses are being drawn from a normal distribution, or that our latent construct is normally distributed. As such, ICA seems to be used for things like studying radio signals, and not personality traits. | PCA vs FA vs ICA for dimensionality reduction in questionaire data | I was curious about your question, because I had never even heard of Independent Component Analysis (ICA), but I use factor analysis all the time. So looking up ICA, I found that one of the key assump | PCA vs FA vs ICA for dimensionality reduction in questionaire data
I was curious about your question, because I had never even heard of Independent Component Analysis (ICA), but I use factor analysis all the time. So looking up ICA, I found that one of the key assumptions was that "the values in each source signal have non-Gaussian distributions" (Wikipedia). This doesn't seem like a very helpful assumption if we're trying to discern or confirm a latent construct -- like a personality trait, if we're assuming that our item-responses are being drawn from a normal distribution, or that our latent construct is normally distributed. As such, ICA seems to be used for things like studying radio signals, and not personality traits. | PCA vs FA vs ICA for dimensionality reduction in questionaire data
I was curious about your question, because I had never even heard of Independent Component Analysis (ICA), but I use factor analysis all the time. So looking up ICA, I found that one of the key assump |
50,479 | Critical region of likelihood ratio test | $\chi^2_1(0.95) = 3.841$
$C = \{\mathbf{Y}: (Y_1+3Y_2) \log{\frac{p_0}{\hat{p}}} + (Y_1+3Y_0) \log(\frac{1-p_0}{1-\hat{p}}) \geq \frac{-\chi^2_1(0.95)}{2} \}$
When $Y_0 =Y_2$, $\hat{p} = 1/2$
Thus, $\log(4p_0(1-p_0)) \geq \frac{-3.841}{2(Y_1+3Y_2)} $ $\implies p_0(1-p_0) \geq 0.25 \exp{(\frac{-1.92}{Y_1+3Y_2})}$ | Critical region of likelihood ratio test | $\chi^2_1(0.95) = 3.841$
$C = \{\mathbf{Y}: (Y_1+3Y_2) \log{\frac{p_0}{\hat{p}}} + (Y_1+3Y_0) \log(\frac{1-p_0}{1-\hat{p}}) \geq \frac{-\chi^2_1(0.95)}{2} \}$
When $Y_0 =Y_2$, $\hat{p} = 1/2$
Thus, $\ | Critical region of likelihood ratio test
$\chi^2_1(0.95) = 3.841$
$C = \{\mathbf{Y}: (Y_1+3Y_2) \log{\frac{p_0}{\hat{p}}} + (Y_1+3Y_0) \log(\frac{1-p_0}{1-\hat{p}}) \geq \frac{-\chi^2_1(0.95)}{2} \}$
When $Y_0 =Y_2$, $\hat{p} = 1/2$
Thus, $\log(4p_0(1-p_0)) \geq \frac{-3.841}{2(Y_1+3Y_2)} $ $\implies p_0(1-p_0) \geq 0.25 \exp{(\frac{-1.92}{Y_1+3Y_2})}$ | Critical region of likelihood ratio test
$\chi^2_1(0.95) = 3.841$
$C = \{\mathbf{Y}: (Y_1+3Y_2) \log{\frac{p_0}{\hat{p}}} + (Y_1+3Y_0) \log(\frac{1-p_0}{1-\hat{p}}) \geq \frac{-\chi^2_1(0.95)}{2} \}$
When $Y_0 =Y_2$, $\hat{p} = 1/2$
Thus, $\ |
50,480 | How to better plot and compare overlapping histograms? | The usual alternatives to display "overlapping" histograms are to:
place the bars side by side (but I don't think it works well visually in most situations):
connect the heights of the bars with a line (and drop the bars themselves - there exist alternatives where the outline of the histogram is plotted, like a skyline):
I am adding R code used to make the figures:
library(dplyr)    # provides bind_rows
library(ggplot2)
dataf <- bind_rows(lapply(1:10,
function(x) {
data.frame(grp=x,
value=rnorm(100,
mean=runif(1)))
}))
ggplot(dataf) +
geom_histogram(aes(x=value, fill=factor(grp)),
position="dodge", binwidth=.5)
ggplot(dataf) +
geom_freqpoly(aes(x=value, color=factor(grp)), binwidth=.5) | How to better plot and compare overlapping histograms? | The usua alternatives to display "overlapping" histograms are to:
place the bar side by side (but I don't think that it is working well visually in most of the situations):
connect the heights of | How to better plot and compare overlapping histograms?
The usua alternatives to display "overlapping" histograms are to:
place the bar side by side (but I don't think that it is working well visually in most of the situations):
connect the heights of the bars with a line (and drop the bar itself - there exists alternatives where the outline of the histogram is plotted, like a skyline):
I am adding R code used to make the figures:
dataf <- bind_rows(lapply(1:10,
function(x) {
data.frame(grp=x,
value=rnorm(100,
mean=runif(1)))
}))
ggplot(dataf) +
geom_histogram(aes(x=value, fill=factor(grp)),
position="dodge", binwidth=.5)
ggplot(dataf) +
geom_freqpoly(aes(x=value, color=factor(grp)), binwidth=.5) | How to better plot and compare overlapping histograms?
The usua alternatives to display "overlapping" histograms are to:
place the bar side by side (but I don't think that it is working well visually in most of the situations):
connect the heights of |
50,481 | How to better plot and compare overlapping histograms? | Plotting histograms together can be fine, but it breaks down when you have more than two histograms, or the more they overlap, both of which apply in your case. I would suggest you start by making a plot matrix (so long as you don't have so many groups the plots become unusable).
Likewise, plots with too many, and too different, objects can become difficult to interpret. You want to compare histograms, and you want to compare kernel density plots. Note that a plot matrix has a main diagonal for each group, and then the upper and lower triangles are symmetrical. For a given plot in the upper triangle that compares two groups, there is a corresponding plot in the lower triangle that compares the same two groups. Thus, I would suggest you compare histograms in the upper triangle plots, and kernel density plots in the lower triangle plots.
Because it might still be difficult to compare two overlapping histograms in the subplots, I would suggest you make back to back histograms instead of overlapping histograms.
Putting these suggestions together, you could get something like this:
This was coded using R. The double histogram code was adapted from here. I suspect the code won't be interpretable to people who aren't already very familiar with R, but for those who do want to see it, it is displayed below:
data(mtcars)
# quarter-mile times (qsec) grouped by number of cylinders (cyl)
d = mtcars[,c("qsec","cyl")]
ud = unstack(d)   # split qsec into one vector per cyl level
# pad the shorter groups with NA so all three fit in one data frame
ud = data.frame(four = c(ud[[1]], rep(NA,3)),
                six = c(ud[[2]], rep(NA,7)),
                eight = ud[[3]] )
# upper triangle panels: back-to-back histograms of the two groups being compared
upper = function(x, y){
usr = par("usr"); on.exit(par(usr)); par(usr = c(0, 1, 0, 1), new=TRUE)
hx = hist(x, plot=FALSE)
hy = hist(y, plot=FALSE)
lim = ifelse(max(hy$counts)>max(hx$counts), max(hy$counts), max(hx$counts))
hy$counts = - hy$counts
plot(hy, ylim=c(-lim, lim), col="red", xlim=c(14,23), axes=FALSE, main="")
lines(hx, col="blue")
abline(h=0)
}
# diagonal panels: histogram plus kernel density estimate for a single group
diag.hist = function(x, ...){
usr = par("usr"); on.exit(par(usr)); par(usr=c(usr[1:2], 0, 1.5), new=TRUE)
hist(x, freq=FALSE, xlim=c(14,23), ylim=c(0,.8), main="", axes=FALSE)
lines(density(na.omit(x)))
}
# lower triangle panels: overlaid kernel density estimates of the two groups
lower = function(x, y){
usr = par("usr"); on.exit(par(usr)); par(usr = c(0, 1, 0, 1), new=TRUE)
plot( density(na.omit(x)), xlim=c(14,23),ylim=c(0,.5),main="",axes=FALSE, col="blue")
lines(density(na.omit(y)), col="red")
}
windows()   # opens a new plot window on Windows; use dev.new() on other platforms
pairs(ud, upper.panel=upper, diag.panel=diag.hist, lower.panel=lower) | How to better plot and compare overlapping histograms? | Plotting histograms together can be fine, but it breaks down when you have more than two histograms, or the more they overlap, both of which apply in your case. I would suggest you start by making a | How to better plot and compare overlapping histograms?
Plotting histograms together can be fine, but it breaks down when you have more than two histograms, or the more they overlap, both of which apply in your case. I would suggest you start by making a plot matrix (so long as you don't have so many groups the plots become unusable).
Likewise, plots with too many, and too different, objects can become difficult to interpret. You want to compare histograms, and you want to compare kernel density plots. Note that a plot matrix has a main diagonal for each group, and then the upper and lower triangles are symmetrical. For a given plot in the upper triangle that compares two groups, there is a corresponding plot in the lower triangle that compares the same two groups. Thus, I would suggest you compare histograms in the upper triangle plots, and kernel density plots in the lower triangle plots.
Because it might still be difficult to compare two overlapping histograms in the subplots, I would suggest you make back to back histograms instead of overlapping histograms.
Putting these suggestions together, you could get something like this:
This was coded using R. The double histogram code was adapted from here. I suspect the code won't be interpretable to people who aren't already very familiar with R, but for those who do want to see it, it is displayed below:
data(mtcars)
d = mtcars[,c("qsec","cyl")]
ud = unstack(d)
ud = data.frame(four = c(ud[[1]], rep(NA,3)),
six = c(ud[[2]], rep(NA,7)),
eight = ud[[3]] )
upper = function(x, y){
usr = par("usr"); on.exit(par(usr)); par(usr = c(0, 1, 0, 1), new=TRUE)
hx = hist(x, plot=FALSE)
hy = hist(y, plot=FALSE)
lim = ifelse(max(hy$counts)>max(hx$counts), max(hy$counts), max(hx$counts))
hy$counts = - hy$counts
plot(hy, ylim=c(-lim, lim), col="red", xlim=c(14,23), axes=FALSE, main="")
lines(hx, col="blue")
abline(h=0)
}
diag.hist = function(x, ...){
usr = par("usr"); on.exit(par(usr)); par(usr=c(usr[1:2], 0, 1.5), new=TRUE)
hist(x, freq=FALSE, xlim=c(14,23), ylim=c(0,.8), main="", axes=FALSE)
lines(density(na.omit(x)))
}
lower = function(x, y){
usr = par("usr"); on.exit(par(usr)); par(usr = c(0, 1, 0, 1), new=TRUE)
plot( density(na.omit(x)), xlim=c(14,23),ylim=c(0,.5),main="",axes=FALSE, col="blue")
lines(density(na.omit(y)), col="red")
}
windows()
pairs(ud, upper.panel=upper, diag.panel=diag.hist, lower.panel=lower) | How to better plot and compare overlapping histograms?
Plotting histograms together can be fine, but it breaks down when you have more than two histograms, or the more they overlap, both of which apply in your case. I would suggest you start by making a |
50,482 | Reproducible benchmarks for the performance of statistical prediction methods? | Tentative answer here but, well, there's a paper [1] comparing the performance of 22 classification algorithms predicting software failures in 10 public domain NASA Metrics Data repository datasets.
The data used in this study stems from the NASA MDP repository [10]. Ten software defect prediction data sets are analyzed, including the eight sets used in [44] as well as two additional data sets (JM1 and KC1, see also Table 1). Each data set is comprised of several software modules, together with their number of faults and characteristic code attributes.
The benchmarking experiment aims at contrasting the competitive performance of several classification algorithms. To that end, an overall number of 22 classifiers is selected, which may be grouped into the categories of statistical approaches, nearest-neighbor methods, neural networks, support vector machines, tree-based methods, and ensembles. The selection aims at achieving a balance between established techniques, such as Naive Bayes, decision trees, or logistic regression, and novel approaches that have not yet found widespread usage in defect prediction (e.g., different variants of support vector machines, logistic model trees, or random forests). The classifiers are sketched in Table 2, together with a brief description of their underlying paradigms.
I also found another paper [2] comparing some algorithms on two ecological modelling datasets. It has an interesting discussion of the ease of use of such algorithms; the task is to predict the probability of finding specimens given geographic features (check Fig. 3 and Fig. 4).
Logistic Multiple Regression, Principal Component Regression and Classification and Regression Tree Analysis (CART), commonly used in ecological modelling using GIS, are compared with a relatively new statistical technique, Multivariate Adaptive Regression Splines (MARS), to test their accuracy, reliability, implementation within GIS and ease of use. All were applied to the same two data sets, covering a wide range of conditions common in predictive modelling, namely geographical range, scale, nature of the predictors and sampling method.
The Grimmia data set (1285 cases; 419 present, 866 absent) represents the distribution of species of the moss genus Grimmia in Latin America, from Mexico to Cape Hornos (Fig. 3). Grimmia was recently revised for Latin America and its taxonomy is well known worldwide (Muñoz 1999; Muñoz & Pando 2000).
The Fagus data set (103 181 cases; ca. 50% each of presences and absences) was selected to represent high spatial resolution at a regional scale. The dependent variable was the presence/absence of Fagus sylvatica oligotrophic forest in the La Liébana region (Cantabria Province, NW Spain).
[1] Lessmann, S., Baesens, B., Mues, C., & Pietsch, S. (2008). Benchmarking classification models for software defect prediction: A proposed framework and novel findings. Software Engineering, IEEE Transactions on, 34(4), 485-496. [PDF at IEEE]
[2] Muñoz, J., & Felicísimo, Á. M. (2004). Comparison of statistical methods commonly used in predictive modelling. Journal of Vegetation Science, 15(2), 285-292.[PDF at Wiley] | Reproducible benchmarks for the performance of statistical prediction methods? | Tentative answer here but, well, there's a paper [1] comparing the performance of 22 classification algorithms predicting software failures in 10 public domain NASA Metrics Data repository datasets.
T | Reproducible benchmarks for the performance of statistical prediction methods?
Tentative answer here but, well, there's a paper [1] comparing the performance of 22 classification algorithms predicting software failures in 10 public domain NASA Metrics Data repository datasets.
The data used in this study stems from the NASA MDP repository [10]. Ten software defect prediction data sets are analyzed, including the eight sets used in [44] as well as two additional data sets (JM1 and KC1, see also Table 1). Each data set is comprised of several software modules, together with their number of faults and characteristic code attributes.
The benchmarking experiment aims at contrasting the competitive performance of several classification algorithms. To that end, an overall number of 22 classifiers is selected, which may be grouped into the categories of statistical approaches, nearest-neighbor methods, neural networks, support vector machines, tree-based methods, and ensembles. The selection aims at achieving a balance between established techniques, such as Naive Bayes, decision trees, or logistic regression, and novel approaches that have not yet found widespread usage in defect prediction (e.g., different variants of support vector machines, logistic model trees, or random forests). The classifiers are sketched in Table 2, together with a brief description of their underlying paradigms.
I also found another paper [2] comparing some algorithms in two ecological modelling datasets. It has interesting discussion of the ease of use of such algorithms and tries to predict the probability of finding specimen given geographic features (check Fig. 3 and Fig. 4).
Logistic Multiple Regression, Principal Component Regression and Classification and Regression Tree Analysis (CART), commonly used in ecological modelling using GIS, are compared with a relatively new statistical technique, Multivariate Adaptive Regression Splines (MARS), to test their accuracy, reliability, implementation within GIS and ease of use. All were applied to the same two data sets, covering a wide range of conditions common in predictive modelling, namely geographical range, scale, nature of the predictors and sampling method.
The Grimmia data set (1285 cases; 419 present, 866 absent) represents the distribution of species of the moss genus Grimmia in Latin America, from Mexico to Cape Hornos (Fig. 3). Grimmia was recently revised for Latin America and its taxonomy is well known worldwide (Muñoz 1999; Muñoz & Pando 2000).
The Fagus data set (103 181 cases; ca. 50% each of presences and absences) was selected to represent high spatial resolution at a regional scale. The dependent variable was the presence/absence of Fagus sylvatica oligotrophic forest in the La Liébana region (Cantabria Province, NW Spain).
[1] Lessmann, S., Baesens, B., Mues, C., & Pietsch, S. (2008). Benchmarking classification models for software defect prediction: A proposed framework and novel findings. Software Engineering, IEEE Transactions on, 34(4), 485-496. [PDF at IEEE]
[2] Muñoz, J., & Felicísimo, Á. M. (2004). Comparison of statistical methods commonly used in predictive modelling. Journal of Vegetation Science, 15(2), 285-292.[PDF at Wiley] | Reproducible benchmarks for the performance of statistical prediction methods?
Tentative answer here but, well, there's a paper [1] comparing the performance of 22 classification algorithms predicting software failures in 10 public domain NASA Metrics Data repository datasets.
T |
50,483 | Test goodness of fit for geometric distribution | The counts of "events" in disjoint intervals have a Poisson distribution. Let $N(l)$ be the number of "events" occurring in $[0,l]$, $l$ fixed.
We know,
$$P\{N(l) = k\} = \frac{e^{-\lambda l}(\lambda l)^k}{k!}, k=0,1,2,...$$
take $0<l_1<l_2<...<l_n<\infty$; then the increments
$$N(l_1), N(l_2)-N(l_1), ..., N(l_n)-N(l_{n-1})$$
are independent, and each increment
$$N(l_i) - N(l_{i-1})$$
has a poisson distribution with parameter $\lambda(l_i - l_{i-1})$ | Test goodness of fit for geometric distribution | The differences between the "events" have a poisson distribution. Let $N(l)$ be the number of "events" to occur in $[0,l]$, $l$ fixed.
We know,
$$P\{N(l) = k\} = \frac{e^{-\lambda l}(\lambda l)^k}{k | Test goodness of fit for geometric distribution
The differences between the "events" have a poisson distribution. Let $N(l)$ be the number of "events" to occur in $[0,l]$, $l$ fixed.
We know,
$$P\{N(l) = k\} = \frac{e^{-\lambda l}(\lambda l)^k}{k!}, k=0,1,2,...$$
take $0<l_1<l_2<...<l_n<\infty$ where any given difference
$$N(l_1), N(l_2)-N(l_1), ..., N(l_n)-N(l_{n-1})$$
and
$$N(l_i) - N(l_{i-1})$$
has a poisson distribution with parameter $\lambda(l_i - l_{i-1})$ | Test goodness of fit for geometric distribution
The differences between the "events" have a poisson distribution. Let $N(l)$ be the number of "events" to occur in $[0,l]$, $l$ fixed.
We know,
$$P\{N(l) = k\} = \frac{e^{-\lambda l}(\lambda l)^k}{k |
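A small simulation sketch of the statement above (an added illustration, not part of the original answer): counts of events in a disjoint interval should be Poisson with mean $\lambda(l_2-l_1)$.
set.seed(1)
lambda <- 2
l1 <- 1; l2 <- 3                                 # interval (l1, l2], length 2
increments <- replicate(10000, {
  arrivals <- cumsum(rexp(50, rate = lambda))    # event times of a Poisson process
  sum(arrivals > l1 & arrivals <= l2)            # N(l2) - N(l1)
})
mean(increments)   # close to lambda*(l2 - l1) = 4
var(increments)    # also close to 4, as expected for a Poisson count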
50,484 | Test goodness of fit for geometric distribution | I have a very similar problem to yours actually, also non-overlapping variable length intervals.
I am working on this right now, so I can't give a full answer; also, I am not a statistician, so I won't approach this from the math side.
But for your 2nd question, one possible approach is the following:
you compute the interarrival times, i.e. the number of failures between successes.
In my case I have a strong belief that these interarrival times follow a geometric distribution. Hence what you can do is a probability plot; in Python this can be done with scipy.
I do the probplot with these interarrival times and the distribution I expect them to follow.
If the points lie on a line, this is a good argument that they indeed follow this distribution.
Here is an artificial example
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

# simulate 50 geometric "interarrival" counts with success probability 0.1
d1 = np.random.geometric(p=0.1, size=50)

# Q-Q plot of the sample against an exponential reference distribution
stats.probplot(d1, dist="expon", plot=plt)
plt.show()
Note: I'm using the exponential distribution (which is the continuous counterpart of the geometric, as far as I understand) because this plot does not seem to work with discrete distributions.
Anyway this produces:
Unfortunately I cant guarantee that this answer is completely correct since again this is also a new area to me. If I learn more I may update the answer | Test goodness of fit for geometric distribution | I have a very similar problem to yours actually, also non-overlapping variable length intervals.
I am right now working on this so I cant give a full answer, also I am not a statistician so I wont eve | Test goodness of fit for geometric distribution
I have a very similar problem to yours actually, also non-overlapping variable length intervals.
I am right now working on this so I cant give a full answer, also I am not a statistician so I wont even approach this from a math side.
But for your 2nd question, one possible approach is the following:
you compute the interarrival times, so the number of failures between success.
In my case I have strong belief that these interarrival times follow a geometric distribution. Hence what you can do is a probability plot, in python this can be done with scipy.
I do the probplot with these interarrival times and the distribution i expect them to be
If the points lie on a line this is a good argument that they indeed follow this distribution.
Here is an artificial example
import numpy as np
import matplotlib.pyplot as plt
import statsmodels.api as sm
import scipy
from scipy import stats
d1 = np.random.geometric(p=0.1, size=50)
scipy.stats.probplot(d1, dist="expon", plot=plt)
plt.show()
Note: Im using the exponential distribution (which is the continuous counterpart of geometric as far as i understand) because this plot does not seem to work with discrete distributions.
Anyway this produces:
Unfortunately I cant guarantee that this answer is completely correct since again this is also a new area to me. If I learn more I may update the answer | Test goodness of fit for geometric distribution
I have a very similar problem to yours actually, also non-overlapping variable length intervals.
I am right now working on this so I cant give a full answer, also I am not a statistician so I wont eve |
50,485 | Sigmoid type functions for logistic regression | Not all distributions' CDFs are sigmoid. Consider the CDF of the uniform distribution (figure copied from Wikipedia):
Other distributions may be less obvious, but still problematic. Consider the CDF of a Gamma distribution with $k=.5,\ \theta=1$ (the lavender line at the far left; figure copied from Wikipedia):
At a minimum, you are going to need a distribution whose support is $(-\infty, \infty)$ before you could consider using its CDF as a link function.
There are many (I suppose infinite) possible link functions that can be used, though. You don't have to use the logit, and it isn't necessarily the best (although we need to be more precise about what "best" means). You may be interested in reading my answers here: Difference between logit and probit models, or here: Is the logit function always the best for regression modeling of binary data? | Sigmoid type functions for logistic regression | Not all distributions' CDFs are sigmoid. Consider the CDF of the uniform distribution (figure copied from Wikipedia):
Other distributions may be less obvious, but still problematic. Consider the | Sigmoid type functions for logistic regression
Not all distributions' CDFs are sigmoid. Consider the CDF of the uniform distribution (figure copied from Wikipedia):
Other distributions may be less obvious, but still problematic. Consider the CDF of a Gamma distribution with $k=.5,\ \theta=1$ (the lavender line at the far left; figure copied from Wikipedia):
At a minimum, you are going to need a distribution whose support is $(-\infty, \infty)$ before you could consider using its CDF as a link function.
There are many (I suppose infinite) possible link functions that can be used, though. You don't have to use the logit, and it isn't necessarily the best (although we need to be more precise about what "best" means). You may be interested in reading my answers here: Difference between logit and probit models, or here: Is the logit function always the best for regression modeling of binary data? | Sigmoid type functions for logistic regression
Not all distributions' CDFs are sigmoid. Consider the CDF of the uniform distribution (figure copied from Wikipedia):
Other distributions may be less obvious, but still problematic. Consider the |
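To make the last point of the answer above concrete, here is a short R sketch (an added example with simulated data): the CDF of any distribution supported on the whole real line can serve as the inverse link in a binomial GLM, and the logit is just one choice among several.
set.seed(1)
x <- rnorm(500)
y <- rbinom(500, 1, plogis(0.5 + 1.5 * x))    # data generated under a logit link

fit_logit  <- glm(y ~ x, family = binomial(link = "logit"))
fit_probit <- glm(y ~ x, family = binomial(link = "probit"))    # standard normal CDF
fit_cll    <- glm(y ~ x, family = binomial(link = "cloglog"))   # Gumbel-type CDF
sapply(list(logit = fit_logit, probit = fit_probit, cloglog = fit_cll), AIC)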
50,486 | Sigmoid type functions for logistic regression | I believe tanh(z) is a good replacement for the sigmoid function. Behaves almost exactly the same way. Also, the gradient descent update expression is the same | Sigmoid type functions for logistic regression | I believe tanh(z) is a good replacement for the sigmoid function. Behaves almost exactly the same way. Also, the gradient descent update expression is the same | Sigmoid type functions for logistic regression
I believe tanh(z) is a good replacement for the sigmoid function. Behaves almost exactly the same way. Also, the gradient descent update expression is the same | Sigmoid type functions for logistic regression
I believe tanh(z) is a good replacement for the sigmoid function. Behaves almost exactly the same way. Also, the gradient descent update expression is the same |
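A quick numeric check of how closely tanh tracks the logistic sigmoid (an added note): $\tanh(z) = 2\sigma(2z) - 1$, i.e. the same curve rescaled from $(0,1)$ to $(-1,1)$.
z <- seq(-5, 5, by = 0.1)
max(abs(tanh(z) - (2 * plogis(2 * z) - 1)))   # essentially 0 (floating point error)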
50,487 | Show that the signal $x_n = A \cos(\omega n)$ can be fully predicted by a system with two weights $w_1,w_2$ | This is basically equivalent to, given $\cos(a)$, $\cos(a-\omega)$, predict $\cos(a+\omega)$.
$$\cos(a-\omega)=\cos(a)\cos(\omega)+\sin(a)\sin(\omega)$$
$$\cos(a+\omega)=\cos(a)\cos(\omega)-\sin(a)\sin(\omega)$$
So let $w_1=2\cos(\omega)$, $w_2=-1$, we have
$$w_1\cos(a)+w_2\cos(a-\omega)=2\cos(\omega)\cos(a)-\cos(a)\cos(\omega)-\sin(a)\sin(\omega)\\ =\cos(a)\cos(\omega)-\sin(a)\sin(\omega)=\cos(a+\omega).$$
The rest should be easy. | Show that the signal $x_n = A \cos(\omega n)$ can be fully predicted by a system with two weights $w | This is basically equivalent to, given $\cos(a)$, $\cos(a-\omega)$, predict $\cos(a+\omega)$.
$$\cos(a-\omega)=\cos(a)\cos(\omega)+\sin(a)\sin(\omega)$$
$$\cos(a+\omega)=\cos(a)\cos(\omega)-\sin(a)\si | Show that the signal $x_n = A \cos(\omega n)$ can be fully predicted by a system with two weights $w_1,w_2$
This is basically equivalent to, given $\cos(a)$, $\cos(a-\omega)$, predict $\cos(a+\omega)$.
$$\cos(a-\omega)=\cos(a)\cos(\omega)+\sin(a)\sin(\omega)$$
$$\cos(a+\omega)=\cos(a)\cos(\omega)-\sin(a)\sin(\omega)$$
So let $w_1=2\cos(\omega)$, $w_2=-1$, we have
$$w_1\cos(a)+w_2\cos(a-\omega)=2\cos(\omega)\cos(a)-\cos(a)\cos(\omega)-\sin(a)\sin(\omega)\\ =\cos(a)\cos(\omega)-\sin(a)\sin(\omega)=\cos(a+\omega).$$
The rest should be easy. | Show that the signal $x_n = A \cos(\omega n)$ can be fully predicted by a system with two weights $w
This is basically equivalent to, given $\cos(a)$, $\cos(a-\omega)$, predict $\cos(a+\omega)$.
$$\cos(a-\omega)=\cos(a)\cos(\omega)+\sin(a)\sin(\omega)$$
$$\cos(a+\omega)=\cos(a)\cos(\omega)-\sin(a)\si |
50,488 | Can different classification methods be compared in the same manner as models during hyper-parameter tuning? | Yes, you can generalize the procedure for selecting from very different models. Think of optimizing the "training algorithm" hyperparameter. | Can different classification methods be compared in the same manner as models during hyper-parameter | Yes, you can generalize the procedure for selecting from very different models. Think of optimizing the "training algorithm" hyperparameter. | Can different classification methods be compared in the same manner as models during hyper-parameter tuning?
Yes, you can generalize the procedure for selecting from very different models. Think of optimizing the "training algorithm" hyperparameter. | Can different classification methods be compared in the same manner as models during hyper-parameter
Yes, you can generalize the procedure for selecting from very different models. Think of optimizing the "training algorithm" hyperparameter. |
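A minimal sketch of that idea (an added example with simulated data; lda and rpart are arbitrary stand-ins for your candidate methods): pick the "training algorithm" by cross-validated error exactly as you would pick any other hyperparameter, and in a full analysis nest this selection step inside an outer evaluation loop.
library(MASS)    # lda()
library(rpart)   # rpart()

set.seed(1)
d <- data.frame(x1 = rnorm(300), x2 = rnorm(300))
d$y <- factor(ifelse(d$x1 + d$x2 + rnorm(300) > 0, "a", "b"))

# each candidate maps (training data, test data) to predicted classes
candidates <- list(
  lda  = function(tr, te) predict(lda(y ~ ., data = tr), te)$class,
  tree = function(tr, te) predict(rpart(y ~ ., data = tr), te, type = "class")
)

k <- 5
folds <- sample(rep(1:k, length.out = nrow(d)))
cv_err <- sapply(candidates, function(f) {
  mean(sapply(1:k, function(i) {
    mean(f(d[folds != i, ], d[folds == i, ]) != d$y[folds == i])
  }))
})
cv_err   # pick the learner with the smallest cross-validated error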
50,489 | Selecting Link Function for Negative Binomial GLM | First, you need to understand better what link functions are. Then, maybe look at what others are doing in your field, for instance this paper.
Then, you have count data, and for such data the most natural link function is the log link function. See for example Goodness of fit and which model to choose linear regression or Poisson. So, unless you have very strong reasons otherwise, you should start out with the log link function. | Selecting Link Function for Negative Binomial GLM | First, you need to understand better what link functions are. Then, maybe look at what others are doing in your field, for instance this paper.
Then, you have count data, and for such data the most n | Selecting Link Function for Negative Binomial GLM
First, you need to understand better what link functions are. Then, maybe look at what others are doing in your field, for instance this paper.
Then, you have count data, and for such data the most natural link function is the log link function. See for example Goodness of fit and which model to choose linear regression or Poisson. So, unless you have very strong reasons otherwise, you should start out with the log link function. | Selecting Link Function for Negative Binomial GLM
First, you need to understand better what link functions are. Then, maybe look at what others are doing in your field, for instance this paper.
Then, you have count data, and for such data the most n |
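A minimal example of the default choice recommended above (an added sketch with simulated counts): MASS::glm.nb uses the log link unless you ask for something else.
library(MASS)    # provides glm.nb()

set.seed(1)
x  <- runif(200)
mu <- exp(0.5 + 1.2 * x)                # log link: log(mu) = 0.5 + 1.2*x
y  <- rnbinom(200, mu = mu, size = 2)   # negative binomial counts

fit <- glm.nb(y ~ x)                    # link = log by default
summary(fit)$coefficients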
50,490 | What's the "best" way to calculate sample size for A/B tests? | There is no best to use because each method relates to specific assumptions about the testing methodology. Evan Miller's calculator calculates sample size for a two-tailed test. In the past Optimizely's calculator was calculating samples for a one-tailed test. Currently, Optimizely uses a Bayesian states engine and their sample size calculator has no input for Power, based on the construction of their stats engine. You can back into the sample size for each variation in the VWO calculator by multiplying the daily traffic * the number of days the test will run / number of variations. The results seem to imply they are also calculating sample size generically, like Evan's calculator, for a two-tailed hypothesis. | What's the "best" way to calculate sample size for A/B tests? | There is no best to use because each method relates to specific assumptions about the testing methodology. Evan Miller's calculator calculates sample size for a two-tailed test. In the past Optimize | What's the "best" way to calculate sample size for A/B tests?
There is no best to use because each method relates to specific assumptions about the testing methodology. Evan Miller's calculator calculates sample size for a two-tailed test. In the past Optimizely's calculator was calculating samples for a one-tailed test. Currently, Optimizely uses a Bayesian states engine and their sample size calculator has no input for Power, based on the construction of their stats engine. You can back into the sample size for each variation in the VWO calculator by multiplying the daily traffic * the number of days the test will run / number of variations. The results seem to imply they are also calculating sample size generically, like Evan's calculator, for a two-tailed hypothesis. | What's the "best" way to calculate sample size for A/B tests?
There is no best to use because each method relates to specific assumptions about the testing methodology. Evan Miller's calculator calculates sample size for a two-tailed test. In the past Optimize |
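For a rough sense of the classical two-sided calculation discussed above (an added example with made-up conversion rates), base R's power.prop.test returns the per-variation sample size in the same spirit as Evan Miller's calculator:
power.prop.test(p1 = 0.10, p2 = 0.12,     # baseline and hoped-for conversion rates
                sig.level = 0.05, power = 0.80,
                alternative = "two.sided")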
50,491 | Comparing two datasets with same variable | If you aren't concerned about the accuracy degrading over time, or about the time of day leading to less accurate measurements, then I would advocate simplicity here through the use of a paired-sample t-test. You have completely missing data for the :15 and :45 intervals, so I'd throw those measurements away, as you have nothing to compare them against from the satellite measurements. Then, with the remaining data, take the differences between the satellite measurements and the ground measurements, $y_{diff}=y_{satellite}-y_{ground}$. Then do a simple t-test on $y_{diff}$ to determine if $H_0:y_{diff}=0$ can be rejected at your desired level of confidence.
If there are temporal concerns, I'd take a look at building time-series type model for analyses. | Comparing two datasets with same variable | If you don't have concern about the accuraccy degrading over time or don't have concerns that the time of day results in less accurate measurements then I would advocate simplicity here through the us | Comparing two datasets with same variable
If you don't have concern about the accuraccy degrading over time or don't have concerns that the time of day results in less accurate measurements then I would advocate simplicity here through the use of a paired-sample t-test. You have completely missing data for the :15 and :45 intervals, so I'd throw those measurements away as you have nothing to compare them against from Satellite measurements. Then, with the remaining data, take the differences between the satellite measurement and the ground measurements, $y_{diff}=y_{satellite}-y_{ground}$. Then do a simple t-test on $y_{diff}$ to determine if $H_0:y_{diff}=0$ can be rejected at your desired level of confidence.
If there are temporal concerns, I'd take a look at building time-series type model for analyses. | Comparing two datasets with same variable
If you don't have concern about the accuraccy degrading over time or don't have concerns that the time of day results in less accurate measurements then I would advocate simplicity here through the us |
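A minimal sketch of the suggested paired test (an added example; the numbers below are made-up stand-ins for the matched ground and satellite values):
set.seed(1)
ground    <- rnorm(48, mean = 20, sd = 3)             # matched ground measurements
satellite <- ground + rnorm(48, mean = 0.2, sd = 1)   # matched satellite measurements

y_diff <- satellite - ground
t.test(y_diff, mu = 0)                     # H0: the mean difference is zero
# equivalently: t.test(satellite, ground, paired = TRUE)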
50,492 | Aging data in the German tank problem | A reasonable approach may be to estimate the production rate by always using the maximum time period available. That is, create an estimate of $N$ every day, but use today's estimate along with the day 1 estimate and the number of days that have passed to get the estimated production rate.
For $i \ge 2,$ your day $i$ growth rate estimate $\hat{G}$ will be $$\hat{G}={{\hat{N_i}-\hat{N_1}} \over {i-1}}$$
The idea is similar to observing a stochastic process and estimating the rate as the number of observed events divided by the total time.
Because your daily estimates $\hat{N_i}$ are unbiased for the total number of tanks, the difference $\hat{N_i}-\hat{N_1}$ is unbiased for the number of tanks produced in that time period, and your overall production rate estimate $\hat{G}$ will also be unbiased.
It is true that you can get negative estimates in the early periods. So you will have to decide if you want to cap those at zero or use some other method to handle those cases.
Note that your estimator is unbiased only if the sampling is without replacement. If your sampling is with replacement, you will need to consider another estimator. | Aging data in the German tank problem | A reasonable approach may be to estimate the production rate by always using the maximum time period available. That is, create an estimate of $N$ every day, but use today's estimate along with the da | Aging data in the German tank problem
A reasonable approach may be to estimate the production rate by always using the maximum time period available. That is, create an estimate of $N$ every day, but use today's estimate along with the day 1 estimate and the number of days that have passed to get the estimated production rate.
For $i \ge 2,$ your day $i$ growth rate estimate $\hat{G}$ will be $$\hat{G}={{\hat{N_i}-\hat{N_1}} \over {i-1}}$$
The idea is similar to observing a stochastic process and estimating the rate as the number of observed events divided by the total time.
Because your daily estimates $\hat{N_i}$ are unbiased for the total number of tanks, the difference $\hat{N_i}-\hat{N_1}$ is unbiased for the number of tanks produced in that time period, and your overall production rate estimate $\hat{G}$ will also be unbiased.
It is true that you can get negative estimates in the early periods. So you will have to decide if you want to cap those at zero or use some other method to handle those cases.
Note that your estimator is unbiased only if the sampling is without replacement. If your sampling is with replacement, you will need to consider another estimator. | Aging data in the German tank problem
A reasonable approach may be to estimate the production rate by always using the maximum time period available. That is, create an estimate of $N$ every day, but use today's estimate along with the da |
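A small simulation of the proposed growth-rate estimator (an added sketch; it assumes the usual unbiased serial-number estimate $\hat{N} = m(1+1/k) - 1$, with $m$ the largest serial observed and $k$ the daily sample size, sampling without replacement):
set.seed(1)
N1 <- 1000; G <- 20          # true initial stock and daily production rate
days <- 30; k <- 15          # observe k serial numbers each day

est_G <- replicate(2000, {
  N_hat <- sapply(1:days, function(i) {
    N_i <- N1 + G * (i - 1)              # true total after i days
    m   <- max(sample.int(N_i, k))       # k serials drawn without replacement
    m * (1 + 1 / k) - 1                  # unbiased estimate of N_i
  })
  (N_hat[days] - N_hat[1]) / (days - 1)  # the day-i growth-rate estimator with i = days
})
mean(est_G)    # close to the true rate G = 20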
50,493 | DBSCAN: What is a Core Point? | In a database, all points are equal.
The blue point has 1 point in its neighborhood - itself.
The yellow points have 2 points in their neighborhood each.
The red points have 4-5 points in their neighborhood each.
Note that the definitions don't say "minPts other points"; but "minPts points". You can't ignore the one point you already know (what if it has duplicates?) | DBSCAN: What is a Core Point? | In a database, all points are equal.
The blue point has 1 point in its neighborhood - itself.
The yellow points have 2 points in their neighborhood each.
The red points have 4-5 points in their neigh | DBSCAN: What is a Core Point?
In a database, all points are equal.
The blue point has 1 point in its neighborhood - itself.
The yellow points have 2 points in their neighborhood each.
The red points have 4-5 points in their neighborhood each.
Note that the definitions don't say "minPts other points"; but "minPts points". You can't ignore the one point you already know (what if it has duplicates?) | DBSCAN: What is a Core Point?
In a database, all points are equal.
The blue point has 1 point in its neighborhood - itself.
The yellow points have 2 points in their neighborhood each.
The red points have 4-5 points in their neigh |
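A tiny numeric illustration of the counting convention described above (an added example mimicking the blue/yellow/red points): the eps-neighborhood count includes the point itself.
pts <- matrix(c(0, 0,                              # an isolated point ("blue")
                5, 0,  5.5, 0,                     # a pair ("yellow")
                10, 0, 10.3, 0, 10.6, 0, 10.9, 0), # a tight group ("red")
              ncol = 2, byrow = TRUE)
eps <- 1
D <- as.matrix(dist(pts))
neigh_counts <- rowSums(D <= eps)   # each row counts the point itself (distance 0)
neigh_counts                        # 1, 2, 2, 4, 4, 4, 4
which(neigh_counts >= 4)            # with minPts = 4, only the tight group are core points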
50,494 | What can go wrong using lagged terms as instrumental variables? | Consider a causal $ARMA(1,2)$ process
$$
Y_t=\phi Y_{t-1}+\epsilon_t+\theta_1\epsilon_{t-1}+\theta_2\epsilon_{t-2}
$$
Suppose our interest centers on estimating $\phi$, but we are not aware of the $MA$ components (or we just do not know how to fit ARMA models :-)).
One strategy might therefore consist of just running an OLS regression of $Y_t$ on $Y_{t-1}$. That regression would, however, inconsistently estimate $\phi$, as the regressor $Y_{t-1}$ is correlated with $\epsilon_{t-1}$ and $\epsilon_{t-2}$, which can be seen directly from shifting the ARMA(1,2) model by one period:
$$
Y_{t-1}=\phi Y_{t-2}+\epsilon_{t-1}+\theta_1\epsilon_{t-2}+\theta_2\epsilon_{t-3}
$$
(One might work out the exact plim as for IV below.)
Suppose we instead use IV to estimate $\phi$. The IV estimator of $\phi$ using the lag $Y_{t-2}$ as an instrument for $Y_{t-1}$ is
$$
\hat{\phi}_{IV}=\frac{\sum_tY_{t-2}Y_{t}}{\sum_tY_{t-2}Y_{t-1}}
$$
Its probability limit therefore is
$$
\hat{\phi}_{IV}=\frac{\frac{1}{T}\sum_tY_{t-2}Y_{t}}{\frac{1}{T}\sum_tY_{t-2}Y_{t-1}}\to_p\frac{\gamma_2}{\gamma_1},
$$
where $\gamma_j$ denotes the $j$th autocovariance.
We first find the $MA(\infty)$ representation of the process to find the autocovariances necessary for expressing the probability limit.
Matching coefficients in
$$
(1-\phi L)(\psi_0+\psi_1L+\psi_2L^2+\psi_3L^3+\ldots)=1+\theta_1L+\theta_2L^2
$$
gives
\begin{eqnarray*}
\psi_0&=&1\\
-\phi\psi_0+\psi_1&=&\theta_1\quad\Rightarrow\quad\psi_1=\theta_1+\phi\\
-\phi\psi_1+\psi_2&=&\theta_2\quad\Rightarrow\quad\psi_2=\theta_2+\phi(\theta_1+\phi)\\
-\phi\psi_2+\psi_3&=&0\quad\Rightarrow\quad\psi_3=\phi(\theta_2+\phi(\theta_1+\phi))\\
\psi_j&=&\phi^{j-2}(\theta_2+\phi(\theta_1+\phi))\qquad j>1
\end{eqnarray*}
We may now use this to find $\gamma_1$ and $\gamma_2$.
From the general result on the autocovariance of an $MA(\infty)$ process that $\gamma_k=\sigma^2\sum_{j=0}^{\infty}\psi_j\psi_{j+k}$ we conclude that $\gamma_1=\sigma^2\sum_{j=0}^{\infty}\psi_j\psi_{j+1}$. Hence,
\begin{eqnarray*}
\gamma_1&=&\sigma^2\left[\theta_1+\phi+(\theta_1+\phi)(\theta_2+\phi(\theta_1+\phi))+(\theta_2+\phi(\theta_1+\phi))^2\sum_{j=2}^\infty\phi^{j-2}\phi^{j-1}\right]\\
&=&\sigma^2\left[\theta_1+\phi+(\theta_1+\phi)(\theta_2+\phi(\theta_1+\phi))+\phi\frac{(\theta_2+\phi(\theta_1+\phi))^2}{1-\phi^2}\right],
\end{eqnarray*}
as $\sum_{j=2}^\infty\phi^{j-2}\phi^{j-1}=\sum_{j=2}^\infty\phi^{2j-3}=\phi\sum_{j=0}^\infty\phi^{2j}$. Similarly,
\begin{eqnarray*}
\gamma_2&=&\sigma^2\sum_{j=0}^{\infty}\psi_j\psi_{j+2}\\
&=&\sigma^2\left[\theta_2+\phi(\theta_1+\phi)+(\theta_1+\phi)\phi(\theta_2+\phi(\theta_1+\phi))+(\theta_2+\phi(\theta_1+\phi))^2\sum_{j=2}^\infty\phi^{j-2}\phi^{j}\right]\\
&=&\sigma^2\left[\theta_2+\phi(\theta_1+\phi)+(\theta_1+\phi)\phi(\theta_2+\phi(\theta_1+\phi))+\phi^2\frac{(\theta_2+\phi(\theta_1+\phi))^2}{1-\phi^2}\right]
\end{eqnarray*}
The IV estimator therefore converges to
$$
\hat{\phi}_{IV}\to_p\frac{\sigma^2\left[\theta_2+\phi(\theta_1+\phi)+(\theta_1+\phi)\phi(\theta_2+\phi(\theta_1+\phi))+\phi^2\frac{(\theta_2+\phi(\theta_1+\phi))^2}{1-\phi^2}\right]}{\sigma^2\left[\theta_1+\phi+(\theta_1+\phi)(\theta_2+\phi(\theta_1+\phi))+\phi\frac{(\theta_2+\phi(\theta_1+\phi))^2}{1-\phi^2}\right]}
$$
This does not equal $\phi$ in general.
Intuitively, the instrument then is not uncorrelated with the error term, as $E(Y_{t-2}\epsilon_{t-2})\neq0$.
If, however, $\theta_2=0$ (i.e., we have an $ARMA(1,1)$) the IV estimator would be consistent for $\phi$:
$$
\hat{\phi}_{IV}\to_p\frac{\phi(\theta_1+\phi)+\phi^2(\theta_1+\phi)^2+\phi^2\frac{(\phi(\theta_1+\phi))^2}{1-\phi^2}}{\theta_1+\phi+\phi(\theta_1+\phi)^2+\phi\frac{(\phi(\theta_1+\phi))^2}{1-\phi^2}}=\phi
$$
The result shows up similarly in the estimation of dynamic panel data models, namely that there must not be higher-order autocorrelation so that first differencing to remove the fixed effects does not induce correlation between the differenced error terms and the instruments. | What can go wrong using lagged terms as instrumental variables? | Consider a causal $ARMA(1,2)$ process
$$
Y_t=\phi Y_{t-1}+\epsilon_t+\theta_1\epsilon_{t-1}+\theta_2\epsilon_{t-2}
$$
Suppose our interest centers on estimating $\phi$, but we are not aware of the $MA | What can go wrong using lagged terms as instrumental variables?
Consider a causal $ARMA(1,2)$ process
$$
Y_t=\phi Y_{t-1}+\epsilon_t+\theta_1\epsilon_{t-1}+\theta_2\epsilon_{t-2}
$$
Suppose our interest centers on estimating $\phi$, but we are not aware of the $MA$ components (or we just do not know how to fit ARMA models :-)).
One strategy might therefore consist of just running an OLS regression of $Y_t$ on $Y_{t-1}$. That regression would however inconsistently estimate $\phi$, as the regressor $Y_{t-1}$ is correlated with $\epsilon_{t-1}$ and $\epsilon_{t-2}$, which can be seen directly from shifting the ARMA(2,1) model by one period:
$$
Y_{t-1}=\phi Y_{t-2}+\epsilon_{t-1}+\theta_1\epsilon_{t-2}+\theta_2\epsilon_{t-3}
$$
(One might work out the exact plim as for IV below.)
Suppose we instead use IV to estimate $\phi$. The IV estimator of $\phi$ using the lag $Y_{t-2}$ as an instrument for $Y_{t-1}$ is
$$
\hat{\phi}_{IV}=\frac{\sum_tY_{t-2}Y_{t}}{\sum_tY_{t-2}Y_{t-1}}
$$
Its probability limit therefore is
$$
\hat{\phi}_{IV}=\frac{\frac{1}{T}\sum_tY_{t-2}Y_{t}}{\frac{1}{T}\sum_tY_{t-2}Y_{t-1}}\to_p\frac{\gamma_2}{\gamma_1},
$$
where $\gamma_j$ denotes the $j$th autocovariance.
We first find the $MA(\infty)$ representation of the process to find the autocovariances necessary for expressing the probability limit.
Matching coefficients in
$$
(1-\phi L)(\psi_0+\psi_1L+\psi_2L^2+\psi_3L^3+\ldots)=1+\theta_1L+\theta_2L^2
$$
gives
\begin{eqnarray*}
\psi_0&=&1\\
-\phi\psi_0+\psi_1&=&\theta_1\quad\Rightarrow\quad\psi_1=\theta_1+\phi\\
-\phi\psi_1+\psi_2&=&\theta_2\quad\Rightarrow\quad\psi_2=\theta_2+\phi(\theta_1+\phi)\\
-\phi\psi_2+\psi_3&=&0\quad\Rightarrow\quad\psi_3=\phi(\theta_2+\phi(\theta_1+\phi))\\
\psi_j&=&\phi^{j-2}(\theta_2+\phi(\theta_1+\phi))\qquad j>1
\end{eqnarray*}
We may now use this to find $\gamma_1$ and $\gamma_2$.
From the general result on the autocovariance of an $MA(\infty)$ process that $\gamma_k=\sigma^2\sum_{j=0}^{\infty}\psi_j\psi_{j+k}$ we conclude that $\gamma_1=\sigma^2\sum_{j=0}^{\infty}\psi_j\psi_{j+1}$. Hence,
\begin{eqnarray*}
\gamma_1&=&\sigma^2\left[\theta_1+\phi+(\theta_1+\phi)(\theta_2+\phi(\theta_1+\phi))+(\theta_2+\phi(\theta_1+\phi))\sum_{j=2}^\infty\phi^{j-2}\phi^{j-1}\right]\\
&=&\sigma^2\left[\theta_1+\phi+(\theta_1+\phi)(\theta_2+\phi(\theta_1+\phi))+\phi\frac{(\theta_2+\phi(\theta_1+\phi))}{1-\phi^2}\right],
\end{eqnarray*}
as $\sum_{j=2}^\infty\phi^{j-2}\phi^{j-1}=\sum_{j=2}^\infty\phi^{2j-3}=\phi\sum_{j=0}^\infty\phi^{2j}$. Similarly,
\begin{eqnarray*}
\gamma_2&=&\sigma^2\sum_{j=0}^{\infty}\psi_j\psi_{j+2}\\
&=&\sigma^2\left[\theta_2+\phi(\theta_1+\phi)+(\theta_1+\phi)\phi(\theta_2+\phi(\theta_1+\phi))+(\theta_2+\phi(\theta_1+\phi))\sum_{j=2}^\infty\phi^{j-2}\phi^{j}\right]\\
&=&\sigma^2\left[\theta_2+\phi(\theta_1+\phi)+(\theta_1+\phi)\phi(\theta_2+\phi(\theta_1+\phi))+\phi^2\frac{(\theta_2+\phi(\theta_1+\phi))}{1-\phi^2}\right]
\end{eqnarray*}
The IV estimator therefore converges to
$$
\hat{\phi}_{IV}\to_p\frac{\sigma^2\left[\theta_2+\phi(\theta_1+\phi)+(\theta_1+\phi)\phi(\theta_2+\phi(\theta_1+\phi))+\phi^2\frac{(\theta_2+\phi(\theta_1+\phi))}{1-\phi^2}\right]}{\sigma^2\left[\theta_1+\phi+(\theta_1+\phi)(\theta_2+\phi(\theta_1+\phi))+\phi\frac{(\theta_2+\phi(\theta_1+\phi))}{1-\phi^2}\right]}
$$
This does not equal $\phi$ in general.
Intuitively, the instrument then is not uncorrelated with the error term, as $E(Y_{t-2}\epsilon_{t-2})\neq0$.
If, however, $\theta_2=0$ (i.e., we have an $ARMA(1,1)$) the IV estimator would be consistent for $\phi$:
$$
\hat{\phi}_{IV}\to_p\frac{\phi(\theta_1+\phi)+\phi^2(\theta_1+\phi)^2+\phi^2\frac{(\phi(\theta_1+\phi))}{1-\phi^2}}{\theta_1+\phi+\phi(\theta_1+\phi)^2+\phi\frac{(\phi(\theta_1+\phi))}{1-\phi^2}}=\phi
$$
The result shows up similarly in the estimation of dynamic panel data models, namely that there must not be higher-order autocorrelation so that first differencing to remove the fixed effects does not induce correlation between the differenced error terms and the instruments. | What can go wrong using lagged terms as instrumental variables?
Consider a causal $ARMA(1,2)$ process
$$
Y_t=\phi Y_{t-1}+\epsilon_t+\theta_1\epsilon_{t-1}+\theta_2\epsilon_{t-2}
$$
Suppose our interest centers on estimating $\phi$, but we are not aware of the $MA |
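A quick Monte Carlo check of the two cases derived above (an added sketch with arbitrary parameter values): the lag-two IV estimate of $\phi$ settles away from the truth under ARMA(1,2) errors but stays close to it under ARMA(1,1).
set.seed(1)
phi <- 0.5; theta1 <- 0.4; theta2 <- 0.3
n <- 50000

iv_phi <- function(y) {
  t <- 3:length(y)
  sum(y[t - 2] * y[t]) / sum(y[t - 2] * y[t - 1])
}

y12 <- arima.sim(model = list(ar = phi, ma = c(theta1, theta2)), n = n)  # ARMA(1,2)
y11 <- arima.sim(model = list(ar = phi, ma = theta1), n = n)             # ARMA(1,1)

iv_phi(y12)   # noticeably away from phi = 0.5
iv_phi(y11)   # close to phi = 0.5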
50,495 | Analysing rank-ordered data using mlogit | My experience is still rather limited with mlogit package, but if I read Croissant vignette correctly (see the beginning of sec. 1.2 Model description, page 7), the alt variable in your model is specified as alternative specific with a generic coefficient and NOT as an individual specific covariate---those variables are placed between the pipes. | Analysing rank-ordered data using mlogit | My experience is still rather limited with mlogit package, but if I read Croissant vignette correctly (see the beginning of sec. 1.2 Model description, page 7), the alt variable in your model is speci | Analysing rank-ordered data using mlogit
My experience is still rather limited with mlogit package, but if I read Croissant vignette correctly (see the beginning of sec. 1.2 Model description, page 7), the alt variable in your model is specified as alternative specific with a generic coefficient and NOT as an individual specific covariate---those variables are placed between the pipes. | Analysing rank-ordered data using mlogit
My experience is still rather limited with mlogit package, but if I read Croissant vignette correctly (see the beginning of sec. 1.2 Model description, page 7), the alt variable in your model is speci |
50,496 | Analysing rank-ordered data using mlogit | I used the model format you mentioned at the end and was able to reproduce the LRS listed for example 8.1 in the J Marden text (Analyzing and Modeling Rank Data):
> exData = t(matrix(c(c(1,2,4,3,5), c(2,1,4,3,5), c(2,3,5,4,1), rep(c(2,1,5,3,4), 3)), nrow=5))
> exData
[,1] [,2] [,3] [,4] [,5]
[1,] 1 2 4 3 5
[2,] 2 1 4 3 5
[3,] 2 3 5 4 1
[4,] 2 1 5 3 4
[5,] 2 1 5 3 4
[6,] 2 1 5 3 4
> exTbl = as.data.table(exData)[, id := 1:.N][, melt(.SD, id.vars='id')][order(id, variable)][, setnames(.SD, 'value', 'ch')][, setnames(.SD, 'variable', 'alt')]
> exTbl %>% head
id alt ch
1: 1 V1 1
2: 1 V2 2
3: 1 V3 4
4: 1 V4 3
5: 1 V5 5
6: 2 V1 2
> exLogit = mlogit.data(exTbl, shape='long', ranked=T, choice='ch', alt.var='alt', id.var='id')
> exLogit %>% head
id alt ch
1.V1 1 V1 TRUE
1.V2 1 V2 FALSE
1.V3 1 V3 FALSE
1.V4 1 V4 FALSE
1.V5 1 V5 FALSE
2.V2 1 V2 TRUE
> m1 = summary(mlogit(ch ~ alt | -1 | -1, exLogit, reflevel='V5'))
> m1
Call:
mlogit(formula = ch ~ alt | -1 | -1, data = exLogit, reflevel = "V5",
method = "nr", print.level = 0)
Frequencies of alternatives:
V5 V1 V2 V3 V4
0.1667 0.2500 0.2500 0.0833 0.2500
nr method
6 iterations, 0h:0m:0s
g(-H)^-1g = 4.08E-06
successive function values within tolerance limits
Coefficients :
Estimate Std. Error t-value Pr(>|t|)
V1:(intercept) 4.182 1.465 2.86 0.0043 **
V2:(intercept) 4.689 1.514 3.10 0.0020 **
V3:(intercept) -0.726 0.867 -0.84 0.4024
V4:(intercept) 2.061 1.133 1.82 0.0690 .
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Log-Likelihood: -14.4
> llNull = -sum(log(1:5)) * 6
> llNull
[1] -28.72
> m1$logLik
'log Lik.' -14.4 (df=4)
> LRS = 2 * (m1$logLik - llNull)
> LRS
'log Lik.' 28.64 (df=4)
>
I lifted the method to compute the log likelihood for the null model from another stats package, so it may or may not be correct. That said, it does make the result line up with example 8.1 in the text, so it's likely correct.
I was surprised that the log likelihood value for the null model is only a function of the number of alternatives and the number of subjects, and not influenced by the data observed. The observed data seem to influence only the log likelihood of the alternative model. If someone has an intuitive explanation as to why this is the case, please feel free to add it to this post. | Analysing rank-ordered data using mlogit | I used the model format you mentioned at the end and was able to reproduce the LRS listed for example 8.1 in the J Marden text (Analyzing and Modeling Rank Data):
> exData = t(matrix(c(c(1,2,4,3,5), c | Analysing rank-ordered data using mlogit
I used the model format you mentioned at the end and was able to reproduce the LRS listed for example 8.1 in the J Marden text (Analyzing and Modeling Rank Data):
> exData = t(matrix(c(c(1,2,4,3,5), c(2,1,4,3,5), c(2,3,5,4,1), rep(c(2,1,5,3,4), 3)), nrow=5))
> exData
[,1] [,2] [,3] [,4] [,5]
[1,] 1 2 4 3 5
[2,] 2 1 4 3 5
[3,] 2 3 5 4 1
[4,] 2 1 5 3 4
[5,] 2 1 5 3 4
[6,] 2 1 5 3 4
> exTbl = as.data.table(exData)[, id := 1:.N][, melt(.SD, id.vars='id')][order(id, variable)][, setnames(.SD, 'value', 'ch')][, setnames(.SD, 'variable', 'alt')]
> exTbl %>% head
id alt ch
1: 1 V1 1
2: 1 V2 2
3: 1 V3 4
4: 1 V4 3
5: 1 V5 5
6: 2 V1 2
> exLogit = mlogit.data(exTbl, shape='long', ranked=T, choice='ch', alt.var='alt', id.var='id')
> exLogit %>% head
id alt ch
1.V1 1 V1 TRUE
1.V2 1 V2 FALSE
1.V3 1 V3 FALSE
1.V4 1 V4 FALSE
1.V5 1 V5 FALSE
2.V2 1 V2 TRUE
> m1 = summary(mlogit(ch ~ alt | -1 | -1, exLogit, reflevel='V5'))
> m1
Call:
mlogit(formula = ch ~ alt | -1 | -1, data = exLogit, reflevel = "V5",
method = "nr", print.level = 0)
Frequencies of alternatives:
V5 V1 V2 V3 V4
0.1667 0.2500 0.2500 0.0833 0.2500
nr method
6 iterations, 0h:0m:0s
g(-H)^-1g = 4.08E-06
successive function values within tolerance limits
Coefficients :
Estimate Std. Error t-value Pr(>|t|)
V1:(intercept) 4.182 1.465 2.86 0.0043 **
V2:(intercept) 4.689 1.514 3.10 0.0020 **
V3:(intercept) -0.726 0.867 -0.84 0.4024
V4:(intercept) 2.061 1.133 1.82 0.0690 .
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Log-Likelihood: -14.4
> llNull = -sum(log(1:5)) * 6
> llNull
[1] -28.72
> m1$logLik
'log Lik.' -14.4 (df=4)
> LRS = 2 * (m1$logLik - llNull)
> LRS
'log Lik.' 28.64 (df=4)
>
I lifted the method to compute the log likelihood for the null model from another stats package, so it may/may not be correct. Although using it does make the result line up with the example 8.1 in the text, so it's likely correct.
I was surpised that the log likelihood value for the null model is only a function of the number of alternatives and number of subjects, and not influenced by the data observed. The data observed seems to only influence the log likelihood for the alternative model. If someone has an intuitive explanation as to why this is the case, please feel free to add that to this post. | Analysing rank-ordered data using mlogit
I used the model format you mentioned at the end and was able to reproduce the LRS listed for example 8.1 in the J Marden text (Analyzing and Modeling Rank Data):
> exData = t(matrix(c(c(1,2,4,3,5), c |
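One possible intuition for the last point of the answer above (an added note, assuming the usual exploded-logit null): under the null model every complete ranking of the $k$ alternatives is equally likely, with probability $1/k!$, so each subject contributes $-\log(k!)$ to the log likelihood regardless of which ranking they report; only $k$ and the number of subjects $n$ can move the null log likelihood. With $k = 5$ and $n = 6$ this is exactly the -sum(log(1:5)) * 6 used above:
-6 * log(factorial(5))   # -28.72, matching llNull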
50,497 | Use of Random Forests for variable importance as preprocess before another analysis | Note: this answer is incomplete.
Toy Problem:
This is a trivial problem that is typically small in dimension and as accessible as possible to human intuition and learning. Personally, I find this (link, link) demo to be accessible for my intuition and learning. So do the folks at the Max Planck Institute for Biological Cybernetics.
The form of the "non-augmented" data is:
$$ \begin{bmatrix}
Class & X & Y\\
A& x_1 & y_1\\
A& x_2 & y_2\\
\vdots & \vdots & \vdots\\
B& x_n & y_n\\
\end{bmatrix}$$
The "physics" of the "good" class is a spiral starting at the origin while the bad class is uniformly random. The human eye can see that quickly. When evaluating "variable importance" we are trying to reduce the number of columns, but the non-augmented has no columns to reduce, thus we augment with random. There is the problem of overlap, it would be better to reclassify some of the uniform random within a range of "ideal" to class "A".
So here is the code that makes the "non-augmented" data:
#housekeeping
rm(list=ls())
#library
library(randomForest)
#for reproducibility
set.seed(08012015)
#basic
n <- 1:2000
r <- 0.05*n +1
th <- n*(4*pi)/max(n)
#polar to cartesian
x1=r*cos(th)
y1=r*sin(th)
#add noise
x2 <- x1+0.1*r*runif(min = -1,max = 1,n=length(n))
y2 <- y1+0.1*r*runif(min = -1,max = 1,n=length(n))
#append salt and pepper
x3 <- runif(min = min(x2),max = max(x2),n=length(n))
y3 <- runif(min = min(y2),max = max(y2),n=length(n))
X <- c(x2,x3)
Y <- c(y2,y3)
myClass <- as.factor(c(as.vector(matrix(1,nrow=length(y2))),
as.vector(matrix(2,nrow=length(y3))) ))
#plot class "A" derivation
plot(x1,y1,pch=18,type="l",col="Red", lwd=2)
points(x2,y2,pch=18)
points(x3,y3,pch=1,col="Blue")
legend(x = 65,y=65,
legend = c("true","sampled A","sampled B"),
col = c("Red","Black","Blue"),
lty = c(1,-1,-1),
pch=c(-1,18,1))
Here is a plot of the non-augmented data.
Here is the code to augment the "toy" for variable importance detection, and assemble into a single data frame.
#Create "bad" (uninformative) columns by resampling the good columns, breaking their link to the class
x5 <- sample(x = X, size = length(X),replace = T)
y5 <- sample(x = Y, size = length(Y),replace = T)
#assemble data into frame
data <- data.frame(myClass,
c(X),c(Y),c(x5),c(y5) )
names(data) <- c("myclass","x","y","n1","n2")
First a random forest (not yet with t-tests as in the Tuv reference) is used on all input columns to determine relative variable importance, and to get a sense of sufficient number of trees. It is assumed that more trees are required to get a decent fit using low importance data than with uniformly higher importance data.
#train random forest - I like h2o, but this is textbook Breiman
fit.rf_imp <- randomForest(data[2:5],data$myclass,
ntree = 2000, replace=TRUE, nodesize = 1,
localImp=T )
varImpPlot(fit.rf_imp)
plot(fit.rf_imp)
grid()
importance(fit.rf_imp)
The results for importance (in plot form) are:
The mean decrease in accuracy and mean decrease in gini have a consistent message: "n1 and n2 are low importance columns".
The results for the convergence plot are:
Although somewhat qualitative, it appears that some acceptable level of convergence has occurred by 500 trees. It is also worth noting that the converged error rate is about 22%. This leads to the inference that the "classification error" within the region of "A" is about 1 in 5.
The code for an updated forest, one not including low-importance columns, is:
fit.rf <- randomForest(data[2:3],data$myclass,
ntree = 500, replace=TRUE, nodesize = 1,
localImp=T )
A plot of actual vs. predicted has excellent accuracy. Code to derive the plot follows:
data2 <- predict(fit.rf,newdata=data[data$myclass==1,c(2,3,4,5)],
type="response")
#separate class "1" from training data
idx1a <- which(data[,1]==1)
#separate class "1" from the predicted data
idx1b <- which(data2==1)
#separate class "2" from training data
idx2a <- which(data[,1]==2)
#separate class "2" from the predicted data
idx2b <- which(data2==2)
#show the difference in classes before and after RF based filter
#class "B" aka 2, uniform background
plot(data[idx2a,2],data[idx2a,3])
points(data[idx2b,2],data[idx2b,3],col="Blue")
#class "A" aka 1, red spiral
points(data[idx1a,2],data[idx1a,3])
points(data[idx1b,2],data[idx1b,3],col="Red",pch=18)
The actual plot follows.
For a very simple toy problem, a basic randomForest has been used to determine importance of variables, and to attempt to classify "in" versus "out".
I have an older laptop. It is a Dell Latitude E-7440 with an i7-4600 and 16 GB of RAM running Windows 7. You might have something fancy, or something even older than mine. You could have different OS, R version, or hardware. Your results are likely to differ from mine in absolute scale, but relative scale should still be informative.
Here is the code I used to benchmark the "variable importance" random forest:
library(microbenchmark)   # provides microbenchmark(); not loaded earlier in this answer

res1 <- microbenchmark(randomForest(data[2:5],data$myclass,
ntree = 2000, replace=TRUE, nodesize = 1,
localImp=T ),
times=100L)
print(res1)
and here is the code I used to benchmark the fit of a random forest to the important variables only:
res2 <- microbenchmark(randomForest(data[2:3],data$myclass,
ntree = 500, replace=TRUE, nodesize = 1,
localImp=T ),
times=100L)
print(res2)
The time-result for the variable importance was:
min lq mean median uq max neval
1 9.323244 9.648383 9.967486 9.84808 10.05356 12.12949 100
Over 100 iterations the mean time-to-compute was 9.96 seconds. This is the "time to beat" for "incomparably faster" applied to the toy problem.
The time-result for the reduced model was:
min lq mean median uq max neval
1.515134 1.598504 1.638809 1.634209 1.67372 2.038021 100
When computed over 100 iterations, the mean time-to-compute was 1.64 seconds. Running on the important data only, and with a "reasonable" number of trees, reduced the run-time by about 84%.
INCOMPLETE.
References:
http://www.statistik.uni-dortmund.de/useR-2008//slides/Strobl+Zeileis.pdf
https://arxiv.org/pdf/1804.03515.pdf (updated 11/27/2018)
Awaiting:
random Forest on non-toy, with timing. HIVA is not the right data, even though I asked for it. I need an intermediate set.
random Forest + t-test solution on toy, with timing
random Forest + t-test solution on non-toy, with timing
svm solution on toy, with timing
svm solution on non-toy, with timing | Use of Random Forests for variable importance as preprocess before another analysis | Note: this answer is incomplete.
Toy Problem:
This is a trivial problem that is typically small in dimension and as accessible as possible to human intuition and learning. Personally, I find this (li | Use of Random Forests for variable importance as preprocess before another analysis
50,498 | Which Machine Learning book to choose (APM, MLAP or ISL)? | Opinions about books are always subjective. I personally liked both of these:
Applied Predictive Modeling by Kuhn and Johnson
An Introduction to Statistical Learning by James, Witten, Hastie, and Tibshirani (ISL)
I like ISL better since it explains more of the underlying statistics rather than only applications. In addition, its more advanced companion, The Elements of Statistical Learning (ESL), is one of the best books in machine learning; once you are familiar with the notation in ISL, you can go deeper by reading ESL.
Applied Predictive Modeling has relatively little math but a lot of applications; you can play with the R caret library to learn it quickly.
My suggestion: if you want to go deeper, read the ISL book and then the ESL book.
If you just want a rough understanding of what predictive models are, play with the caret library on UCI datasets or check some Kaggle scripts.
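For example, here is a minimal caret sketch of the kind of quick experiment meant above; the iris data and the "rf" model choice are my own illustrative assumptions, not something either book prescribes:
library(caret)
data(iris)
#10-fold cross-validated random forest on a small built-in dataset
ctrl <- trainControl(method = "cv", number = 10)
fit <- train(Species ~ ., data = iris, method = "rf", trControl = ctrl)
print(fit)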
50,499 | What is so mysterious about machine learning? | E.g., neural networks function essentially like black boxes. We know the general principles on which they work and how to train them, but we don't know what features a neural network computes in its hidden layers. We can guess and test our guesses, but there is no guarantee that we will succeed, or that the discovered features will correspond to any known characteristics of the phenomenon we are using the network for.
It is nearly impossible to convert a trained neural network into a bunch of if-conditions, and if you do perform such a conversion, then god help you in figuring out meaningful names for the features the network learned in its hidden layers.
50,500 | Using Keras LSTM RNN for variable length sequence prediction | I see there was an issue filed last year about this. The author recommends zero-padding or batches of size 1:
Zero-padding
import keras  # pad_sequences lives under keras.preprocessing.sequence
X = keras.preprocessing.sequence.pad_sequences(sequences, maxlen=100)  # pad/truncate every sequence to length 100
model.fit(X, y, batch_size=32, nb_epoch=10)  # 'nb_epoch' was later renamed 'epochs' in Keras 2
Batches of size 1
import numpy as np  # needed for np.array
for seq, label in zip(sequences, y):
    model.train_on_batch(np.array([seq]), np.array([label]))  # the issue used the old 'model.train'; train_on_batch is the equivalent call