Dataset columns: idx (int64, 1 to 56k); question (string, 15 to 155 chars); answer (string, 2 to 29.2k chars); question_cut (string, 15 to 100 chars); answer_cut (string, 2 to 200 chars); conversation (string, 47 to 29.3k chars); conversation_cut (string, 47 to 301 chars).
54,101
Motivations for experiment design in statistical learning?
This is an interesting question. The following stored Google search gives many interesting hits, and both ways: machine learning used in experimental design and experimental design used in machine learning. Basically, experimental design is about planning the collection of data. That must be useful in statistical learning/machine learning, as you can get much better results from your analysis with better data. One obvious application is the planning of simulation experiments, since in that case the data collection is completely under your control. You could do worse than start with the excellent book by Box, Hunter & Hunter. Look also at this list. This interesting-looking paper asks us to rethink experimental design as algorithm design. So use that required course to learn not only the classics, but also to peek into some applications in the fields you mentioned, such as Bayesian experimental design, combinatorics(?), Markov decision processes, and stochastic processes. Active learning seems to be a buzzword for combining learning with design ... reinforcement learning much the same! That viewpoint is supported by this Wikipedia article. Computerized adaptive testing can be seen as a forerunner of active learning, and it certainly uses some experimental design. Some explanation of how that works can be found here: Statistical interpretation of Maximum Entropy Distribution. While at it, the experiment-design tag covers many posts here too, many still in need of answers & upvotes. So going through those, answering, and upvoting would be a great learning experience ...
54,102
Study design question: What's the best design to assess harm of an exposure?
I start with my methodological thoughts and offer some footnotes with thoughts on the ethics. Take both of these with a large grain of salt, because we know very little about your specific case. That makes both the methodological advice and the comments on the ethics difficult, and my remarks could completely miss key considerations.

Randomization (either to steroids vs. an alternative treatment*, or with respect to delaying immunotherapy**) is in theory your best bet for really establishing causation. If you truly cannot do that, then think about why a prospective study is needed; one reason might be that it's easier to get information on all possible confounders and exposures than in a retrospective design. If so, make sure that you really get this information. Just being able to write "in this prospective study..." in a publication is usually not considered an adequate reason for a prospective study. Alternatives to prospective studies include, for example, retrospective cohorts or case-control studies.

If you do not randomize, you will end up somehow matching patients (either into small groups or strata) or adjusting for confounders in some manner. You may run into some serious difficulties here. Firstly, if it's about steroids or not, it might simply be the case that the medical conditions that required steroid treatment lead to a worse prognosis, and if nearly all patients with these medical conditions (or most of those with the worst severity of the condition) get steroids, there might be no realistic way of adjusting this away, or of finding truly matched patients with the same (or equally severe) history who did and did not get steroids. It might also be the other way around: the worse/more life-threatening the melanoma, the more prone patients might be to conditions that require steroids (e.g. due to the melanoma or due to previous treatments for it). Thus, one big question is whether there are alternative treatments to steroids for the conditions for which the steroids are used. If there are, and if the choice of treatment is based on somewhat random physician preferences, then that's the best situation for a non-randomized study. If there are not, you will have a really hard time (it may not be possible) disentangling things. Secondly, when one looks at the time of initiation of immunotherapy (if you have the theory that the longer ago the steroids were used, the better), then "longer ago" vs. "more recently" might still be a serious confounding factor (e.g. "more recently" might mean that the patient has not fully recovered, and this affects their prognosis), just like what is discussed in the previous paragraph.

* It is always a difficult judgement whether randomization is ethical. In part, whether giving steroids is still ethical will depend on the strength of evidence that is already available. However, if you believe you can still find patients that would get steroids for a prospective study, it seems there is disagreement on this and/or there might be a population where the benefit/risk is still considered acceptable by some physicians. This obviously needs careful consideration, but usually pretty compelling data is needed to truly change clinical practice. Additionally, sometimes the problem might be the other way around: there might be patients for whom withholding steroids is not ethical.

** Delaying a potentially life-saving cancer therapy in a randomized fashion of course has serious ethical implications, too. Thus, this may not be a good target for intervention, either. The timing could be something to look at in an observational fashion, because one could then have patients with a more similar medical history. However, see the second caveat above.
54,103
Study design question: What's the best design to assess harm of an exposure?
"We have retrospective data suggesting worse survival for patients who received steroids, even when controlling for the indication for the steroids. Therefore it's not ethical to randomize patients to steroids."

This is exactly why you have to do randomized controlled trials. The result is not surprising: someone who needs steroids is going to be much sicker than someone who doesn't. It's prevalent case bias. To the best of my knowledge, immunotherapies don't interact with steroids; rather, people on immunotherapy have intermittent neutropenia and are more prone to infection, and it becomes increasingly challenging to differentiate actual infection from the treatment effects of rituximab, etc. However, if oncologists are not diligent in pathology, their patients are treated for inflammatory conditions using what would be the standard of care in most cases. A famous example is lung infection (pneumonia) vs. non-infectious pneumonitis. The best solution is to:

1. Identify the standard of care, and revise the protocol to make sure that appropriate diagnostic steps are taken regarding treatment.
2. Require investigators to report pathology data and other hematology such as WBC, CRP, etc.
3. In situations where investigators have inconclusive pathology, use a randomized design comparing steroidal vs. non-steroidal treatment.
4. Follow for time to first worsening of an adverse event, or death.
5. Test for differences with a log-rank or $G^{\rho,\gamma}$ (Fleming-Harrington) weighted test.
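For steps 4 and 5, a minimal Python sketch using the lifelines package might look as follows; the arm labels, follow-up times, and event indicators are hypothetical placeholders, not data from the question.

import numpy as np
from lifelines.statistics import logrank_test

# Hypothetical follow-up times (days to first AE worsening or death) and
# event indicators (1 = event observed, 0 = censored) for the two arms.
t_steroid = np.array([30, 45, 60, 90, 120, 150])
e_steroid = np.array([1, 1, 0, 1, 0, 1])
t_control = np.array([40, 80, 100, 140, 160, 200])
e_control = np.array([1, 0, 1, 0, 1, 0])

# Standard (unweighted) log-rank test; weighted Fleming-Harrington-type
# variants are available in standard survival software as well.
result = logrank_test(t_steroid, t_control,
                      event_observed_A=e_steroid,
                      event_observed_B=e_control)
print(result.test_statistic, result.p_value)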
54,104
Study design question: What's the best design to assess harm of an exposure?
The scholarly literature guiding you toward the best way to test a scientific hypothesis of comparative clinical effectiveness is quite large. Have a look, for instance, at Rosenbaum's Design of Observational Studies: https://www.amazon.com/Design-Observational-Studies-Springer-Statistics-ebook/dp/B00DZ0PT76/. Having said this, I think some informal guidance can indeed be provided. First, carefully decide what you want to test/measure and which patients you want to inform when the study is completed and the data collected. This will tell you which patients to include and which endpoints to focus on. Second, ensure best research practice (irrespective of the presence or lack of randomization): explicit selection criteria, a formal data collection process, and validated outcome ascertainment. Third, figure out the sample size: this may depend on pragmatic issues, funding, or a formal power analysis, but in general terms the more patients (from several centers), the better. Regarding analysis, there are plenty of options. Most experts would argue, however, that propensity matching, inverse probability of treatment weighting, and instrumental variable analysis are the most advanced and least bias-prone ones. Bottom line: a prospective observational study is always a good start to confirm a promising retrospective one, but remember that association is not causation, and in the modern era of industrial medicine most treatments need to be supported by randomized trial data...
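As a rough illustration of one of the analysis options mentioned above (inverse probability of treatment weighting), here is a minimal sketch on simulated data; the confounders, treatment model, and effect size are all made up for illustration, and a real analysis would add balance diagnostics, a weighted outcome model, and proper (robust or bootstrap) standard errors.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical observational data: measured confounders, treatment indicator, outcome.
rng = np.random.default_rng(1)
n = 1000
x = rng.normal(size=(n, 3))
ps_true = 1 / (1 + np.exp(-(x @ np.array([0.8, -0.5, 0.3]))))
treat = rng.binomial(1, ps_true)
y = 1.0 * treat + x @ np.array([0.5, 0.5, -0.2]) + rng.normal(size=n)

# 1. Estimate propensity scores from the measured confounders.
ps = LogisticRegression().fit(x, treat).predict_proba(x)[:, 1]

# 2. Inverse-probability-of-treatment weights.
w = np.where(treat == 1, 1 / ps, 1 / (1 - ps))

# 3. Weighted difference in mean outcomes as a crude IPTW effect estimate.
effect = (np.average(y[treat == 1], weights=w[treat == 1])
          - np.average(y[treat == 0], weights=w[treat == 0]))
print(effect)  # should be close to the simulated treatment effect of 1.0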
54,105
Family-wise Type I error OLS regression
If your goal is confirmation through hypothesis tests, you should correct for the FWER (or FDR), regardless of the type of model used. If you have a source for the claim to the contrary, please include it in your question. However, confirmation isn't the only reason someone would use linear regression. You may want to simply predict the outcome variable, or you might just be interested in the magnitude of the effects that the explanatory variables have on the outcome. Personally, I am rarely interested in the $p$-values of my linear models.

Even if you are interested in the $p$-values of a linear regression, which $p$-values you should correct for multiple testing depends on what you are doing. For example:

- Whether the intercept differs significantly from $0$ is rarely interesting. Including this $p$-value in the correction can inflate the type II error rate by increasing the number of tests, or even increase the type I error rate by including a nonsense significant result (in the case of FDR correction);
- If your research question revolves around the effect of a single explanatory variable, but you want to include potential confounders, there is no need to even look at those other variables' $p$-values;
- Similarly, if your research question concerns the presence of a (significant) interaction effect, the significance of the marginal effects may be irrelevant.

For this reason, there is no standard multiple testing correction applied to most of the default summaries of linear models, but you can of course apply your own after deciding which $p$-values matter. Contrast this with Tukey's honest significant difference: you are comparing every group with every other group. Not only is this the maximum number of hypothesis tests you can perform (increasing the risk of poor inference without some standard correction applied), but it also exists exclusively to perform comparisons, whereas linear regression in general can be used for all kinds of purposes.
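A minimal sketch of that workflow in Python with statsmodels, on simulated data: fit the linear model, keep only the $p$-values that answer the research question (here the slopes, not the intercept), and adjust those. The data-generating process and the choice of the Holm method are just illustrative assumptions.

import numpy as np
import statsmodels.api as sm
from statsmodels.stats.multitest import multipletests

# Simulated data: y depends on the first predictor only; the others are noise.
rng = np.random.default_rng(0)
n = 200
X = rng.normal(size=(n, 3))
y = 0.5 * X[:, 0] + rng.normal(size=n)

fit = sm.OLS(y, sm.add_constant(X)).fit()

# Correct only the p-values of interest (drop the intercept's).
pvals = fit.pvalues[1:]
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="holm")
print(p_adj, reject)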
54,106
Why are polynomial activation functions not used
There has been some work that experiments with quadratic activations (see "neural tensor networks"), but in general a disadvantage of second-order and higher polynomials is that they don't have a bounded derivative, which can lead to exploding gradients.
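A tiny numerical illustration of the unbounded-derivative point (toy values only): backpropagation multiplies activation derivatives across layers, and the derivative of a quadratic grows without bound in the pre-activation, while that of a bounded activation such as tanh never exceeds 1.

import numpy as np

# Pre-activations of increasing magnitude (toy values)
z = np.array([0.5, 2.0, 10.0, 100.0])

d_quadratic = 2 * z              # derivative of z**2: unbounded, grows with |z|
d_tanh = 1 - np.tanh(z) ** 2     # derivative of tanh(z): always in (0, 1]

print(d_quadratic)  # [  1.   4.  20. 200.]
print(d_tanh)       # roughly [7.9e-01, 7.1e-02, 8.2e-09, 0.0]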
54,107
Why are polynomial activation functions not used
Nutshell: Polynomial activation functions don't work, since they fail to have the main property that makes neural networks interesting.

Mathematical reason: Actually, there is a more rigorous reason why they are not used. In this paper, it is shown that the collection of all feed-forward neural networks can approximate any (reasonable) function if and only if the activation function is not a polynomial.

Explicit counter-example: The simplest non-constant polynomial functions are the affine functions. If affine activations could be used (i.e. if the universal approximation property were to hold for them), then linear regressions could approximate any continuous function, which isn't the case.
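To make the counter-example concrete, here is a small numerical check (illustrative only) that stacking layers with an affine/identity activation collapses to a single affine map, so no amount of depth buys anything beyond a linear model.

import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 3))

# Two "layers" with an affine (identity) activation: h = x W1 + b1, out = h W2 + b2
W1, b1 = rng.normal(size=(3, 4)), rng.normal(size=4)
W2, b2 = rng.normal(size=(4, 2)), rng.normal(size=2)
two_layer = (x @ W1 + b1) @ W2 + b2

# ...which is exactly one affine map, so only affine functions of x can be represented.
W, b = W1 @ W2, b1 @ W2 + b2
one_layer = x @ W + b

print(np.allclose(two_layer, one_layer))  # True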
54,108
How to make sense out of integration over discrete data points?
$X_i$ is a continuous random variable with pdf $f_{X_i}(x_i;\theta)$, and the expectation requires an integral. The integration limits cover the domain of $X_i$, not $i$ from $1$ to $n$. The $n$ samples you have are just realizations of $X_i$, i.e. $X_1, X_2, \ldots, X_n$. You're not integrating/summing across these variables. You're integrating for a particular $i$, say $X_2$, to obtain an expression for the expected value of interest.
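A small numerical illustration of the distinction (a toy example with $X_i \sim N(\theta, 1)$ and $S(x) = x^2$, chosen purely for illustration): the expectation is an integral of $S$ against the density over the domain of $X_i$, while the observed realizations only enter by averaging, which approximates that same integral.

import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

theta = 1.5
S = lambda x: x**2

# The expectation is an integral over the domain of X_i (here the whole real line)...
expectation, _ = quad(lambda x: S(x) * norm.pdf(x, loc=theta, scale=1.0),
                      -np.inf, np.inf)

# ...while the n observed values are just realizations of X_i; averaging S over
# them merely approximates the same integral.
rng = np.random.default_rng(0)
x_obs = rng.normal(theta, 1.0, size=10_000)
print(expectation, S(x_obs).mean())  # both close to theta**2 + 1 = 3.25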
54,109
How to make sense out of integration over discrete data points?
A full understanding of this issue requires a theory of integration over probability distributions, not just functions. However, even in such an abstract theory it's possible to visualize the integrals as areas under curves. The universal principle is that in any "reasonable" theory of integration, it should be possible to integrate by parts.

Consider the usual integral formulation of an expectation of a function $S$ for a distribution $F$ with density function $f(x) = F^\prime(x).$ This is given by $$E_X[S(X)] = \int_{-\infty}^\infty S(x) f(x) \mathrm{d}x.$$ Let's suppose $S$ has two properties, neither of which severely limits the theory: (1) $S$ is differentiable, and (2) the limiting values of $S(x)F(x)$ at $-\infty$ and $S(x)(1-F(x))$ at $\infty$ are zero. (This is equivalent to assuming $S$ has an expectation.) The first enables us to apply integration by parts, while the second enables us to cope with the infinite limits of integration. We may also assume $S(0) = 0$ without loss of generality, because replacing $S$ by $S - S(0)$ shifts both $S$ and $E_X[S(X)]$ by the same constant; this makes the boundary terms at zero vanish below.

To proceed, break the integral into two at some convenient (finite) value; for simplicity, let's break it at zero. In the negative region, write $f(x) = F^\prime(x)$, but in the positive region write $f(x) = -\frac{d}{dx}(1-F(x)).$ Integrating each integral separately by parts gives $$\eqalign{ E_X[S(X)] &= &\int_{-\infty}^0 S(x) f(x) \mathrm{d}x + \int_0^\infty S(x) f(x) \mathrm{d}x \\ &= &\left(S(x)F(x)\left|_{-\infty}^0\right. - \int_{-\infty}^0 S^\prime(x) F(x) \mathrm{d}x\right) + \\&&\left(-S(x)(1-F(x))\left|_0^\infty\right. + \int_0^{\infty} S^\prime(x) (1-F(x)) \mathrm{d}x\right) \\ &= &\int_0^{\infty} S^\prime(x) (1-F(x)) \mathrm{d}x - \int_{-\infty}^0 S^\prime(x) F(x) \mathrm{d}x.\tag{*} }$$

We may picture this process by drawing the areas under consideration, ignoring the factor of $S^\prime (x)$ for the moment: the left image graphs the density function $f,$ the middle graphs the distribution function $F,$ and the right graphs the function $F$ for negative values of $x$ and $1-F$ for positive values. When you scale the heights of the right-hand graph by the values of $S^\prime(x),$ the expectation is the corresponding (signed) area under the curve.

Turn now to a distribution without a density, such as a discrete distribution. Consider the corresponding graphs for a distribution that puts probability $1-p$ on the value $-1$ and $p$ on the value $1$ (a Rademacher-type distribution). (The plot of the density $f$ is omitted because, although it exists as a density, it does not exist as a function and therefore has no graph.)

As an example of how $(*)$ works, let's compute an expectation for this distribution. The integrals are finite because when $x \lt -1,$ $F(x)=0$ and when $x \ge 1,$ $1-F(x)=0.$ Thus, using $S(0)=0$: $$\eqalign{ E[S] &= \int_0^{\infty} S^\prime(x) (1-F(x)) \mathrm{d}x - \int_{-\infty}^0 S^\prime(x) F(x) \mathrm{d}x \\ &= \int_0^1 S^\prime(x)(1 - (1-p)) \mathrm{d}x - \int_{-1}^0 S^\prime(x) (1-p)\mathrm{d}x\\ &=(1 - (1-p))S(x)\left|_0^1\right. - (1-p) S(x)\left|_{-1}^0 \right. \\ &= (1-p)S(-1) + pS(1). }$$ This is the sum of the values of $S$ (at $\pm 1$) multiplied by their probabilities.

A generalization of this calculation shows that this integral is precisely a sum of values multiplied by probabilities for any discrete distribution: when $F$ is a discrete distribution supported at values $x_1,x_2,x_3, \ldots,$ with corresponding probabilities $p_1, p_2, p_3, \ldots,$ the expression $(*)$ is $$E[S(X)] = \int_0^{\infty} S^\prime(x) (1-F(x)) \mathrm{d}x - \int_{-\infty}^0 S^\prime(x) F(x) \mathrm{d}x = \sum_{i=1}^\infty S(x_i)p_i.$$ The integrals can be interpreted as signed areas, even though $F$ has no density function. Indeed, when $S^\prime$ is piecewise continuous, the integrals can be interpreted as Riemann integrals.
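The identity can be checked numerically. The sketch below (with a hypothetical choice of $S$ satisfying $S(0)=0$, and $p = 0.3$) evaluates the two integrals in $(*)$ with SciPy and compares them to the direct sum over the support.

import numpy as np
from scipy.integrate import quad

# Hypothetical score function with S(0) = 0, as assumed in the derivation.
S = lambda x: x**3 + 2.0 * x
dS = lambda x: 3.0 * x**2 + 2.0

p = 0.3  # P(X = 1); P(X = -1) = 1 - p

def F(x):
    # CDF of the two-point distribution on {-1, +1}.
    if x < -1:
        return 0.0
    if x < 1:
        return 1.0 - p
    return 1.0

# Right-hand side of (*); F vanishes below -1 and 1 - F vanishes above 1,
# so finite integration limits suffice.
pos, _ = quad(lambda x: dS(x) * (1.0 - F(x)), 0.0, 1.0)
neg, _ = quad(lambda x: dS(x) * F(x), -1.0, 0.0)
lhs = pos - neg

rhs = (1.0 - p) * S(-1.0) + p * S(1.0)  # direct sum over the support
print(lhs, rhs)  # both approximately -1.2 with these choices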
54,110
How to make sense out of integration over discrete data points?
This proof corresponds to the case of a single data point (so $n=1$ in this context), where the distribution of the random variable $X_i$ is continuous, so it has a probability density function $f$. The proof uses the integral form from the law of the unconscious statistician, which holds that the expected value of the score function is an integral of that function multiplied by the density of $X_i$, taken over the full range of that random variable. If $X_i$ were instead assumed to be a discrete random variable, instead of a continuous random variable, then the expected value would be a sum taken with respect to the mass function, instead of an integral taken with respect to the density function.
54,111
How to make sense out of integration over discrete data points?
The proof you are examining starts by assuming $f(x_i; \theta)$ is "a regular pdf." A pdf, or probability density function, by definition describes a continuous (i.e. not discrete) random variable. Since $X_i$ is continuous (and hence has a pdf), you use an integral to obtain the expected value of a function of $X_i$, by the Law of the Unconscious Statistician.
54,112
Kalman filter parameter estimation
Everything you will ever need regarding estimation of parameters in a state-space model is in this document: https://cran.r-project.org/web/packages/MARSS/vignettes/EMDerivation.pdf. The Kalman filter/smoother assumes that the parameters are known in advance, so that the unobserved state can be estimated. The initial values for the state can be user-supplied, or a diffuse initialization approach can be used; for this the standard reference is https://www.amazon.com/Time-Analysis-State-Space-Methods/dp/019964117X. For estimating the matrix parameters and noise covariances, the standard procedure is Expectation Maximization, which is described in detail in the first reference and also in chapter 6 of https://www.stat.pitt.edu/stoffer/tsa4/tsa4.pdf. Archived version of tsa4.pdf (Time Series Analysis and Its Applications): https://web.archive.org/web/20210401070804/https://www.stat.pitt.edu/stoffer/tsa4/tsa4.pdf
54,113
Kalman filter parameter estimation
As mentioned in other answers, you need values for the parameters in all system matrices ($F$, $H$, $Q$) in order to run the Kalman filter. However, you may have a state-space model with unknown parameters that you need to estimate. In order to do that, you may use the Kalman filter itself: running the Kalman filter with arbitrary values of the parameters produces, as a byproduct, the likelihood. You can then embed the Kalman filter in an optimization routine that tries different values so that the likelihood is maximized. In answers to other questions I have given detailed examples using the dlm package in R. (Alternatives are the MARSS and KFAS packages, among others.)
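A minimal sketch of that idea in Python: a hand-written Kalman filter for a local-level model returns the log-likelihood (prediction-error decomposition), which is handed to a generic optimizer. The model, initialization, and optimizer choice are illustrative assumptions; in practice packages such as dlm, MARSS, KFAS, or statsmodels do this for you.

import numpy as np
from scipy.optimize import minimize

# Local-level model (hypothetical example):
#   y_t  = mu_t + eps_t,       eps_t ~ N(0, sigma2_eps)
#   mu_t = mu_{t-1} + eta_t,   eta_t ~ N(0, sigma2_eta)

def kalman_loglik(log_params, y):
    sigma2_eps, sigma2_eta = np.exp(log_params)  # log scale keeps variances positive
    a, P = y[0], 1e7                             # rough diffuse initialization
    loglik = 0.0
    for t in range(1, len(y)):
        a_pred, P_pred = a, P + sigma2_eta       # prediction step
        v = y[t] - a_pred                        # prediction error
        F = P_pred + sigma2_eps                  # prediction-error variance
        loglik += -0.5 * (np.log(2 * np.pi * F) + v**2 / F)
        K = P_pred / F                           # update step
        a = a_pred + K * v
        P = P_pred * (1 - K)
    return loglik

rng = np.random.default_rng(0)
mu = np.cumsum(rng.normal(0, 0.5, 300))          # true sigma2_eta = 0.25
y = mu + rng.normal(0, 1.0, 300)                 # true sigma2_eps = 1.0

res = minimize(lambda p: -kalman_loglik(p, y), x0=np.log([1.0, 1.0]),
               method="Nelder-Mead")
print(np.exp(res.x))  # estimated (sigma2_eps, sigma2_eta), near the true values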
54,114
Kalman filter parameter estimation
In the usual state-space model, the only things that are estimated are the state and its variance-covariance matrix (at each time point); whether you filter or smooth only changes what information you're conditioning on, but either way you end up with estimates of those things. The $H$'s (and $F$, $Q$, and $R$ in your notation) are known/set/measured exactly/pre-specified, not estimated; they are part of your model for how observations are related to the state vector (and how state vectors evolve over time, etc.). $H$ is akin to the predictors in a regression model in that sense (i.e. you don't estimate the $X$ matrix in regression).
54,115
Kalman filter parameter estimation
To address the question of initial values, I would suggest you read Time Series Analysis by State Space Methods by Durbin and Koopman (2012), who go into great detail on the exact diffuse initialisation procedure (implemented in the R package KFAS), which is probably what you want to use in the case of a non-stationary model. For a stationary model, you can easily enough solve for the steady state and initialise with that. For a mixed model it's a little more tricky, and often overlooked; I suggest you refer to Doan (2010), Practical Issues with State-Space Models with Mixed Stationary and Non-Stationary Dynamics.
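For the stationary case, the steady-state initialisation amounts to solving the discrete Lyapunov equation $P_0 = F P_0 F^\top + Q$ for the unconditional state covariance, with the unconditional mean as the initial state. A minimal SciPy sketch, with a made-up transition matrix and noise covariance:

import numpy as np
from scipy.linalg import solve_discrete_lyapunov

# Hypothetical stationary transition matrix F and state-noise covariance Q
F = np.array([[0.7, 0.2],
              [0.0, 0.5]])
Q = np.diag([0.5, 0.1])

# Unconditional (steady-state) covariance solves P0 = F P0 F' + Q
P0 = solve_discrete_lyapunov(F, Q)
a0 = np.zeros(2)  # unconditional mean of a zero-mean stationary state
print(P0)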
54,116
Kalman filter parameter estimation
Other answerers mentioned EM, which is the most traditional approach. There are many others, however.

- You can do direct likelihood optimization, as various others have also remarked: run the filter/smoother, obtain the likelihood, iterate/optimize. Gradient expressions are available; see, e.g., Bayesian Filtering and Smoothing by Simo Särkkä, Chapter 12. (I have implemented this in my personal little Matlab Kalman filter/smoother toolbox; I'm sure it's also available elsewhere.) Notice that in this approach you can use arbitrary combinations of unknown parameters, exploit known structure elements if you know part of, or the functional form of, one of the unknown matrices, impose constraints on the optimization scheme, etc.
- You can also exploit other gradient-based optimization schemes; see, e.g., Fitting a Kalman Smoother to Data (Barratt, Boyd; 2020) for an approach using a proximal gradient method; a Python toolbox is available here.

The optimization problem can, in general, be non-convex. You can ignore that and hope for the best, and/or use some kind of global optimization strategy, multiple starting points, or a stochastic optimization scheme, etc. If you do have a reasonable prior guess about possible parameter values, exploiting it in the initialization and/or providing appropriate constraints can greatly help with convergence.
54,117
name for histogram of nominal p-values under the null
This idea of the uniform distribution for P-values is fairly new in statistics education and practice. I don't know if anyone has yet made up a name for the related histograms that has come into general use. Below I just call them "Null P-value" histograms.

It is important to note that this uniform distribution for P-values holds only if the null hypothesis is true, the test statistic is continuous, and all of the assumptions for the test are met. Ordinarily, the test statistic must be exact and continuous, as in a one-sample t test or ANOVA. Tests involving discrete distributions and certain approximations have useful P-values for hypothesis testing, but often the P-values are not uniformly distributed across the interval $(0,1).$

Below are a few examples. All tests shown are standard tests in R, with P-values 'extracted' using $ notation. Code for the histogram is shown only in the first example; except for the header, the code is the same in all examples.

Shapiro-Wilk test for normality: $H_0$ true because the data are normal. Too many P-values near 1.

set.seed(1212)
pv = replicate(10^5, shapiro.test(rnorm(20))$p.val)
mean(pv < .05)
[1] 0.04924
hist(pv, prob=T, col="skyblue2", main="Shapiro-Wilk Null P-values")
curve(dunif(x), add=T, col="red", n=10001)

One-sample Wilcoxon test: $H_0$ is true because the population sampled has median 0. Discrete rank-based test statistic.

set.seed(1212)
pv = replicate(10^5, wilcox.test(rnorm(20), mu=0)$p.val)
mean(pv < .05)
[1] 0.04905

Binomial test: Discrete test statistic, $H_0$ true because $p = 1/2.$ Because of discreteness, a test at exactly the 5% level is not available.

set.seed(1213)
pv = replicate(10^5, binom.test(rbinom(1,20,.5), 20, p=.5, alt="two")$p.val)
mean(pv < .05)
[1] 0.04169

Pooled 2-sample t test: Assumptions not met because the variances are unequal. $H_0$ true because the means are equal. This test rejects more often than 5% of the time. (Note: In R, the default two-sample t.test is the Welch test; the parameter var.eq=T invokes a pooled test.)

set.seed(1213)
pv = replicate(10^5, t.test(rnorm(20,100,2), rnorm(10,100,20), var.eq=T)$p.val)
mean(pv < .05)
[1] 0.18476

Welch 2-sample t test: P-value has a uniform distribution on $(0,1).$ Continuous test statistic. Assumptions met. $H_0$ true. Technically an approximate test, but very nearly exact.

set.seed(1214)
pv = replicate(10^5, t.test(rnorm(20,100,2), rnorm(10,100,20))$p.val)
mean(pv < .05)
[1] 0.04939

Reference: Murdoch DJ, Tsai Y-L, Adcock J (2008), P-values are random variables, The American Statistician, 242-245, has several histograms similar to those shown here. This paper contains an early emphasis, if not the first, on regarding P-values as random variables, on using Monte Carlo simulation to obtain their distributions in various cases, and on the standard uniform distribution of P-values from continuous test statistics under the null hypothesis. The caption of Figure 2 in that paper refers to "Histograms of p-values under the null hypothesis." An earlier paper in the same journal, Sackrowitz H & Samuel-Cahn E (1999), P-values as random variables---Expected P-values, 326-333, does not contain such histograms.
54,118
name for histogram of nominal p-values under the null
...is there a name for this type of histogram/method for assessing nominal p-values? The true distribution of a quantity under a (simple) null hypothesis is called the null distribution of that quantity. There is no specific name for the histogram of a Monte-Carlo simulation of the distribution of the p-value. It would usually be named by description: the histogram of a Monte Carlo simulation of the null distribution of the p-value.
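For concreteness, a minimal R sketch (not part of the original answer) of what such a Monte Carlo histogram of the null distribution of the p-value looks like for a test whose assumptions are met, here a one-sample t test with $H_0$ true:

set.seed(2024)
pv <- replicate(10^4, t.test(rnorm(25), mu = 0)$p.value)  # H0 is true by construction
hist(pv, prob = TRUE, main = "Simulated null distribution of the P-value")
curve(dunif(x), add = TRUE, col = "red")                  # reference Uniform(0,1) density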
name for histogram of nominal p-values under the null
...is there a name for this type of histogram/method for assessing nominal p-values? The true distribution of a quantity under a (simple) null hypothesis is called the null distribution of that quant
name for histogram of nominal p-values under the null ...is there a name for this type of histogram/method for assessing nominal p-values? The true distribution of a quantity under a (simple) null hypothesis is called the null distribution of that quantity. There is no specific name for the histogram of a Monte-Carlo simulation of the distribution of the p-value. It would usually be named by description: the histogram of a Monte Carlo simulation of the null distribution of the p-value.
name for histogram of nominal p-values under the null ...is there a name for this type of histogram/method for assessing nominal p-values? The true distribution of a quantity under a (simple) null hypothesis is called the null distribution of that quant
54,119
How to get top features that contribute to anomalies in Isolation forest
SHAP values and the shap Python library can be used for this. Shap has built-in support for scikit-learn IsolationForest since October 2019. import shap from sklearn.ensemble import IsolationForest # Load data and train Anomaly Detector as usual X_train, X_test, ... est = IsolationForest() est.fit(...) # Create shap values and plot them X_explain = X_test shap_values = shap.TreeExplainer(est).shap_values(X_explain) shap.summary_plot(shap_values, X_explain) Here is an example of a plot I did for one IsolationForest model that I had, which was time-series. You can also get partial dependence plots for a particular feature, or a plot showing the feature contributions for a single X instance. Examples for this are given in the shap project README.
How to get top features that contribute to anomalies in Isolation forest
Want to improve this post? Provide detailed answers to this question, including citations and an explanation of why your answer is correct. Answers without enough detail may be edited or deleted.
How to get top features that contribute to anomalies in Isolation forest Want to improve this post? Provide detailed answers to this question, including citations and an explanation of why your answer is correct. Answers without enough detail may be edited or deleted. SHAP values and the shap Python library can be used for this. Shap has built-in support for scikit-learn IsolationForest since October 2019. import shap from sklearn.ensemble import IsolationForest # Load data and train Anomaly Detector as usual X_train, X_test, ... est = IsolationForest() est.fit(...) # Create shap values and plot them X_explain = X_test shap_values = shap.TreeExplainer(est).shap_values(X_explain) shap.summary_plot(shap_values, X_explain) Here is an example of a plot I did for one IsolationForest model that I had, which was time-series. You can also get partial dependence plots for a particular feature, or a plot showing the feature contributions for a single X instance. Examples for this is given in the shap project README.
How to get top features that contribute to anomalies in Isolation forest Want to improve this post? Provide detailed answers to this question, including citations and an explanation of why your answer is correct. Answers without enough detail may be edited or deleted.
54,120
How to get top features that contribute to anomalies in Isolation forest
One possible way of describing feature importance in unsupervised outlier detection is described in Contextual Outlier Interpretation. As in the LIME approach, local linearity is assumed, and by sampling data points around the outlier of interest a classification problem is generated. The authors suggest applying an SVM with a linear kernel and using the estimated weights for feature importance.
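A rough R sketch of that idea, purely illustrative rather than a faithful implementation of the paper: score stands for whatever anomaly-score function your detector provides, x_out is the outlier of interest, and the neighbourhood size, radius and 95% threshold are all made-up choices.

library(e1071)
explain_outlier <- function(x_out, X, score, n_samp = 500, radius = 0.5) {
  p <- ncol(X)
  # sample points in a neighbourhood of the outlier of interest
  Z <- matrix(rnorm(n_samp * p, sd = radius), ncol = p)
  Z <- sweep(Z, 2, as.numeric(x_out), "+")
  colnames(Z) <- colnames(X)
  # label sampled points as outlier / inlier using the detector's own scores
  y <- factor(score(Z) > quantile(score(X), 0.95))
  # local linear surrogate: SVM with a linear kernel (inputs are scaled by default)
  fit <- svm(Z, y, kernel = "linear")
  w <- drop(t(fit$coefs) %*% fit$SV)    # primal weights of the linear SVM
  sort(abs(w), decreasing = TRUE)       # larger |weight| = locally more important
}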
How to get top features that contribute to anomalies in Isolation forest
One possible describing feature importance in unsupervised outlier detecion is described in Contextual Outlier Interpretation. Similar as in the Lime approach, local linearity is assumed and by sampli
How to get top features that contribute to anomalies in Isolation forest One possible describing feature importance in unsupervised outlier detecion is described in Contextual Outlier Interpretation. Similar as in the Lime approach, local linearity is assumed and by sampling a data points around the outlier of interest a classification problem is generated. The authors suggest to apply a SVM with linear kernel and use estimeited weights for feature importance.
How to get top features that contribute to anomalies in Isolation forest One possible describing feature importance in unsupervised outlier detecion is described in Contextual Outlier Interpretation. Similar as in the Lime approach, local linearity is assumed and by sampli
54,121
What is the acceptable level of concurvity?
This is a late bump to a relatively old question, but may be of help to future visitors. In Noam Ross' course on GAMs, GAMs in R, Chapter 2, section 10 "checking concurvity" says that observing a value over 0.8 for the worst case requires closer inspection of the model. So in the image you pasted there's only one variable over 0.8 (VV_PROM_S60), but there are many very close to that threshold, so you may want to check those too. The suggested way to have a closer look at the model is by using concurvity(model, full = FALSE) and carefully analyzing the pairwise concurvities.
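If it helps to see where those overall and pairwise numbers come from, here is a small self-contained mgcv example on simulated data (nothing to do with the model in the question):

library(mgcv)
set.seed(1)
dat <- gamSim(1, n = 400, dist = "normal", scale = 2)   # built-in example data
b <- gam(y ~ s(x0) + s(x1) + s(x2) + s(x3), data = dat)
round(concurvity(b, full = TRUE), 2)          # one worst/observed/estimate summary per smooth
round(concurvity(b, full = FALSE)$worst, 2)   # pairwise 'worst case' matrix for closer inspection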
What is the acceptable level of concurvity?
This is a late bump to a relatively old question, but may be of help to future visitors. In Noam Ross' course on GAMs, GAMs in R, Chapter 2, section 10 "checking concurvity" says that observing a valu
What is the acceptable level of concurvity? This is a late bump to a relatively old question, but may be of help to future visitors. In Noam Ross' course on GAMs, GAMs in R, Chapter 2, section 10 "checking concurvity" says that observing a value over 0.8 for the worst case requires closer inspection of the model. So in the image you pasted there's only one variable over 0.8 (VV_PROM_S60), but there are many very close to that threshold, so you may want to check those too. The suggested way to have a closer look at the model is by using concurvity(model, full = FALSE) and carefully analyzing the pairwise concurvities.
What is the acceptable level of concurvity? This is a late bump to a relatively old question, but may be of help to future visitors. In Noam Ross' course on GAMs, GAMs in R, Chapter 2, section 10 "checking concurvity" says that observing a valu
54,122
What is the acceptable level of concurvity?
I would like to get more clarity on this as well, but I did find one paper (Tree traits influence response to fire severity in the western Oregon Cascades, USA; 2019) that used the mgcv package where the authors used a cutoff of 0.3. It sounded like they applied that cutoff to all three measures of concurvity. Of course, even they said it was an arbitrary cutoff.
What is the acceptable level of concurvity?
I would like to get more clarity on this as well, but I did find one paper (Tree traits influence response to fire severity in the western Oregon Cascades, USA; 2019) that used the mgcv package where th
What is the acceptable level of concurvity? I would like to get more clarity on this as well, but I did find one paper (Tree traits influence response to fire severity in the western Oregon Cascades, USA; 2019) that used the mgcv package where the authors used a cutoff of 0.3. It sounded like they applied that cutoff to all three measures of concurvity. Of course, even they said it was an arbitrary cutoff.
What is the acceptable level of concurvity? I would like to get more clarity on this as well, but I did find one paper (Tree traits influence response to fire severity in the western Oregon Cascades, USA; 2019) that used the mgcv package where th
54,123
What is the acceptable level of concurvity?
Another source, from a doctoral dissertation Multi-city time series analyses of air pollution and mortality data using generalized geoadditive mixed models by Lung-Chang Chien. Free online access. So far, there is no strict criterion to identify the level of concurvity which can severely affect model fitting. Ramsay et al. (2003a) suggested using 0.5 to be the cutoff point. If the level of concurvity in nonparametric or semiparametric models is greater than 0.5, it is necessary to seek in order to eliminate or reduce this problem.
What is the acceptable level of concurvity?
Another source, from a doctoral dissertation Multi-city time series analyses of air pollution and mortality data using generalized geoadditive mixed models by Lung-Chang Chien. Free online access. So
What is the acceptable level of concurvity? Another source, from a doctoral dissertation Multi-city time series analyses of air pollution and mortality data using generalized geoadditive mixed models by Lung-Chang Chien. Free online access. So far, there is no strict criterion to identify the level of concurvity which can severely affect model fitting. Ramsay et al. (2003a) suggested using 0.5 to be the cutoff point. If the level of concurvity in nonparametric or semiparametric models is greater than 0.5, it is necessary to seek in order to eliminate or reduce this problem.
What is the acceptable level of concurvity? Another source, from a doctoral dissertation Multi-city time series analyses of air pollution and mortality data using generalized geoadditive mixed models by Lung-Chang Chien. Free online access. So
54,124
Need help understanding what a natural log transformation is actually doing and why specific transformations are required for linear regression [duplicate]
There's a lot here to break down. I hate to say it, but some of the advice in your course is quite misguided and wrong. What is that transformation actually doing? I don't mean the nitty gritty math, but what is it doing conceptually? The math here is pretty simple. You have a bunch of measurements of people's age that you would like to use as a feature in predicting some other measurement (looks like the probability of something happening). You're simply creating a new feature which is the logarithm of the original feature. I'll explain why you would want to do this below. For linear and logistic regression, for example, you ideally want to make sure that: the relationship between input variables and output variables is approximately linear – why? This is a structural assumption of the linear and logistic regression models. I'll focus on linear regression, because it's a bit simpler, but the same thing holds for logistic regression. The linear regression model makes predictions by building a formula based on the data you feed into the algorithm. All prediction models work this way, but linear regression is distinguished by building the simplest possible formula. If $y$ is the thing you are trying to predict, and $x_1, x_2, \ldots$ are the features you are using to predict it, then the linear regression formula is: $$y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \cdots + \beta_k x_k$$ Here, the $\beta_i$'s are just numbers, and the job of the algorithm is to determine what numbers work best. Notice that if you vary one of the $x$'s, and look at how the output $y$ changes as a result, you'll get a line. This is a direct consequence of the way the linear regression model works. If you want it to give you sensible results, then you need to make sure this drawing lines assumption is at least approximately true. the input variables are approximately normal in distribution – why? This is simply wrong. Linear regression works fine even if the distribution of the input variables is highly non-normal. What is important is the relationship between the inputs and outputs, not the distribution of the inputs themselves. This is what I meant by the advice in the course being misguided. You don't transform input variables because their distribution is skewed, you transform them so that the linear shape the model is trying to draw through your data is reasonable. For example, here is a scatterplot I found online of a country's GDP vs. its average life expectancy (attribution is in the image): Clearly, drawing a line through the scatter plot is completely unreasonable, so the linear regression equation: $$ \text{Life Expectancy} = \beta_0 + \beta_1 \text{GDP} $$ is a bad choice for the data. On the other hand, it looks like a logarithmic relationship is reasonable, so something like: $$ \text{Life Expectancy} = \beta_0 + \beta_1 \log(\text{GDP}) $$ looks like it would work a lot better. This is the type of situation where transforming the GDP measurements with a logarithm is a good idea. But it has nothing to do with the distribution of GDP. You can't tell it's a good idea by drawing a histogram of GDP, it's about the relationship between GDP and life expectancy. the output variable is constant variance (that is, the variance of the output variable is independent of the input variables) – why? This is a deeper issue of a different nature than the others. For prediction models, it doesn't really matter, so if you're focusing on learning to build good predictive models don't worry about it for now.
As a summary, this assumption is intended to support the computation of the sampling distribution of parameter estimates. For example, if you want to say something like "the probability that I would collect data in which the relationship between log(GDP) and Life Expectancy is greater than what I actually observed, even when there is truly no relationship, is very, very small" you need to be able to compute the sampling distribution of the parameter estimates. There are various assumptions that allow this to be done, and this constant variance assumption is one of them. That said, if you're only trying to make predictions, this isn't really relevant. And in no case is the distribution of the input data assumed to be normal, that's just a misconception.
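To make the earlier GDP example concrete, here is a small simulated illustration (invented numbers, not the data behind the scatterplot): the skewness of the predictor is irrelevant, but the fit improves dramatically once the relationship itself is made linear by the log transform.

set.seed(42)
gdp  <- exp(runif(200, 6, 11))                  # strongly skewed predictor
life <- 20 + 6 * log(gdp) + rnorm(200, sd = 3)  # outcome depends on log(gdp)
fit_raw <- lm(life ~ gdp)
fit_log <- lm(life ~ log(gdp))
c(r2_raw = summary(fit_raw)$r.squared, r2_log = summary(fit_log)$r.squared)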
Need help understanding what a natural log transformation is actually doing and why specific transfo
There's a lot here to break down. I hate to say it, but some of the advice in your course is quite misguided and wrong. What is that transformation actually doing? I don't mean the nitty gritty math
Need help understanding what a natural log transformation is actually doing and why specific transformations are required for linear regression [duplicate] There's a lot here to break down. I hate to say it, but some of the advice in your course is quite misguided and wrong. What is that transformation actually doing? I don't mean the nitty gritty math, but what is it doing conceptually? The math here is pretty simple. You have a bunch of measurements of people's age that you would like to use as a feature in predicting some other measurement (looks like the probability of something happening). You're simply creating a new feature which is the logarithm of the original feature. I'll explain why you would want to do this below. For linear and logistic regression, for example, you ideally want to make sure that: the relationship between input variables and output variables is approximately linear – why? This is a structural assumption of the linear and logistic regression models. I'll focus on linear regression, because its a bit simpler, but the same thing holds for logistic regression. The linear regression model makes predictions by building a formula based on the data you feed into the algorithm. All prediction models work this way, but linear regression is distinguished by building the simplest possible formula. If $y$ is the thing you are trying to predict, and $x_1, x_2, \ldots$ are the features you are using to predict it, then the linear regression formula is: $$y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \cdots + \beta_k x_k$$ Here, the $\beta_i$'s are just numbers, and the job of the algorithm is to determine what numbers work best. Notice that if you vary one of the $x$'s, and look at how the output $y$ changes as a result, you'll get a line. This is a direct consequence of the way the linear regression model works. If you want it to give you sensible results, then you need to make sure this drawing lines assumption is at least approximately true. the input variables are approximately normal in distribution- why? This is simply wrong. Linear regression works fine even if the distribution of the input variables is highly non-normal. What is important is the relationship between the inputs and outputs, not the distribution of the inputs themselves. This is what I meant by the advice the course being misguided. You don't transform input variables because their distribution is skew, you transform them so that the linear shape the model is trying to draw through your data is reasonable For example, here is a scatterplot I found online of a country's GDP vs. its average life expectancy (attribution is in the image): Clearly, drawing a line through the scatter plot is completely unreasonable, so the linear regression equation: $$ \text{Life Expectancy} = \beta_0 + \beta_1 \text{GDP} $$ is a bad choice for the data. On the other hand, it looks like a logarithmic relationship is reasonable, so something like: $$ \text{Life Expectancy} = \beta_0 + \beta_1 \log(\text{GDP}) $$ looks like it would work a lot better. This is the type of situation where transforming the GDP measurements with a logarithm is a good idea. But it has nothing to do with the distribution of GDP. You can't tell it's a good idea by drawing a histogram of GDP, it's about the relationship between GDP and life expectancy. the output variable is constant variance (that is, the variance of the output variable is independent of the input variables – why? This is a deeper issue of a different nature than the others. 
For prediction models, it doesn't really matter, so if you're focusing on learning to build good predictive models don't worry about it for now. As a summary, this assumption is intended to support the computation of the sampling distribution of parameter estimates. For example, if you want to say something like "the probability that I would collect data in which the relationship between log(GDP) and Life Expectancy is greater than what I actually observed, even when the there is truly no relationship, is very, very small" you need to be able to compute the sampling distribution of the parameter estimates. There are various assumptions that allow this to be done, and this constant variance assumption is one them. That said, if you're only trying to make predictions, this isn't really relevant. And in no case is the distribution of the input data assumed to be normal, that's just a misconception.
Need help understanding what a natural log transformation is actually doing and why specific transfo There's a lot here to break down. I hate to say it, but some of the advice in your course is quite misguided and wrong. What is that transformation actually doing? I don't mean the nitty gritty math
54,125
What's the point in using identity matrix as weighting matrix in GMM?
Yes, getting a first step estimator is the canonical use. Of course, the error terms in $$S = \frac{1}{n}\sum_i\epsilon_i^2x_ix_i'$$ are not observable, so that you need to replace them with something feasible. As the efficient GMM estimator depends on $\hat S$, you first need some feasible preliminary estimator such as the one using $I$ as the weighting matrix. There may be some further interesting considerations in a multiple equation setup, in which misspecification in one equation can "pollute" the entire system. You can avoid that risk through a less efficient, but more robust block-diagonal weighting matrix, of which $I$ would be an example.
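A minimal numerical sketch of that two-step logic, using a simulated linear IV model with heteroskedastic errors (all names and data-generating choices here are illustrative): step one uses $W=I$ to get a consistent but inefficient estimate, step two plugs the implied residuals into $\hat S$ and re-estimates with $W=\hat S^{-1}$.

set.seed(1)
n  <- 500
x1 <- rnorm(n); x2 <- rnorm(n)               # instruments
z  <- 0.8 * x1 + 0.5 * x2 + rnorm(n)         # single regressor
y  <- 1 + 2 * z + rnorm(n) * (1 + abs(x1))   # heteroskedastic errors
X  <- cbind(1, x1, x2)                       # instrument matrix
Z  <- cbind(1, z)                            # regressor matrix
gmm_step <- function(W) {
  Sxz <- crossprod(X, Z) / n
  sxy <- crossprod(X, y) / n
  solve(t(Sxz) %*% W %*% Sxz, t(Sxz) %*% W %*% sxy)
}
d1 <- gmm_step(diag(ncol(X)))                # step 1: W = I (consistent, inefficient)
e1 <- drop(y - Z %*% d1)
S  <- crossprod(X * e1) / n                  # S-hat = (1/n) sum e_i^2 x_i x_i'
d2 <- gmm_step(solve(S))                     # step 2: efficient W = S^{-1}
cbind(step1 = d1, step2 = d2)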
What's the point in using identity matrix as weighting matrix in GMM?
Yes, getting a first step estimator is the canonical use. Of course, the error terms in $$S = \frac{1}{n}\sum_i\epsilon_i^2x_ix_i'$$ are not observable, so that you need to replace them with something
What's the point in using identity matrix as weighting matrix in GMM? Yes, getting a first step estimator is the canonical use. Of course, the error terms in $$S = \frac{1}{n}\sum_i\epsilon_i^2x_ix_i'$$ are not observable, so that you need to replace them with something feasible. As the efficient GMM estimator depends on $\hat S$, you first need some feasible preliminary estimator such as the one using $I$ as the weighting matrix. There may be some further interesting considerations in a multiple equation setup, in which misspecification in one equation can "pollute" the entire system. You can avoid that risk through a less efficient, but more robust block-diagonal weighting matrix, of which $I$ would be an example.
What's the point in using identity matrix as weighting matrix in GMM? Yes, getting a first step estimator is the canonical use. Of course, the error terms in $$S = \frac{1}{n}\sum_i\epsilon_i^2x_ix_i'$$ are not observable, so that you need to replace them with something
54,126
What's the point in using identity matrix as weighting matrix in GMM?
This second answer addresses the question posed in the comment to the first answer as to why the specific choice of $W$ results in an efficient GMM estimator. The efficient weighting matrix results from the general one by setting $W=S^{-1}$ to get an asymptotic variance \begin{eqnarray} \mathrm{Avar}(\widehat{\delta}(\widehat{S}))&=&(\Sigma_{xz}'S^{-1}\Sigma_{xz})^{-1}\Sigma_{xz}'S^{-1}SS^{-1}\Sigma_{xz}(\Sigma_{xz}'S^{-1}\Sigma_{xz})^{-1}\notag\\ &=&(\Sigma_{xz}'S^{-1}\Sigma_{xz})^{-1}\Sigma_{xz}'S^{-1}\Sigma_{xz}(\Sigma_{xz}'S^{-1}\Sigma_{xz})^{-1}\notag\\ &=&(\Sigma_{xz}'S^{-1}\Sigma_{xz})^{-1}\label{avareffgmm}%\\[-4ex] \end{eqnarray} We therefore need to show that the difference between the general asymptotic variance and the one with the specific (to be shown) efficient weighting matrix is p.d.: $$ (\Sigma_{xz}'W\Sigma_{xz})^{-1}\Sigma_{xz}'WSW\Sigma_{xz}(\Sigma_{xz}'W\Sigma_{xz})^{-1}-(\Sigma_{xz}'S^{-1}\Sigma_{xz})^{-1}\geqslant0$$ Linear algebra (see Thm. 1.24, Magnus/Neudecker 1988, i.e. $A-B\geqslant0\Leftrightarrow B^{-1}-A^{-1}\geqslant0$, much like $3>2$, but $1/2>1/3$) tells us that this condition is equivalent to $$ Q:=\Sigma_{xz}'S^{-1}\Sigma_{xz}-\Sigma_{xz}'W\Sigma_{xz}(\Sigma_{xz}'WSW\Sigma_{xz})^{-1}\Sigma_{xz}'W\Sigma_{xz}\geqslant 0$$ As $S$ is p.d., $S^{-1}$ can be decomposed as $S^{-1}=C'C$. Further define $H=C\Sigma_{xz}$ and $G=C'^{-1}W\Sigma_{xz}$. Then, \begin{eqnarray*} Q&=&\Sigma_{xz}'C'C\Sigma_{xz}-\Sigma_{xz}'W\Sigma_{xz}(\Sigma_{xz}'WC^{-1}C'^{-1}W\Sigma_{xz})^{-1}\Sigma_{xz}'W\Sigma_{xz}\\ &=&H'H-\Sigma_{xz}'W\Sigma_{xz}(G'G)^{-1}\Sigma_{xz}'W\Sigma_{xz}\\ &=&H'H-\Sigma_{xz}'C'C'^{-1}W\Sigma_{xz}(G'G)^{-1}\Sigma_{xz}'WC^{-1}C\Sigma_{xz}\\ &=&H'H-H'G(G'G)^{-1}G'H\\ &=&H'(I-G(G'G)^{-1}G')H\\[-4ex] \end{eqnarray*} The matrix in brackets is, as usual, symmetric and idempotent and therefore p.s.d. Thus, for an arbitrary $a$, \begin{eqnarray*} a'Qa&=&a'H'(I-G(G'G)^{-1}G')Ha\\ &=:&c'(I-G(G'G)^{-1}G')c\geqslant0 \end{eqnarray*}
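For readers who like a sanity check on the algebra, here is a quick numerical verification of the claim with random positive-definite choices of $S$ and $W$ (purely illustrative): the difference between the general asymptotic variance and the efficient one should come out positive semi-definite.

set.seed(1)
K <- 4; L <- 2                                      # number of instruments, parameters
Sxz <- matrix(rnorm(K * L), K, L)
S <- crossprod(matrix(rnorm(K * K), K)) + diag(K)   # a p.d. S
W <- crossprod(matrix(rnorm(K * K), K)) + diag(K)   # an arbitrary p.d. W
avar <- function(W) {
  bread <- solve(t(Sxz) %*% W %*% Sxz)
  bread %*% t(Sxz) %*% W %*% S %*% W %*% Sxz %*% bread
}
D <- avar(W) - avar(solve(S))
eigen((D + t(D)) / 2)$values                        # all >= 0 up to rounding error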
What's the point in using identity matrix as weighting matrix in GMM?
This second answer addresses the question posed in the comment to the first answer as to why the specific choice of $W$ results in an efficient GMM estimator. The efficient weighting matrix results fr
What's the point in using identity matrix as weighting matrix in GMM? This second answer addresses the question posed in the comment to the first answer as to why the specific choice of $W$ results in an efficient GMM estimator. The efficient weighting matrix results from the general one by setting $W=S^{-1}$ to get an asymptotic variance \begin{eqnarray} \mathrm{Avar}(\widehat{\delta}(\widehat{S}))&=&(\Sigma_{xz}'S^{-1}\Sigma_{xz})^{-1}\Sigma_{xz}'S^{-1}SS^{-1}\Sigma_{xz}(\Sigma_{xz}'S^{-1}\Sigma_{xz})^{-1}\notag\\ &=&(\Sigma_{xz}'S^{-1}\Sigma_{xz})^{-1}\Sigma_{xz}'S^{-1}\Sigma_{xz}(\Sigma_{xz}'S^{-1}\Sigma_{xz})^{-1}\notag\\ &=&(\Sigma_{xz}'S^{-1}\Sigma_{xz})^{-1}\label{avareffgmm}%\\[-4ex] \end{eqnarray} We therefore need to show that the difference between the general asymptotic variance and the one with the specific (to be shown) efficient weighting matrix is p.d.: $$ (\Sigma_{xz}'W\Sigma_{xz})^{-1}\Sigma_{xz}'WSW\Sigma_{xz}(\Sigma_{xz}'W\Sigma_{xz})^{-1}-(\Sigma_{xz}'S^{-1}\Sigma_{xz})^{-1}\geqslant0$$ Linear algebra (see Thm. 1.24, Magnus/Neudecker 1988, i.e. $A-B\geqslant0\Leftrightarrow B^{-1}-A^{-1}\geqslant0$, much like $3>2$, but $1/2>1/3$) tells us that this condition is equivalent to $$ Q:=\Sigma_{xz}'S^{-1}\Sigma_{xz}-\Sigma_{xz}'W\Sigma_{xz}(\Sigma_{xz}'WSW\Sigma_{xz})^{-1}\Sigma_{xz}'W\Sigma_{xz}\geqslant 0$$ As $S$ is p.d., $S^{-1}$ can be decomposed as $S^{-1}=C'C$. Further define $H=C\Sigma_{xz}$ and $G=C'^{-1}W\Sigma_{xz}$. Then, \begin{eqnarray*} Q&=&\Sigma_{xz}'C'C\Sigma_{xz}-\Sigma_{xz}'W\Sigma_{xz}(\Sigma_{xz}'WC^{-1}C'^{-1}W\Sigma_{xz})^{-1}\Sigma_{xz}'W\Sigma_{xz}\\ &=&H'H-\Sigma_{xz}'W\Sigma_{xz}(G'G)^{-1}\Sigma_{xz}'W\Sigma_{xz}\\ &=&H'H-\Sigma_{xz}'C'C'^{-1}W\Sigma_{xz}(G'G)^{-1}\Sigma_{xz}'WC^{-1}C\Sigma_{xz}\\ &=&H'H-H'G(G'G)^{-1}G'H\\ &=&H'(I-G(G'G)^{-1}G')H\\[-4ex] \end{eqnarray*} The matrix in brackets is, as usual, symmetric and idempotent and therefore p.s.d. Thus, for an arbitrary $a$, \begin{eqnarray*} a'Qa&=&a'H'(I-G(G'G)^{-1}G')Ha\\ &=:&c'(I-G(G'G)^{-1}G')c\geqslant0 \end{eqnarray*}
What's the point in using identity matrix as weighting matrix in GMM? This second answer addresses the question posed in the comment to the first answer as to why the specific choice of $W$ results in an efficient GMM estimator. The efficient weighting matrix results fr
54,127
Definition of Statistic
A statistic is a function that maps from the set of outcomes of the observable values to a real number. Thus, with $n$ data points, a statistic will be a function $s: \mathbb{R}^n\rightarrow \mathbb{R}$ as in your second form. However, it is also possible to view the statistic in its random sense by taking the appropriate composition of functions with the original random variables. (Remember that each random variable $X_i: \Omega \rightarrow \mathbb{R}$ is a measurable function that maps from the sample space to the real numbers.) That is, you can form the random variable $S: \Omega \rightarrow \mathbb{R}$ as: $$S(\omega) = s(X_1(\omega), ..., X_n(\omega)).$$ The random variable $S$ is the random version of the statistic $s$. Both are often referred to as "statistics", but it is important to bear in mind that $S$ is a composition with the functions for the observable random variables.
Definition of Statistic
A statistic is a function that maps from the set of outcomes of the observable values to a real number. Thus, with $n$ data points, a statistic will be a function $s: \mathbb{R}^n\rightarrow \mathbb{
Definition of Statistic A statistic is a function that maps from the set of outcomes of the observable values to a real number. Thus, with $n$ data points, a statistic will be a function $s: \mathbb{R}^n\rightarrow \mathbb{R}$ as in your second form. However, it is also possible to view the statistic in its random sense by taking the appropriate composition of function with the original random variables. (Remember that each random variable $X_i: \Omega \rightarrow \mathbb{R}$ is a measurable function that maps from the sample space to the real numbers.) That is, you can form the random variable $S: \Omega \rightarrow \mathbb{R}$ as: $$S(\omega) = s(X_1(\omega), ..., X_n(\omega)).$$ The random variable $S$ is the random version of the statistic $s$. Both are often referred to as "statistics", but it is important to bear in mind that $S$ is a composition with the functions for the observable random variables.
Definition of Statistic A statistic is a function that maps from the set of outcomes of the observable values to a real number. Thus, with $n$ data points, a statistic will be a function $s: \mathbb{R}^n\rightarrow \mathbb{
54,128
What's a mean field variational family?
Loosely speaking, the mean field family defines a specific class of joint distributions. So $z$ here is actually a parameter vector of length m. That means that $q(z)$ describes a joint distribution over all of the individual z's, and can be written as $$q(z) = q(z_1, z_2, \ldots, z_m)$$ We can use the chain rule to factorize this: $$ = q(z_1)q(z_2|z_1)\ldots q(z_m|z_1, z_2, \ldots z_{m-1})$$ Now, for this joint distribution to be in the mean field family, we make a simplifying assumption and assume that all of the $z_i$s are independent from each other. I'll note here that this assumes that the $z_i$'s under the variational distributions are independent; the true joint $p(z_1, \ldots z_m)$ is almost certainly going to have some dependence among the variables. In this sense, we are trading off accuracy (throwing away all covariances) for some computational benefits. Now, if we make that independence assumption, we can see that the joint reduces down to $$q(z) = q(z_1)q(z_2)\ldots q(z_m) = \prod_{i=1}^m q(z_i)$$ Which is the form that the mean field family takes. As for your question about how this won't reduce to a constant, I'm not entirely sure what you mean. All of the $z_i$'s are random variables, so I don't see how this could become a constant.
What's a mean field variational family?
Loosely speaking, the mean field family defines a specific class of joint distributions. So $z$ here is actually a parameter vector of length m. That means that $q(z)$ describes a joint distribution
What's a mean field variational family? Loosely speaking, the mean field family defines a specific class of joint distributions. So $z$ here is actually a parameter vector of length m. That means that $q(z)$ describes a joint distribution over all of the individual z's, and can be written as $$q(z) = q(z_1, z_2, \ldots, z_m)$$ We can use the chain rule to factorize this: $$ = q(z_1)q(z_2|z_1)\ldots q(z_m|z_1, z_2, \ldots z_{m-1})$$ Now, for this joint distribution to be in the mean field family, we make a simplifying assumption and assume that all of the $z_i$s are independent from each other. I'll note here that this assumes that the $z_i$'s under the variational distributions are independent; the true joint $p(z_1, \ldots z_m)$ is almost certainly going to have some dependence among the variables. In this sense, we are trading off accuracy (throwing away all covariances) for some computational benefits. Now, if we make that independence assumption, we can see that the joint reduces down to $$q(z) = q(z_1)q(z_2)\ldots q(z_m) = \prod_{i=1}^m q(z_i)$$ Which is the form that the mean field family takes. As for your question about how this won't reduce to a constant, I'm not entirely sure what you mean. All of the $z_i$'s are random variables, so I don't see how this could become a constant.
What's a mean field variational family? Loosely speaking, the mean field family defines a specific class of joint distributions. So $z$ here is actually a parameter vector of length m. That means that $q(z)$ describes a joint distribution
54,129
Is the sum of trends of two time series the trend of the sum of the time series?
Is the sum of trends of two time series the trend of the sum of the time series? - as a general question, it depends: if the estimator is linear in the data then yes, but in general, no. On the specific question of using Theil-Sen slope (median pairwise slope) as a trend estimate: When using a Theil-Sen slope estimate, the trend of the sum can be quite different from the sum of the trends (the Theil-Sen slope is not linear in the observations). Here's a small example: Series A: -1.15 2.19 1.32 1.40 2.04 Series B: -2.63 -0.98 -1.23 -1.68 -5.86 The Theil-Sen slope for the first (unit-spaced) series is 0.5, the Theil-Sen slope for the second series is -0.4 (as in your question) but the slope for the sum is negative (indeed, it's -0.5575, more negative than for the second series).
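Here is a short R check of those numbers, computing the Theil-Sen estimate directly as the median of all pairwise slopes rather than through any particular package:

theil_sen <- function(y, x = seq_along(y)) {
  ij <- combn(length(y), 2)                               # all pairs i < j
  median((y[ij[2, ]] - y[ij[1, ]]) / (x[ij[2, ]] - x[ij[1, ]]))
}
A <- c(-1.15, 2.19, 1.32, 1.40, 2.04)
B <- c(-2.63, -0.98, -1.23, -1.68, -5.86)
c(slope_A = theil_sen(A), slope_B = theil_sen(B),
  sum_of_slopes = theil_sen(A) + theil_sen(B),
  slope_of_sum  = theil_sen(A + B))
# slope_A = 0.5, slope_B = -0.4, sum of slopes = 0.1, slope of the sum = -0.5575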
Is the sum of trends of two time series the trend of the sum of the time series?
Is the sum of trends of two time series the trend of the sum of the time series? - as a general question, it depends: if the estimator is linear in the data then yes, but in general, no. On the spec
Is the sum of trends of two time series the trend of the sum of the time series? Is the sum of trends of two time series the trend of the sum of the time series? - as a general question, it depends: if the estimator is linear in the data then yes, but in general, no. On the specific question of using Theil-Sen slope (median pairwise slope) as a trend estimate: When using a Theil-Sen slope estimate, the trend of the sum can be quite different from the sum of the trends (the Theil-Sen slope is not linear in the observations). Here's a small example: Series A: -1.15 2.19 1.32 1.40 2.04 Series B: -2.63 -0.98 -1.23 -1.68 -5.86 The Theil-Sen slope for the first (unit-spaced) series is 0.5, the Theil-Sen slope for the second series is -0.4 (as in your question) but the slope for the sum is negative (indeed, it's -0.5575, more negative that for the second series).
Is the sum of trends of two time series the trend of the sum of the time series? Is the sum of trends of two time series the trend of the sum of the time series? - as a general question, it depends: if the estimator is linear in the data then yes, but in general, no. On the spec
54,130
Coefficient Significance in Regression with Arima Errors
The forecast package does forecasting. For that purpose, the significance of variables is irrelevant. What matters is whether a variable is useful for forecasting. The AIC is a good guide for selecting variables for forecasting, so the package minimizes the AIC. If you really want to do a significance test on a variable, just compute the t-statistics from the output. In the example provided, the t-statistic for income is 0.2028/0.0461 = 4.4. The p-value is 2*(1-pt(0.2028/0.0461, NROW(fpp2::uschange)-5)) = 1.8e-5
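In the same spirit, a small sketch that tabulates Wald statistics for every coefficient of a regression-with-ARIMA-errors fit. This assumes, as the numbers above suggest, a model for Consumption with Income as the regressor from fpp2::uschange; whether you compare the statistic against a normal reference (as here) or a t reference (as in the calculation above) makes little difference at these sample sizes.

library(forecast)
fit <- auto.arima(fpp2::uschange[, "Consumption"], xreg = fpp2::uschange[, "Income"])
se <- sqrt(diag(fit$var.coef))     # standard errors of all estimated coefficients
z  <- coef(fit) / se               # Wald z-statistics
round(cbind(estimate = coef(fit), se = se, z = z,
            p.value = 2 * (1 - pnorm(abs(z)))), 4)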
Coefficient Significance in Regression with Arima Errors
The forecast package does forecasting. For that purpose, the significance of variables is irrelevant. What matters is whether a variable is useful for forecasting. The AIC is a good guide for selectin
Coefficient Significance in Regression with Arima Errors The forecast package does forecasting. For that purpose, the significance of variables is irrelevant. What matters is whether a variable is useful for forecasting. The AIC is a good guide for selecting variables for forecasting, so the package minimizes the AIC. If you really want to do a significance test on a variable, just compute the t-statistics from the output. In the example provided, the t-statistic for income is 0.2028/0.0461 = 4.4. The p-value is 2*(1-pt(0.2028/0.0461, NROW(fpp2::uschange)-5)) = 1.8e-5
Coefficient Significance in Regression with Arima Errors The forecast package does forecasting. For that purpose, the significance of variables is irrelevant. What matters is whether a variable is useful for forecasting. The AIC is a good guide for selectin
54,131
Coefficient Significance in Regression with Arima Errors
If you aren't interested in realistic (i.e. wide) confidence limits then this is OK, but if you want good confidence limits then this has a negative impact. The AR/MA parameters are significant, but the ACF/PACF didn't warrant them and in essence mathematically they cancel each other out, so no harm done ... for this example. The variance just with the causal is .359252 and by adding 3 more parameters it becomes .3219, which is a non-significant reduction. If there had been an outlier in the last period then this would have a big impact on the forecast. There are outliers in this example, and if the forecast were made right after one of these it would have consequences. SPSS's Temporal Causal Model discusses this capability as well. Here is the model from Autobox (a software I am affiliated with) with a denominator lag and numerator operators on the variable income and 16 outlier variables.
Coefficient Significance in Regression with Arima Errors
If you aren't interested in realistic(ie wide) confidence limits then this is ok, but if you want good confidence limits then this has a negative impact. The AR/MA parameters are significant, but the
Coefficient Significance in Regression with Arima Errors If you aren't interested in realistic(ie wide) confidence limits then this is ok, but if you want good confidence limits then this has a negative impact. The AR/MA parameters are significant, but the ACF/PACF didn't warrant them and in essence mathematically they cancel each other out so no harm done....for this example. The variance just with the causal is .359252 and by adding 3 more parameters it becomes .3219 which is a non-significant reduction. If there had been an outlier in the last period then this will have a big impact in the forecast. There are outliers in this example and if the forecast was right after one of these it would have consequences. SPSS's Temporal Causal Model discusses this capability as well. Here is the model from Autobox(a software I am affiliated with) with a denominator lag and numerator operators on the variable income and 16 outlier variables.
Coefficient Significance in Regression with Arima Errors If you aren't interested in realistic(ie wide) confidence limits then this is ok, but if you want good confidence limits then this has a negative impact. The AR/MA parameters are significant, but the
54,132
How is the Akaike information criterion (AIC) affected by sample size?
There is no particular meaning to AIC for comparison between different data sets. Yes, the AIC value can change for increased $n$. However, AIC is self-referential, which means that one can only compare different models using the SAME data set, not different data sets. This is also tricky, for example, it applies to probably detecting better nested models (models that are in a set/subset format, that is, when all of the models tested can be obtained by eliminating parameters from the most inclusive model). Some experts suggest that AIC also applies to probably detecting better non-nested models, but there are counterexamples, see this Q/A. Perhaps a more meaningful question, that the OP question above is only indirectly implying, is "How well AIC can discriminate between two models when the sample is larger?" and the answer to that is apparently better for increasing $n$. This latter is not unexpected in the sense that AIC is only asymptotically correct, e.g., from Wikipedia, "We ... choose the candidate model that minimized the information loss. We cannot choose with certainty (Sic, italics are mine), because we do not know f (Sic, the unknown data generating process). Akaike (1974) showed, however, that we can estimate, via AIC, how much more (or less) information is lost by g1 than by g2. The estimate, though, is only valid asymptotically; if the number of data points is small, then some correction is often necessary (see AICc...)." Now arbitrary examples of how AIC changes. The first change we consider examines how AIC varies using the same random standard normal variate but different seeds. Shown is a histogram of 1000 repetitions of (normal distribution) model AIC values each from 100 random standard normal outcomes. This shows a distribution for which normalcy is not excluded with $\mu \to -497.672,\sigma \to 48.5034$. This illustrates that a mean AIC value for 1000 independent repetitions of $n=100$ is an educated guess for location of AIC. Next, we apply this "educated guess" and fit it to show the trend: This plot shows how mean AIC values (from 1000 independent trials) change when the number of random outcomes in each trial is $n=5,10,15,...,95,100$. This appears to be approximately a cubic with an SE of 1 AIC unit (R$^2=0.999964$). The meaning of this is like the sound of one hand clapping; all we have done is find a result that is consistent with AIC being a better discriminator for increasing $n$; without comparing to a second model for each trial we cannot detect anything. The only question remaining is why the AIC values increase for more data in the OP's question. Some software packages will sometimes show $-$AIC values in tables so that more is better, as opposed to less is better, but use the AIC values themselves for discriminating between models.
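A hedged R sketch of this kind of experiment follows. The absolute AIC values will not match the figures quoted above, which depend on the software and model used, but the qualitative behaviour is the same: for a fixed model, the location of the AIC distribution drifts systematically with $n$, which is exactly why AIC comparisons only make sense on the same data set.

aic_normal <- function(x) {                    # AIC of a normal model fitted by ML (k = 2)
  n <- length(x); s2 <- mean((x - mean(x))^2)  # MLE of the variance
  2 * 2 - 2 * sum(dnorm(x, mean(x), sqrt(s2), log = TRUE))
}
set.seed(1)
aics_100 <- replicate(1000, aic_normal(rnorm(100)))
c(mean = mean(aics_100), sd = sd(aics_100))    # spread of AIC at fixed n = 100
ns <- seq(5, 100, by = 5)
mean_aic <- sapply(ns, function(n) mean(replicate(1000, aic_normal(rnorm(n)))))
plot(ns, mean_aic, type = "b", xlab = "n", ylab = "mean AIC")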
How is the Akaike information criterion (AIC) affected by sample size?
There is no particular meaning to AIC for comparison between different data sets. Yes, the AIC value can change for increased $n$. However, AIC is self-referential, which means that one can only compa
How is the Akaike information criterion (AIC) affected by sample size? There is no particular meaning to AIC for comparison between different data sets. Yes, the AIC value can change for increased $n$. However, AIC is self-referential, which means that one can only compare different models using the SAME data set, not different data sets. This is also tricky, for example, it applies to probably detecting better nested models (models that are in a set/subset format, that is, when all of the models tested can be obtained by eliminating parameters from the most inclusive model). Some experts suggest that AIC also applies to probably detecting better non-nested models, but there are counterexamples, see this Q/A. Perhaps a more meaningful question, that the OP question above is only indirectly implying, is "How well AIC can discriminate between two models when the sample is larger?" and the answer to that is apparently better for increasing $n$. This latter is not unexpected in the sense that AIC is only asymptotically correct, e.g., from Wikipedia, "We ... choose the candidate model that minimized the information loss. We cannot choose with certainty (Sic, italics are mine), because we do not know f (Sic, the unknown data generating process). Akaike (1974) showed, however, that we can estimate, via AIC, how much more (or less) information is lost by g1 than by g2. The estimate, though, is only valid asymptotically; if the number of data points is small, then some correction is often necessary (see AICc...)." Now arbitrary examples of how AIC changes. The first change we consider examines how AIC varies using the same random standard normal variate but different seeds. Shown is a histogram of 1000 repetitions of (normal distribution) model AIC values each from 100 random standard normal outcomes. This shows a distribution for which normalcy is not excluded with $\mu \to -497.672,\sigma \to 48.5034$. This illustrates that a mean AIC value for 1000 independent repetitions of $n=100$ is an educated guess for location of AIC. Next, we apply this "educated guess" and fit it to show the trend: This plot shows how mean AIC values (from 1000 independent trials) change when the number of random outcomes in each trial is $n=5,10,15,...,95,100$. This appears to be approximately a cubic with an SE of 1 AIC unit (R$^2=0.999964$). The meaning of this is like the sound of one hand clapping; all we have done is find a result that is consistent with AIC being a better discriminator for increasing $n$; without comparing to a second model for each trial we cannot detect anything. The only question remaining is why the AIC values increase for more data in the OP's question. Some software packages will sometimes show $-$AIC values in tables so that more is better, as opposed to less is better, but use the AIC values themselves for discriminating between models.
How is the Akaike information criterion (AIC) affected by sample size? There is no particular meaning to AIC for comparison between different data sets. Yes, the AIC value can change for increased $n$. However, AIC is self-referential, which means that one can only compa
54,133
GLMM for count data using square root link in lme4
It looks very much like you have a case of complete separation: there is only one landform (ridge) that has seedlings, while the others had no seedlings at all, hence the large estimates ($|\hat \beta|>10$) and ridiculously large standard error estimates. Basically what's happening is that the baseline level ("abandoned") has an expected number of counts equal to zero for all plots, so the intercept $\beta_0$ - which is the expected log(counts) for the baseline level - should be estimated as $-\infty$ ... which messes up the Wald estimation of the uncertainty (the approximate, fast method that summary() uses). You can read more about complete separation elsewhere; it is more typically discussed in the context of logistic regression (in part because logistic regression is more widely used than count regression ...) Solutions: your square-root-link solution is reasonable (in this case the intercept is the expected $\sqrt{\textrm{counts}}$ in the baseline level, which is zero rather than $-\infty$); it will change the assumed distribution of the random effects slightly (i.e., Normal on the square-root rather than on the log scale), but that wouldn't worry me too much. If you had continuous covariates or interactions in the model, it would also change the interpretation of the fixed effects. you could use some kind of penalization (most conveniently in a Bayesian framework), as described in my answer to the linked question (and here, search for "complete separation") to keep the parameters reasonable.
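For what it's worth, a schematic lme4 comparison of the two links, assuming a hypothetical data frame dat with columns count, landform and plot; this is only meant to show where the symptoms appear, not to reproduce the model in the question.

library(lme4)
m_log  <- glmer(count ~ landform + (1 | plot), family = poisson(link = "log"),  data = dat)
m_sqrt <- glmer(count ~ landform + (1 | plot), family = poisson(link = "sqrt"), data = dat)
fixef(m_log);  sqrt(diag(vcov(m_log)))   # huge estimates and SEs flag the separation
fixef(m_sqrt); sqrt(diag(vcov(m_sqrt)))  # sqrt link keeps the all-zero baseline finite (zero)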
GLMM for count data using square root link in lme4
It looks very much like you have a case of complete separation: there is only one landform (ridge) that has seedlings, while other had no seedlings at al large estimates ($|\hat \beta|>10$), and r
GLMM for count data using square root link in lme4 It looks very much like you have a case of complete separation: there is only one landform (ridge) that has seedlings, while other had no seedlings at al large estimates ($|\hat \beta|>10$), and ridiculously large standard error estimates. Basically what's happening is that the baseline level ("abandoned") has an expected number of counts equal to zero for all plots, so the intercept $\beta_0$ - which is the expected log(counts) for the baseline level - should be estimated as $-\infty$ ... which messes up the Wald estimation of the uncertainty (the approximate, fast method that summary() uses). You can read more about complete separation elsewhere; it is more typically discussed in the context of logistic regression (in part because logistic regression is more widely used than count regression ...) Solutions: your square-root-link solution is reasonable (in this case the intercept is expected $\sqrt{\textrm{counts}}$ in the baseline level, which is zero rather than $-\infty$); it will change the assumed distribution of the random effects slightly (i.e., Normal on the square-root rather than on the log scale), but that wouldn't worry me too much. If you had continuous covariates or interactions in the model, it would also change the interpretation of the fixed effects. you could use some kind of penalization (most conveniently in a Bayesian framework), as described in my answer to the linked question (and here, search for "complete separation") to keep the parameters reasonable.
GLMM for count data using square root link in lme4 It looks very much like you have a case of complete separation: there is only one landform (ridge) that has seedlings, while other had no seedlings at al large estimates ($|\hat \beta|>10$), and r
54,134
GLMM for count data using square root link in lme4
Indeed this seems to be a separation issue. To account for these cases, in my GLMMadaptive package you can include a penalty for the fixed-effects coefficients in the form of a Student's-t density (i.e., for large enough df, equivalent to ridge regression). For a worked example, have a look at the last section of this vignette.
GLMM for count data using square root link in lme4
Indeed this seems to be a separation issue. To account for these cases, in my GLMMadaptive package you can include a penalty for the fixed-effects coefficients in the form of a Students-t density (i.e
GLMM for count data using square root link in lme4 Indeed this seems to be a separation issue. To account for these cases, in my GLMMadaptive package you can include a penalty for the fixed-effects coefficients in the form of a Students-t density (i.e., for large enough df equivalent to ridge regression). For a worked example, have a look at the last section of this vignette.
GLMM for count data using square root link in lme4 Indeed this seems to be a separation issue. To account for these cases, in my GLMMadaptive package you can include a penalty for the fixed-effects coefficients in the form of a Students-t density (i.e
54,135
Sufficient statistic when $X\sim U(\theta,2 \theta)$
Regarding 1., note that the interpretation of a sufficient statistic is: "no other statistic that can be calculated from the same sample provides any additional information as to the value of the parameter". So, we need both the statistics to provide the "most information" possible about the value of the parameter. Not necessarily "full information". Regarding 2., note that your likelihood can be written as $$ f_{\theta} (x_1, \dots, x_n) = \dfrac{1}{\theta^n} I\left( \dfrac{\max x_i}{2} \leq \theta \leq \min x_i \right)\,.$$ The MLE is the statistic that maximizes the likelihood. You can show that the likelihood is a decreasing function of $\theta$. So the likelihood is maximized at the smallest value $\theta$ is allowed to take, which is $(\max x_i)/2$.
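A quick numerical sanity check of that last claim on simulated data; the grid endpoints are exactly the bounds from the indicator function above, so the whole grid lies inside the support.

set.seed(1)
theta <- 3
x <- runif(20, theta, 2 * theta)
loglik <- function(th) sum(dunif(x, th, 2 * th, log = TRUE))   # = -n*log(th) on the admissible range
th_grid <- seq(max(x) / 2, min(x), length.out = 1000)
th_grid[which.max(sapply(th_grid, loglik))]   # maximizer of the likelihood on the grid
max(x) / 2                                    # agrees with the claimed MLE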
Sufficient statistic when $X\sim U(\theta,2 \theta)$
Regarding 1., note that interpretation of a sufficient statistic is: "no other statistic that can be calculated from the same sample provides any additional information as to the value of the paramete
Sufficient statistic when $X\sim U(\theta,2 \theta)$ Regarding 1., note that interpretation of a sufficient statistic is: "no other statistic that can be calculated from the same sample provides any additional information as to the value of the parameter". So, we need both the statistics to provide the "most information" possible about the value of the parameter. Not necessarily "full information". Regarding 2. Note that your likelihood can be written as $$ f_{\theta} (x_1, \dots, x_n) = \dfrac{1}{\theta^n} I\left( \dfrac{\max x_i}{2} \leq \theta \leq \min x_i \right)\,.$$ The MLE is the statistic that maximizes the likelihood. You can show that the likelihood is a decreasing function of $\theta$. So the likelihood is maximized on the smallest value $\theta$ is allowed to take, which is $(\max x_i)/2$
Sufficient statistic when $X\sim U(\theta,2 \theta)$ Regarding 1., note that interpretation of a sufficient statistic is: "no other statistic that can be calculated from the same sample provides any additional information as to the value of the paramete
54,136
Show that if $E\psi(x-\theta)= 0 $ then $P(X< \theta) \leq p \leq P(X \leq \theta)$
Your method assumes that $X$ is a continuous random variable, which is not stated as a condition of the problem. It is possible to get the result in a more general case, so long as there is some non-zero probability that $X \neq \theta$. From your stated form for $\psi$ you have: $$\psi(X-\theta) = \mathbb{I}(X < \theta) - p \cdot \mathbb{I}(X \neq \theta).$$ Hence, taking the expected value you get the function: $$\begin{equation} \begin{aligned} \mu(\theta) \equiv \mathbb{E}(\psi(X-\theta)) &= \mathbb{P}(X < \theta) - p \cdot \mathbb{P}(X \neq \theta). \\[6pt] \end{aligned} \end{equation}$$ Now, for the case where $\mathbb{P}(X \neq \theta)=0$ you have $\mu(\theta)=0$ for any $p \in \mathbb{R}$, so the implication you want to prove is not present in that case. For the case where $\mathbb{P}(X \neq \theta)>0$ the expected value condition $\mu(\theta)=0$ implies that: $$p = \frac{\mathbb{P}(X < \theta)}{1-\mathbb{P}(X = \theta)}.$$ With a bit of algebra you can obtain the required inequalities. The first is established by: $$\begin{equation} \begin{aligned} p = \frac{\mathbb{P}(X < \theta)}{1-\mathbb{P}(X = \theta)} \geqslant\mathbb{P}(X < \theta). \end{aligned} \end{equation}$$ The second is established as: $$\begin{equation} \begin{aligned} p &= \frac{\mathbb{P}(X < \theta)}{1-\mathbb{P}(X = \theta)} \\[6pt] &\leqslant \frac{\mathbb{P}(X < \theta) + \mathbb{P}(X = \theta) \cdot \mathbb{P}(X > \theta)}{1-\mathbb{P}(X = \theta)} \\[6pt] &= \frac{\mathbb{P}(X < \theta) + \mathbb{P}(X = \theta) \cdot (1-\mathbb{P}(X < \theta)-\mathbb{P}(X = \theta))}{1-\mathbb{P}(X = \theta)} \\[6pt] &= \frac{\mathbb{P}(X < \theta) + \mathbb{P}(X = \theta) - \mathbb{P}(X = \theta)\mathbb{P}(X < \theta)-\mathbb{P}(X = \theta)^2}{1-\mathbb{P}(X = \theta)} \\[6pt] &= \frac{(\mathbb{P}(X < \theta) + \mathbb{P}(X = \theta)) \cdot (1 - \mathbb{P}(X = \theta))}{1-\mathbb{P}(X = \theta)} \\[6pt] &= \mathbb{P}(X < \theta) + \mathbb{P}(X = \theta) \\[8pt] &= \mathbb{P}(X \leqslant \theta). \\[6pt] \end{aligned} \end{equation}$$
Show that if $E\psi(x-\theta)= 0 $ then $P(X< \theta) \leq p \leq P(X \leq \theta)$
Your method assumes that $X$ is a continuous random variable, which is not stated as a condition of the problem. It is possible to get the result in a more general case, so long as there is some non-
Show that if $E\psi(x-\theta)= 0 $ then $P(X< \theta) \leq p \leq P(X \leq \theta)$ Your method assumes that $X$ is a continuous random variable, which is not stated as a condition of the problem. It is possible to get the result in a more general case, so long as there is some non-zero probability that $X \neq \theta$. From your stated form for $\psi$ you have: $$\psi(X-\theta) = \mathbb{I}(X < \theta) - p \cdot \mathbb{I}(X \neq \theta).$$ Hence, taking the expected value you get the function: $$\begin{equation} \begin{aligned} \mu(\theta) \equiv \mathbb{E}(\psi(X-\theta)) &= \mathbb{P}(X < \theta) - p \cdot \mathbb{P}(X \neq \theta). \\[6pt] \end{aligned} \end{equation}$$ Now, for the case where $\mathbb{P}(X \neq \theta)=0$ you have $\mu(\theta)=0$ for any $p \in \mathbb{R}$, so the implication you want to prove is not present in that case. For the case where $\mathbb{P}(X \neq \theta)>0$ the expected value condition $\mu(\theta)=0$ implies that: $$p = \frac{\mathbb{P}(X < \theta)}{1-\mathbb{P}(X = \theta)}.$$ With a bit of algebra you can obtain the required inequalities. The first is established by: $$\begin{equation} \begin{aligned} p = \frac{\mathbb{P}(X < \theta)}{1-\mathbb{P}(X = \theta)} \geqslant\mathbb{P}(X < \theta). \end{aligned} \end{equation}$$ The second is established as: $$\begin{equation} \begin{aligned} p &= \frac{\mathbb{P}(X < \theta)}{1-\mathbb{P}(X = \theta)} \\[6pt] &\leqslant \frac{\mathbb{P}(X < \theta) + \mathbb{P}(X = \theta) \cdot \mathbb{P}(X > \theta)}{1-\mathbb{P}(X = \theta)} \\[6pt] &= \frac{\mathbb{P}(X < \theta) + \mathbb{P}(X = \theta) \cdot (1-\mathbb{P}(X < \theta)-\mathbb{P}(X = \theta))}{1-\mathbb{P}(X = \theta)} \\[6pt] &= \frac{\mathbb{P}(X < \theta) + \mathbb{P}(X = \theta) - \mathbb{P}(X = \theta)\mathbb{P}(X < \theta)-\mathbb{P}(X = \theta)^2}{1-\mathbb{P}(X = \theta)} \\[6pt] &= \frac{(\mathbb{P}(X < \theta) + \mathbb{P}(X = \theta)) \cdot (1 - \mathbb{P}(X = \theta))}{1-\mathbb{P}(X = \theta)} \\[6pt] &= \mathbb{P}(X < \theta) + \mathbb{P}(X = \theta) \\[8pt] &= \mathbb{P}(X \leqslant \theta). \\[6pt] \end{aligned} \end{equation}$$
Show that if $E\psi(x-\theta)= 0 $ then $P(X< \theta) \leq p \leq P(X \leq \theta)$ Your method assumes that $X$ is a continuous random variable, which is not stated as a condition of the problem. It is possible to get the result in a more general case, so long as there is some non-
54,137
Show that if $E\psi(x-\theta)= 0 $ then $P(X< \theta) \leq p \leq P(X \leq \theta)$
There are many ways to approach this problem. The point of the following is to take you through a process of analyzing the question, performing the requisite calculations as simply and easily as possible, developing a strategy to carry out the proof, and applying that strategy. A section of concluding remarks highlights what has been achieved by this method. Analysis Recall that a random variable $X$ assigns a number to each element $\omega$ of a sample space $\Omega.$ The expression $Y=\psi(X-\theta)$ is another way to assign numbers to elements of $\Omega;$ namely, for each $\omega,$ compute $\psi(X(\omega)-\theta).$ Notice that the resulting number is one of at most three values: $-p, 0,$ and $1-p.$ As you will see, we will be able to compute the probabilities of any subset of these values. This makes $Y$ a random variable, too: and it's a discrete one. This simplifies the calculations we might need to do. At this point it is evident we need to accomplish two things: (1) we need to compute an expectation and (2) we will need to manipulate inequalities algebraically. Let's take these in turn. Preliminary calculations The expectation of $Y$ can be found from the very definition: multiply its values by its probabilities. Let's tabulate them: For $X \lt \theta,$ $Y=\psi(X-\theta) = 1-p.$ This has probability $\Pr(X\lt \theta).$ For $X=\theta,$ $Y=\psi(0) = 0.$ This has probability $\Pr(X=\theta).$ For $X \gt \theta,$ $Y = -p.$ This has probability $\Pr(X \gt \theta).$ The expectation of $Y$ is the sum of its values times their probabilities: $$\mathbb{E}(\psi(X-\theta)) = \mathbb{E}[Y] = (1-p)\Pr(X\lt\theta) + 0\Pr(X=\theta) + (-p)\Pr(X \gt \theta).\tag{1}$$ Strategy for the proof The simplest way to carry out the required demonstration is to show its contrapositive: that is, if $\Pr(X \lt \theta)\gt p$ or $\Pr(X \le \theta) \lt p,$ we need to conclude that $\mathbb{E}(\psi(X-\theta))$ is not zero. In the first case where $\Pr(X\lt \theta)\gt p,$ the additivity of mutually exclusive events and the axiom of unit measure--two of the axioms of probability--guarantee that $$\eqalign{ \Pr(X \gt \theta) &= 1 - (\Pr(X\lt\theta) + \Pr(X=\theta)) \\ &\le 1 - \Pr(X\lt \theta)\\ & \lt 1-p. }$$ Substituting these two inequalities into $(1)$ gives $$\eqalign{ \mathbb{E}(\psi(X-\theta)) &= (1-p)\Pr(X\lt\theta) -p\Pr(X \gt \theta) \\ &\gt (1-p)p - p(1-p) = 0, }$$ proving the expectation cannot be zero. The demonstration in the second case parallels this one, QED. Comments Because this proof is completely elementary--it relies only on the definition of expectation and axioms of probability--it reveals how little needs to be assumed and how general the result is: It is not necessary to assume $0\lt p\lt 1:$ the assertion that was proved is true for all $p.$ This demonstration works even when $p=0$ or $1-p=0$ (whereas other attempts might fail for these values, in case they appear in the denominators of any fractions). It was not necessary to assume $X$ has a density $f.$ No higher mathematical concepts, such as integration, were needed. Indeed, we did not even have to use a distribution function for $X$: we worked directly with the relevant probabilities.
54,138
Why Standard Deviation is more popular than Mean Absolute Deviation? [duplicate]
Historically, Laplace started with the expected absolute deviation from the expectation and got mired in computational issues beyond the Laplace (or double exponential) distribution, while Legendre and Gauss advocated the expected square difference from the expectation, which is more naturally connected with the Normal or Gaussian distribution. Portnoy and Koenker wrote a nice paper called The Gaussian Hare and the Laplacian Tortoise (!) on that issue, including a parody of the Hare and the Tortoise fable with Laplace's and Gauss' heads. The issue is covered in depth in this earlier (2015) X Validated question. (Which makes the current one a potential duplicate.)
54,139
How to represent the probability of a point belonging to a cluster?
In general, this is a challenging problem, especially given the constraint that the relative positions in 2D space should be retained. In the absence of that constraint, I would recommend a stacked bar plot. With thin bars and a sorted dataset, colours can easily be used to indicate the probability of belonging to different clusters for a fairly substantial number of points. Plots such as these are common in population genetics and can convey a fair amount of useful information, such as in this example.

If we are to stick with the constraint of retaining relative positions in 2 dimensions, I can think of one solution that would work for modest-sized datasets with a small number of clusters. For these cases, you can plot each point as a small pie; the segments of the pie denote the probability of belonging to each cluster. Here is a worked example using 3 clusters:

    # Loading required libraries
    library(e1071)
    library(ggplot2)
    library(scatterpie)

    # Generating data frame
    dat <- data.frame(a = c(rnorm(50, mean = 10, sd = 3),
                            rnorm(50, mean = 20, sd = 3),
                            rnorm(50, mean = 30, sd = 3)),
                      b = c(rnorm(50, mean = 10, sd = 5),
                            rnorm(50, mean = 20, sd = 3),
                            rnorm(50, mean = 30, sd = 3)))

    # Identifying clusters and calculating cluster probabilities using
    # fuzzy c-means clustering
    clustdat <- cmeans(dat, centers = 3)

    # Adding cluster information to dataset
    dat$clusters <- as.factor(clustdat$cluster)
    dat$A <- clustdat$membership[,1]
    dat$B <- clustdat$membership[,2]
    dat$C <- clustdat$membership[,3]

    # Plotting
    ggplot() +
      geom_scatterpie(aes(a, b, group = clusters), data = dat, cols = LETTERS[1:3])

Note that this may be useful with >2 dimensions as well, by combining this with some sort of dimension reduction technique (for plotting - the clustering can be done in multidimensional space).
54,140
How to represent the probability of a point belonging to a cluster?
Maybe you don't need to exactly encode the distribution. Define a color for "mixed", e.g., gray. Then interpolate between your cluster palette and gray depending on the difference between $p_\max$ and the second largest probability.
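A minimal R sketch of this idea; the membership matrix probs, the base palette and the grey anchor colour are all assumptions for illustration, not taken from the question:

    # Assumes 'probs' is a matrix of cluster membership probabilities
    # (rows = points, columns = clusters) and 'palette' has one colour per cluster.
    mix_colour <- function(p_row, palette, grey = "grey70") {
      ord  <- order(p_row, decreasing = TRUE)
      gap  <- p_row[ord[1]] - p_row[ord[2]]          # p_max minus the second largest probability
      ramp <- grDevices::colorRamp(c(grey, palette[ord[1]]))
      val  <- ramp(gap)                              # grey when ambiguous, full cluster colour when certain
      grDevices::rgb(val[1], val[2], val[3], maxColorValue = 255)
    }
    # point_cols <- apply(probs, 1, mix_colour, palette = c("tomato", "steelblue", "seagreen"))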
54,141
To choose between linear or generalised mixed effects model, what is the most important thing to consider?
"Linear mixed effects models are for continuous variables. Generalised ones are for non continuous, e.g., binomial."

This is not true. See the wiki page for generalized linear models: e.g., the gamma and the exponential distribution both give generalized linear models, and both are continuous. The difference is that generalized linear models allow for distributions other than the normal distribution.

"We have a task in which subjects can get each item correct or incorrect. I'd say that is binomial at the level of subject at least."

Yes, that is binomial data.

"Other member of the teams says the most important thing to make this decision is the research question, which is 'how many items out of N they will get correct at each test', and suggested to treat the variable as continuous and use a linear model."

I assume that you have a fixed number of $n_i$ trials for each subject $i$. In that case it is a fraction that can take the values $\{0, 1/n_i, 2/n_i, \dots, 1\}$. So you should use the binomial distribution, as Dimitris Rizopoulos writes.

"Also, we got many 0s (almost 80% in the last of 3 tests), so maybe we shouldn't even use binomial but zero inflated binomial. This would be important if we decide to use the brms package for R."

As far as I gather, you have some number of subjects, $k$, who each make some number of guesses, $n_1,\dots,n_k$. Then you model $E(y_i/n_i)$ where $y_i$ is the number of correct guesses from subject $i$. Assuming that you have no covariates, the model with random effects could be $$g(E(y_i/n_i)) = \mu + \epsilon_i,\qquad \epsilon_i\sim N(0,\sigma^2)$$ where $g$ is a link function (e.g., the logit), $\mu$ is the logit of the probability of a subject guessing correctly when the random effect is zero, and $\epsilon_i$ is the random effect of subject $i$. Notice that this model easily yields "a lot of zeroes" if $\mu$ is sufficiently small and you use the logit link function. Hence, a lot of zeroes may not be a good argument for using a zero-inflated binomial in this case.
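For concreteness, a minimal sketch of how such a binomial mixed model could be fitted in R; the data frame d and its column names correct, n and subject are hypothetical placeholders:

    library(lme4)
    # Random intercept per subject, binomial response given as (successes, failures)
    fit <- glmer(cbind(correct, n - correct) ~ 1 + (1 | subject),
                 family = binomial, data = d)
    summary(fit)

    # Roughly equivalent Bayesian version with brms:
    # library(brms)
    # fit_b <- brm(correct | trials(n) ~ 1 + (1 | subject),
    #              family = binomial(), data = d)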
54,142
To choose between linear or generalised mixed effects model, what is the most important thing to consider?
The number of successes out of N trials is a Binomial distribution. Hence, it seems that you should go for a mixed-effects logistic regression.
54,143
Can PMF have value greater than 1?
Your question has two parts.

Probability mass function: a PMF assigns probability to discrete values, so $f(x) = \Pr(X = x)$ is itself a probability and therefore can never exceed 1.

Probability density function: here there are no discrete values, and the probability of any single point is 0, so we work with areas under the curve instead. The probability may be concentrated on a very narrow range of values, and since the total area under the PDF must equal 1, the density can exceed 1: if the interval carrying essentially all of the probability is shorter than 1, the density must be greater than 1 somewhere on that interval to keep the total area equal to 1.
54,144
Can PMF have value greater than 1?
No, a probability mass function cannot have a value above 1. Quite simply, all the values of the probability mass function must sum to 1. Also, they must be non-negative. From here it follows that, if one of the values exceeded 1, the whole sum would exceed 1. And that is not allowed.
54,145
Can PMF have value greater than 1?
By PMF, I assume you mean what is usually called the pdf. For a continuous distribution the answer is yes. What has to be true is that: $$\int_{-\infty}^{\infty}p(x) dx = 1 $$ Imagine a normal distribution, centered at zero (so a mean of zero), and a tiny standard deviation (say, .01). Almost all of the points on that curve that will contribute much to that integral will be between -0.1 and 0.1. So, thinking geometrically and approximating with a rectangle, the height is going to be at least something like 5 (it will actually be a lot bigger, because most of the integral will come from points between -.04 and .04).

The pmf of a non-continuous (discrete) variable can never be more than 1, since (1) its values must all add to 1 and (2) they must all be non-negative. The largest it can be is if there is only one possible outcome, which then has P(x) = 1.

By the way, the height (greater than 1) has no real meaning in itself, only as part of the integral. It's really tempting for people, when moving from discrete to continuous, to plot out a normal curve with mean = 0 and stdev = 1. If I recall, the high point has a value of around 0.4. But that doesn't mean you will get zero with probability 0.4; in fact, the probability of getting exactly zero (or any other specific number) is zero. But if you change the bounds of the integral to be $x_1$ and $x_2$, you get the probability of getting a number between $x_1$ and $x_2$.
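A quick numerical illustration of both points in base R:

    dnorm(0, mean = 0, sd = 0.01)                 # about 39.89: a density value can exceed 1
    pnorm(0.1, 0, 0.01) - pnorm(-0.1, 0, 0.01)    # about 1: the total probability is still 1
    dnorm(0, mean = 0, sd = 1)                    # about 0.399: the standard normal peak mentioned above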
54,146
mlogit package fails to recover synthetic mixed logit model
You appear to have hit upon an unlucky combination of optimization parameters, specifically, with respect to the Halton pseudo-random sequence, which may possibly have a bug in it. BFGS appears to be stopping prematurely with R = 300, but not with other significantly smaller or larger values. Fortunately, you don't need large values of R (or Halton at all) in this case.

On my initial run through your code, I got the same results as you did, with runtime statistics indicating that 8 iterations were required for BFGS to converge. I then changed R in the function call to equal 30:

    > m.mixed <- mlogit(choice ~ price + time + bus | 0,
    +                   data=logit.data,
    +                   rpar= c(bus = 'n'),
    +                   R = 30, halton = NA)
    >
    > summary(m.mixed)

    Call:
    mlogit(formula = choice ~ price + time + bus | 0, data = logit.data,
        rpar = c(bus = "n"), R = 30, halton = NA)

    Frequencies of alternatives:
        car red.bus   train
     0.1317  0.4084  0.4599

    bfgs method
    22 iterations, 0h:1m:28s
    g'(-H)^-1g = 3.83E-07
    gradient close to zero

    Coefficients :
            Estimate Std. Error  z-value Pr(>|z|)
    price  -0.988457   0.026861 -36.7989   <2e-16 ***
    time   -0.990255   0.032661 -30.3195   <2e-16 ***
    bus    -0.118121   0.227826  -0.5185   0.6041
    sd.bus 10.369252   0.846377  12.2513   <2e-16 ***
    ---
    Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

    Log-Likelihood: -8743.9

Note that not only are the coefficient estimates correct, in the sense of being reasonably close to the actual values given their standard errors, the runtime is 40% that of the R = 300 case, although requiring 22 BFGS iterations instead of 8. With R = 100, 22 iterations were also needed, the runtime increased by a little over 30% to about half that required in the initial case, and the coefficient estimates were essentially the same as in the R = 30 run:

    Coefficients :
            Estimate Std. Error  z-value Pr(>|z|)
    price  -0.989468   0.026877 -36.8144   <2e-16 ***
    time   -0.991518   0.032679 -30.3410   <2e-16 ***
    bus    -0.108744   0.226858  -0.4793   0.6317
    sd.bus 10.316022   0.841978  12.2521   <2e-16 ***

Avoiding the default Halton sequence altogether also fixed the problem:

    > m.mixed <- mlogit(choice ~ price + time + bus | 0,
    +                   data=logit.data,
    +                   rpar= c(bus = 'n'),
    +                   R = 300, halton = NULL)
    >
    > summary(m.mixed)

    Call:
    mlogit(formula = choice ~ price + time + bus | 0, data = logit.data,
        rpar = c(bus = "n"), R = 300, halton = NULL)

    *** blah blah blah ***

    Coefficients :
            Estimate Std. Error  z-value Pr(>|z|)
    price  -0.988426   0.026859 -36.8006   <2e-16 ***
    time   -0.990478   0.032655 -30.3314   <2e-16 ***
    bus    -0.143370   0.229167  -0.6256   0.5316
    sd.bus 10.382572   0.847424  12.2519   <2e-16 ***

The runtime in this case was still 10% lower than in the Halton case, and BFGS required 23 iterations to converge. That 8 iterations in the R = 300 case is definitely an outlier. Setting R = 400 and halton = NA also generated "correct" results; however, R = 299 broke the estimation again, BFGS requiring 12 iterations and the estimate of sd.bus = 15.248....

EDIT: I also tried a different seed with 4000 samples, but R = 300 and halton = NA still generated bad results, even worse ones than in the original case as it happened. Reparameterizing the call to specify the Halton prime and drop parameters gave erratic results; prime=11 worked, but prime=29 failed miserably with R=300. I then went through the mlogit R code (thanks for finding it, @khoda!), but the Halton sequence code works correctly. Multiple other tests, combined with the OP's tests noted in comments, lead me to conclude that Halton doesn't work consistently well in the one-dimensional case, at least for this problem.

In actual practice, where the true parameters are unavailable for comparison with the estimates, it would be necessary to try several different parameterizations of halton and R in the mlogit call, and check for consistency of the results (and the value of the log likelihood, I suspect). Avoiding Halton altogether and specifying an increasing sequence of R values for the random number generator until stable estimates are achieved is an alternative that would also likely be workable, runtime considerations aside.
54,147
mlogit package fails to recover synthetic mixed logit model
I believe that the BFGS implementation is the culprit here. My first two clues were:

- Calling mlogit() with the argument method='bhhh' instead of the default bfgs resulted in much more accurate estimates.
- When I obtained inaccurate estimates from bfgs, the stop condition for the optimizer was "last step couldn't find higher value", suggesting that the BFGS step was not an ascent direction.

I followed the methodology found in L-BFGS-B FORTRAN SUBROUTINES FOR LARGE SCALE BOUND CONSTRAINED OPTIMIZATION, which I quote here:

"If the line search is unable to find a point with a sufficiently lower value of the objective after 20 evaluations of the objective function we conclude that the current direction is not useful. In this case all correction vectors are discarded and the iteration is restarted along the steepest descent direction"

I updated the mlogit.optim() function in mlogit/R/mlogit.tools.R so that the BFGS approximation of the inverse Hessian is reset to the identity if an ascent step is not found in the line search. I capped the maximum number of resets at 10 (in these tests, I never hit the max). The update looks like this (the first line in this copy/paste is unchanged):

    # eval the function and compute the gradient and the hessian
    x <- eval(f, parent.frame())
    if (is.null(x)){
      if(method == 'bfgs' && num.bfgs.reset < 10) {
        num.bfgs.reset <- num.bfgs.reset + 1
        Hm1 <- diag(nrow(Hm1))
        x <- oldx
        next  # try again
      } else {
        ## x is null if steptol is reached
        code = 3
        break
      }
    }

Running the same simulation as before, I get a much more accurate estimate of the standard deviation:

    Call:
    mlogit(formula = choice ~ price + time + bus | 0, data = logit.data,
        rpar = c(bus = "n"), R = 300, halton = NA, method = "bfgs")

    Frequencies of alternatives:
        car red.bus   train
     0.1317  0.4084  0.4599

    bfgs method
    21 iterations, 0h:3m:50s
    g'(-H)^-1g = 1.8E-06
    successive function values within tolerance limits

    Coefficients :
            Estimate Std. Error  z-value Pr(>|z|)
    price  -0.989864   0.026878 -36.8276   <2e-16 ***
    time   -0.991994   0.032682 -30.3530   <2e-16 ***
    bus    -0.118717   0.228376  -0.5198   0.6032
    sd.bus 10.357554   0.847985  12.2143   <2e-16 ***
    ---
    Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

    Log-Likelihood: -8743.2

    random coefficients
        Min.   1st Qu.     Median       Mean  3rd Qu. Max.
    bus -Inf -7.104781 -0.1187173 -0.1187173 6.867347  Inf

I get similar results if I change R, generate new datasets, etc.
54,148
Lavaan SEM Ordinal and Categorical variables
Yes, there are special ways to handle ordinal and binary variables in lavaan: you can enter them as numeric variables, and then, when you use the sem() function, you specify which ones are ordinal using the ordered argument. I wrote up a longer response but then came across this link... That should give you everything you need to know.
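For reference, a minimal sketch of what that call might look like; the model string and the variable names listed in ordered are placeholders, not taken from the question:

    library(lavaan)
    fit <- sem(model, data = dat,
               ordered = c("gender", "employment", "education"))  # declare the categorical indicators
    summary(fit, fit.measures = TRUE)
    # Once 'ordered' is set, lavaan estimates polychoric/tetrachoric correlations for those
    # variables and, by default, switches to a robust weighted least squares (WLSMV-type) estimator.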
54,149
Lavaan SEM Ordinal and Categorical variables
As both @Dimitris Rizopoulos and @Jeremy Miles say, it is possible to fit an SEM using categorical data (i.e., which includes your dichotomous and ordinal variables). There are generally two methods used to go about doing this$^1$. The first is the direct method, which treats categorical data as continuous and, as a result, estimates model parameters using the sample correlation/covariance matrix. The second method goes by many names, but here I will refer to it as the underlying latent response method, since it invokes the presence of an underlying latent response/variable through the use of tetrachoric/polychoric correlations to estimate model parameters. Below is some code for fitting your model using each method:

    ## Lavaan model
    # Note the model is the same across both methods.
    model <- '
      socio =~ x1 + x2 + x3
      eco   =~ x4 + x5 + x7
      eco ~ socio
    '

    ## Fitting models
    # Fit using the direct method
    fit1 <- sem(model, data = dataset2)

    # Fit using the underlying latent response method
    fit2 <- sem(model, data = dataset2, ordered = colnames(dataset2))

So, to answer your first question: yes, SEM models may be fit using dichotomous and/or ordinal data, and there are multiple ways to do so (for more information, see Wirth & Edwards 2007).

Regarding whether the categorical nature of your data is related to your convergence issues, I agree with @Jeremy Miles that we need more information to answer this question. One possible reason I can think of from reading your question is that it could be related to your data. Variables such as employment and gender in particular are seldom used as indicator variables in latent variable models. While such variables likely display meaningful bivariate associations (e.g., I am sure all variables that load onto socio have meaningful bivariate associations), I am not sure whether a measurement model is a correct way to account for dependencies between variables that load onto the same latent construct. Put differently, it may be a better idea to use something like path analysis to test the relationships among x1-x6, instead of using an SEM$^2$.

$^1$ Note that within each method, there are still important decisions to make, such as the choice of estimation method. For more information on this, see Wirth & Edwards 2007.

$^2$ Note that path analysis is simply SEM without the measurement model, and therefore can be estimated using most (if not all) SEM software packages such as lavaan and Mplus.

References

Wirth, R. J., & Edwards, M. C. (2007). Item factor analysis: current approaches and future directions. Psychological Methods, 12(1), 58.
54,150
Lavaan SEM Ordinal and Categorical variables
Ordinal and binary variables are fine in SEM. The fact that the model does not converge is (mostly) unrelated. We need more information to diagnose that.
54,151
SVM: Why alpha for non support vector is zero and why most vectors have zero alpha?
I have found the answer to my question, which can be explained geometrically very well.

We know that the complementary slackness condition of the KKT conditions says: $$\alpha_i\geq0, \qquad \alpha_i\bigl(y_i(w^Tx_i + b) - 1\bigr) = 0$$ Therefore, at a KKT point at least one of the following cases holds:

- Case 1: $\alpha_i=0$
- Case 2: $y_i(w^Tx_i +b) - 1 =0$

Furthermore, we know that the hyperplanes of the margins of the SVM have the following equations:

$H_1 = \{x: w^Tx + b = 1\}$
$H_{-1} = \{x:w^Tx + b = -1\}$

Using the margins, the following halfspaces are created:

$H_1^+ = \{x: w^Tx + b > 1\}$
$H_{-1}^- = \{x:w^Tx + b < -1\}$

Thus, for any $x_i$ (with $y_i=1$ or $y_i=-1$) that is correctly classified and lies strictly inside the correct halfspace, we have $$y_i(w^Tx_i +b) -1 > 0.$$ Therefore, for these points "Case 2" is violated, so "Case 1", i.e. $\alpha_i = 0$, must hold. In other words, $\alpha_i = 0$ for all points that are correctly classified and lie strictly inside their halfspace.

Hence, $\alpha_i$ can only be different from $0$ if "Case 2" is true, i.e. $y_i(w^Tx_i+b) - 1 = 0$. And this holds only for $x \in H_1$ or $x \in H_{-1}$, which are the points that lie on the margin hyperplanes, and there are only a limited number of such points. Therefore, $\alpha_i = 0$ for most points, except for the points that lie on the margin, which are limited in number.
54,152
SVM: Why alpha for non support vector is zero and why most vectors have zero alpha?
A support vector is, by definition, a vector whose $\alpha$ is non-zero. It is a definition, so there is nothing to prove from the equation here.
54,153
SVM: Why alpha for non support vector is zero and why most vectors have zero alpha?
Support vectors can be defined as those vectors that lie on the positive or negative hyperplane, i.e. those vectors for which $y_i (w^Tx_i + b) -1=0$. For non-support vectors, $y_i (w^Tx_i + b) -1$ is strictly positive. The Lagrangian (primal) formulation of the problem is given by: $$L_p = \min_{w,b}\,\max_{\alpha\geq 0} \left(\dfrac{1}{2}\|w\|^2 - \sum_i \alpha_i\bigl(y_i (w^Tx_i + b) -1\bigr)\right)$$ Thus, for non-support vectors the inner maximization sets $\alpha_i = 0$: since $y_i (w^Tx_i + b) -1 > 0$ for these points, any $\alpha_i > 0$ would only decrease the objective.
54,154
Computing Highest Density Region given multivariate normal distribution with dimension $d$ > 3
The highest density region of an $N(0,H)$ random variable is an ellipsoid centered at its mean, $0$, and oriented per the covariance matrix $H$. The cutoff value for the ellipsoid can be determined from the Chi-square distribution with $d$ degrees of freedom. Let $y$ be the $0.95$ quantile of the Chi-square distribution with $d$ degrees of freedom, i.e. the value such that $P(\chi^2_d \le y) = 0.95$. Then the highest density region capturing $0.95$ probability of the $N(0,H)$ is $$\{x: x^TH^{-1}x \le y\}$$ For instance, when $d = 10$, $y = 18.307038$.
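In R this cutoff, and a membership test for any given point, is straightforward; H and x below stand for whatever covariance matrix and point you are working with:

    y <- qchisq(0.95, df = 10)   # 18.30704, the value quoted above for d = 10
    # A point x lies inside the 95% HDR of N(0, H) exactly when
    # drop(t(x) %*% solve(H) %*% x) <= qchisq(0.95, df = length(x))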
54,155
Does decision tree need to use the same feature to split in the same layer?
The second option. There's no reason to constrain a tree to split on the same variable at all nodes at a given level.
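If it helps to see this concretely, here is a small simulated example using rpart (one of many tree implementations); the data-generating process is invented so that the relevant feature differs between the two branches of the root split:

    library(rpart)
    set.seed(1)
    n <- 500
    d <- data.frame(x1 = runif(n), x2 = runif(n), x3 = runif(n))
    # x2 determines the class when x1 > 0.5; x3 determines it otherwise
    d$y <- factor(ifelse(d$x1 > 0.5, d$x2 > 0.2, d$x3 > 0.8))
    fit <- rpart(y ~ x1 + x2 + x3, data = d, method = "class")
    print(fit)  # in this simulation the root splits on x1 and its two children
                # split on x2 and x3 respectively, i.e. different features at the same depth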
54,156
Does decision tree need to use the same feature to split in the same layer?
Actually, there is a reason to do such a thing. Moreover, sometimes both the feature and the threshold are the same for a whole level. Read about the oblivious decision trees used in the CatBoost algorithm - https://towardsdatascience.com/introduction-to-gradient-boosting-on-decision-trees-with-catboost-d511a9ccbd14 Such trees predict faster (because you can avoid evaluating an if-statement for each object separately) and they can solve some problems better (the XOR problem, as a toy example).
54,157
Constant of Laplace approximation
This example is indeed rather poorly conducted and full of typos, apologies from one author!!!

First, there is no genuine $n$ (or related sample size) in the picture so the Laplace approximation cannot be universally adequate. The only thing that can replace $n$ in the integral $$ \int_a^b \dfrac{x^{\alpha-1}}{\Gamma(\alpha)\beta^\alpha}e^{-x/\beta}\text{d}x$$ is $(b-a)^{-1}$, in the sense that the approximation becomes better and better as $(b-a)$ goes to zero. This is under the provision that the mode $\hat{x}_\theta$ belongs to the interval $(a,b)$.

Second, as you correctly wondered, the constant $\Gamma(\alpha)\beta^\alpha$ in the denominator has truly and inexplicably (!) been forgotten in the final formula, which should thus be \begin{align}\int_a^b &\dfrac{x^{\alpha-1}}{\Gamma(\alpha)\beta^\alpha}e^{-x/\beta}\text{d}x\,\approx \\ &\dfrac{\hat{x}_\theta^{\alpha-1}e^{-\hat{x}_\theta/\beta}}{\Gamma(\alpha)\beta^\alpha}\sqrt{\frac{2\pi \hat{x}_\theta^2}{\alpha-1}}\left\{\Phi\left(\sqrt{\frac{\alpha-1}{\hat{x}_\theta^2}} (b-\hat{x}_\theta)\right)-\Phi\left(\sqrt{\frac{\alpha-1}{\hat{x}_\theta^2}} (a-\hat{x}_\theta)\right)\right\}\end{align}

Third, the formula at the top of page 110 is missing two minus ($-$) signs, as it should in truth be $$h(x)\approx -\frac{\hat{x}_\theta}{\beta}+(\alpha-1)\log \hat{x}_\theta-\frac{\alpha-1}{2\hat{x}_\theta^2}(x-\hat{x}_\theta)^2$$

With these multiple corrections, the table comparing the Laplace version with the exact coverage can be reconstructed, for instance via the following R code:

    xop <- function(al, be) { (al - 1) * be }   # mode of the Gamma(shape = al, scale = be) density

    lapl <- function(al, be, a, b) {            # Laplace approximation of the interval probability
      hatx <- xop(al, be)
      hatx^(al - 1) * exp(-hatx / be) * sqrt(2 * pi) * hatx / sqrt(al - 1) *
        (pnorm(sqrt(al - 1) * (b - hatx) / hatx) - pnorm(sqrt(al - 1) * (a - hatx) / hatx)) /
        be^al / factorial(al - 1)
    }

    xact <- function(al, be, a, b) {            # exact probability
      pgamma(b, al, 1 / be) - pgamma(a, al, 1 / be)
    }

    > xact(5, 2, 7, 9)
    [1] 0.1933414
    > lapl(5, 2, 7, 9)
    [1] 0.1933507

Once again, my most sincere apologies for this poor attention to details in this example!
Constant of Laplace approximation
This example is indeed rather poorly conducted and full of typos, apologies from one author!!! First, there is no genuine $n$ (or related sample size) in the picture so the Laplace approximation cann
Constant of Laplace approximation This example is indeed rather poorly conducted and full of typos, apologies from one author!!! First, there is no genuine $n$ (or related sample size) in the picture so the Laplace approximation cannot be universally adequate. The only thing that can replace $n$ in the integral $$ \int_a^b \dfrac{x^{\alpha-1}}{\Gamma(\alpha)\beta^\alpha}e^{-x/\beta}\text{d}x$$ is $(b-a)^{-1}$, in the sense that the approximation becomes better and better as $(b-a)$ goes to zero. This is under the provision that the mode $\hat{x}_\theta$ belongs to the interval $(a,b)$. Second, as you correctly wondered, the constant $\Gamma(\alpha)\beta^\alpha$ in the denominator has truly and inexplicably (!) been forgotten in the final formula, which should thus be \begin{align}\int_a^b &\dfrac{x^{\alpha-1}}{\Gamma(\alpha)\beta^\alpha}e^{-x/\beta}\text{d}x\,\approx \\ &\dfrac{\hat{x}_\theta^{\alpha-1}e^{-\hat{x}_\theta/\beta}}{\Gamma(\alpha)\beta^\alpha}\sqrt{\frac{2\pi \hat{x}_\theta^2}{\alpha-1}}\left\{\Phi\left(\sqrt{\frac{\alpha-1}{2\pi \hat{x}_\theta^2}} (b-\hat{x}_\theta)\right)-\Phi\left(\sqrt{\frac{\alpha-1}{2\pi \hat{x}_\theta^2}} (a-\hat{x}_\theta)\right)\right\}\end{align} Third, the formula at the top of page 110 is missing two minus ($-$) signs, as it should in truth be $$h(x)\approx -\frac{\hat{x}_\theta}{\beta}+(\alpha-1)\log \hat{x}_\theta-\frac{\alpha-1}{2\hat{x}_\theta^2}(x-\hat{x}_\theta)^2$$ With these multiple corrections, the table comparing the Laplace version with the exact coverage can be reconstructed, for instance via the following R code: xop <- function(al,be){(al-1)*be} lapl <- function(al,be,a,b){ hatx=xop(al,be) hatx^{al-1}*exp(-hatx/be)*sqrt(2*pi)*hatx/sqrt(al-1)* (pnorm(sqrt(al-1)*(b-hatx)/hatx)-pnorm(sqrt(al-1)*(a-hatx)/hatx))/ be^al/factorial(al-1) } xact <- function(al,be,a,b){ pgamma(b,al,1/be)-pgamma(a,al,1/be)} > xact(5,2,7,9) [1] 0.1933414 > lapl(5,2,7,9) [1] 0.1933507 Once again, my most sincere apologies for this poor attention to details in this example!
Constant of Laplace approximation This example is indeed rather poorly conducted and full of typos, apologies from one author!!! First, there is no genuine $n$ (or related sample size) in the picture so the Laplace approximation cann
54,158
Survival Analysis or Not?
I agree that if there is no censoring, it is probably not really necessary to use survival analysis. However, I would point out that the goal of your work is very important. If you just want to make a predictive model, then it does not really matter which method you use as long as it gives you good results (it is up to you how you define "good"). If you want the model to be more descriptive and to understand which variables affect the dependent variable and how, I would go as simple as possible so that it is easily interpretable. The final model, in this case, should reflect your hypothesis about how the data generating process (DGP) actually works. You can try OLS, a GLM or some non-linear method, for example, but you need to decide which one would make the most sense for your DGP.
Survival Analysis or Not?
I agree that if there is no censoring, it is probably not really necessary to use the survival analysis. However, I would point out that the goal of your work is very important. If you just want to ma
Survival Analysis or Not? I agree that if there is no censoring, it is probably not really necessary to use the survival analysis. However, I would point out that the goal of your work is very important. If you just want to make a predictive model, then it actually does not matter which method you use as long as it gives you good (it is up to you how you define it) results. If you want to make the model more descriptive and understand which variables and how they effect the dependent variable, I would go as simple as possible so that it is easily interpretable. The final model, in this case, should reflect on your hypothesis how actually the data generating process (DGP) works . Well, you can try OLS, GLM or some f.e. non-linear method, but you need to decide, which one would make the most sense for your DGP.
Survival Analysis or Not? I agree that if there is no censoring, it is probably not really necessary to use the survival analysis. However, I would point out that the goal of your work is very important. If you just want to ma
54,159
Survival Analysis or Not?
Survival analysis does not require that your data be censored, though not having censored data certainly does give you substantially more options regarding your choice of models. The main factors that you should use to determine whether survival analysis is appropriate are: Does the "time" component fit the distribution you want to use? From what you said, it looks like it does. Does the proportional hazards assumption hold? If it does not, there are alternatives, though they are more complex to model and interpret. For instance, I often use the Royston and Parmar model (Royston & Parmar, 2002), which uses restricted cubic splines to estimate the survival distribution. So given the information you provided, I would not recommend against the use of a survival model, though for me to make a specific recommendation I would need more information. References Royston, P., & Parmar, M. K. (2002). Flexible parametric proportional-hazards and proportional-odds models for censored survival data, with application to prognostic modelling and estimation of treatment effects. Statistics in Medicine, 21(15), 2175-2197.
Survival Analysis or Not?
Survival Analysis does not require that your data be censored. Though not having censored data certainty does give you substantially more options regarding your choice of models. The main factors that
Survival Analysis or Not? Survival Analysis does not require that your data be censored. Though not having censored data certainty does give you substantially more options regarding your choice of models. The main factors that you should use to determine whether survival analysis is appropriate are: Does the "time" component fit the distribution you want to use? Which from what you said, it looks like it does. Does the proportional hazard assumption hold? If it does not, there are alternatives, though they are more complex to model and interpret. For instance, I often use the Royston and Parmar model (Royston & Parmar, 2022), which uses restricted cubic splines to estimate the survival distribution. So given the information you provided, I would not recommend against the use of a survival model. Though for me to make a specific recommendation, I would need more information. References Royston, P., & Parmar, M. K. (2002). Flexible parametric proportional‐hazards and proportional‐odds models for censored survival data, with application to prognostic modeling and estimation of treatment effects. Statistics in medicine, 21(15), 2175-2197.
Survival Analysis or Not? Survival Analysis does not require that your data be censored. Though not having censored data certainty does give you substantially more options regarding your choice of models. The main factors that
54,160
Survival Analysis or Not?
Stripped down to its mathematical technicalities, survival analysis is essentially just the analysis of continuous non-negative random variables, and certain common compositions of these random variables (e.g., looking at minima or maxima of these random variables). While survival analysis can accommodate censored data, it is not necessary for censoring to occur to fall within the scope of survival analysis. Survival analysis generally focuses on aspects of continuous non-negative random variables that are most useful in contexts involving times-to-failure of objects (e.g., hazard rates, etc.), but it also interfaces with other areas of statistics when they use continuous non-negative random variables. The problem you have described is a regression problem where your response variable is a continuous non-negative random variable. You want to know the effect of a binary intervention on your time response variable (both its direct effect and its interaction effect with other covariates). While the response variable is amenable to methods in survival analysis, this is primarily a regression problem and it would use standard methods for regression with a non-negative response variable (e.g., log-linear regression).
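For instance, a minimal sketch of such a log-linear regression (the data frame, the variable names and the simulated coefficients below are entirely made up; with real data you would of course also check the residuals):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: a positive time response, a binary intervention and one covariate.
rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "treat": rng.integers(0, 2, n),
    "x": rng.normal(size=n),
})
df["time"] = np.exp(0.5 + 0.3 * df.treat - 0.2 * df.x
                    + 0.4 * df.treat * df.x + rng.normal(scale=0.5, size=n))

# Log-linear regression: model log(time) with the intervention, the covariate
# and their interaction; coefficients act multiplicatively on the time scale.
fit = smf.ols("np.log(time) ~ treat * x", data=df).fit()
print(fit.summary())
```

The coefficient on the interaction term then tells you how the multiplicative effect of the intervention on the time response changes with the covariate.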
Survival Analysis or Not?
Stripped down to its mathematical technicalities, survival analysis is essentially just the analysis of continuous non-negative random variables, and certain common compositions of these random variab
Survival Analysis or Not? Stripped down to its mathematical technicalities, survival analysis is essentially just the analysis of continuous non-negative random variables, and certain common compositions of these random variables (e.g., looking at minimums or maximums of these random variables). While survival analysis can accommodate censored data, it is not necessary for censoring to occur to fall within the scope of survival analysis. Survival analysis generally focuses on aspects of continuous non-negative random variables that are most using in contexts involving times-to-failure of objects (e.g., hazard rates, etc.), but it also interfaces with other areas of statistics when they use continuous non-negative random variables. The problem you have described is a regression problem where your response variable is a continuous non-negative random variable. You want to know the effect of a binary intervention on your time response variable (both its direct effect and its interaction effect with other covariates). While the response variable is amenable to methods in survival analysis, this is primarily a regression problem and it would use standard methods for regression with a non-negative response variable (e.g., log-linear regression).
Survival Analysis or Not? Stripped down to its mathematical technicalities, survival analysis is essentially just the analysis of continuous non-negative random variables, and certain common compositions of these random variab
54,161
How to find the expectation $\mathbb{E} \left[ \frac{|h|^4}{|h+w|^2} \right]$?
The expectation is infinite. One way to see this is to condition on $H$. Preliminary changes of variable (merely involving rescaling $H$ and $W$ and then shifting to a new origin) reduce the conditional expectation to a positive constant times a two-dimensional integral of the form $$\mathcal{I}(\lambda)=\iint_{\mathbb{C}}\ \frac{1}{|z|^2} e^{-\lambda |z-1|^2}\ dz d\bar z$$ with $\lambda \gt 0.$ In polar coordinates $(r,\theta),$ $|z|^2 = r^2$ and $|z-1|^2 = r^2 - 2r\cos(\theta)+1,$ and the area element is $dzd\bar{z} = r dr d\theta,$ giving $$\mathcal{I}(\lambda) = e^{-\lambda}\int_0^{2\pi}d\theta \int_0^\infty \frac{1}{r^2}e^{-\lambda(r^2 - 2r\cos\theta)}\ r\, dr.$$ For $0 \le r \le \sqrt{1 + 1/\lambda} -1 = u(\lambda)\gt 0,$ the expression in the exponent exceeds $-1,$ so we may underestimate this integral by replacing the exponential by $e^{-1}$ and limiting $r$ to this range: $$\mathcal{I}(\lambda) \ge e^{-\lambda-1}\int_0^{2\pi}d\theta \int_0^{u(\lambda)}\frac{1}{r}dr = 2\pi e^{-\lambda-1} \lim_{\epsilon\to 0} \int_\epsilon^{u(\lambda)} \frac{dr}{r}\ \propto\ \lim_{\epsilon\to 0}\log(u(\lambda)) - \log(\epsilon),$$ which diverges to $+\infty.$ Since all conditional expectations are infinite, the expectation must be infinite. A simulation bears this out. For simplicity I chose $H$ and $W$ to have independent standard (complex) Normal distributions, generated twenty million realizations $(h,w),$ and computed the running mean of $|h|^4/|h+w|^2.$ The periodic large jumps are characteristic of a divergent expectation: no matter how far out you run this simulation, these jumps will recur (whenever a tiny value of $|w+h|$ is generated compared to $|h|^2$) and its mean will never converge. This plot shows the running mean "Mean" as a function of the number of simulated values "N" for $n=10^4$ through $n=2\times 10^7.$ Colors highlight the largest jumps. Evidently one could be fooled by relying on a simulation to estimate the mean: notice how the purple segment from $N\approx 508,000$ to $N\approx 9,300,000$ seems to settle down--only to be followed by a large jump. This indicates that the simulation-based estimate depends entirely on when you choose to end the simulation.
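A scaled-down sketch of that simulation (my own seed, sample size and plotting choices, not necessarily those used for the original figure) reproduces the behaviour described:

```python
import numpy as np
import matplotlib.pyplot as plt

# Running mean of |h|^4 / |h+w|^2 with h, w independent standard complex normals.
rng = np.random.default_rng(2)
n = 2_000_000   # far fewer than the twenty million above, but enough to see jumps
h = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
w = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
ratio = np.abs(h)**4 / np.abs(h + w)**2
running_mean = np.cumsum(ratio) / np.arange(1, n + 1)

plt.plot(running_mean)
plt.xlabel("N")
plt.ylabel("Mean")
plt.show()   # the occasional huge jumps never die out: the running mean does not converge
```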
How to find the expectation $\mathbb{E} \left[ \frac{|h|^4}{|h+w|^2} \right]$?
The expectation is infinite. One way to see this is to condition on $H$. Preliminary changes of variable (merely involving rescaling $H$ and $W$ and then shifting to a new origin) reduce the conditio
How to find the expectation $\mathbb{E} \left[ \frac{|h|^4}{|h+w|^2} \right]$? The expectation is infinite. One way to see this is to condition on $H$. Preliminary changes of variable (merely involving rescaling $H$ and $W$ and then shifting to a new origin) reduce the conditional expectation to a positive constant times a two-dimensional integral of the form $$\mathcal{I}(\lambda)=\iint_{\mathbb{C}}\ \frac{1}{|z|^2} e^{-\lambda |z-1|^2}\ dz d\bar z$$ with $\lambda \gt 0.$ In polar coordinates $(r,\theta),$ $|z|^2 = r^2$ and $|z-1|^2 = r^2 - 2r\cos(\theta)+1,$ and the area element is $dzd\bar{z} = r dr d\theta,$ giving $$\mathcal{I}(\lambda) = e^{-\lambda}\int_0^{2\pi}d\theta \int_0^\infty \frac{1}{r^2}e^{-\lambda(r^2 - 2r\cos\theta)}\ r\, drd.$$ For $0 \le r \le \sqrt{1 + 1/\lambda} -1 = u(\lambda)\gt 0,$ the expression in the exponent exceeds $-1,$ so we may underestimate this integral by replacing the exponential by $e^{-1}$ and limiting $r$ to this range: $$\mathcal{I}(\lambda) \ge e^{-\lambda-1}\int_0^{2\pi}d\theta \int_0^{u(\lambda)}\frac{1}{r}dr = 2\pi e^{-\lambda-1} \lim_{\epsilon\to 0} \int_\epsilon^{u(\lambda)} \frac{dr}{r}\ \propto\ \lim_{\epsilon\to 0}\log(u(\lambda)) - \log(\epsilon),$$ which diverges to $+\infty.$ Since all conditional expectations are infinite, the expectation must be infinite. A simulation bears this out. For simplicity I chose $H$ and $W$ to have independent standard (Complex) Normal normal distributions, generated twenty million realizations $(h,w),$ and computed the running mean of $|h|^4/|h+w|^2.$ The periodic large jumps are characteristic of a divergent expectation: no matter how far out you run this simulation, these jumps will recur (whenever a tiny value of $|w+h|$ is generated compared to $|h|^2$) and its mean will never converge. This plot shows the running mean "Mean" as a function of the number of simulated values "N" for $n=10^4$ through $n=2\times 10^7.$ Colors highlight the largest jumps. Evidently one could be fooled by relying on a simulation to estimate the mean: notice how the purple segment from $N\approx 508,000$ to $N\approx 9,300,000$ seems to settle down--only to be followed by a large jump. This indicates that the simulation-based estimate depends entirely on when you choose to end the simulation.
How to find the expectation $\mathbb{E} \left[ \frac{|h|^4}{|h+w|^2} \right]$? The expectation is infinite. One way to see this is to condition on $H$. Preliminary changes of variable (merely involving rescaling $H$ and $W$ and then shifting to a new origin) reduce the conditio
54,162
In R package mgcv, is it valid to have a random effect smooth on two continuous variables?
Huh... I made my post as a guest on SO because I am still on a suspension, but then the question got migrated here! So, if I understand you correctly, there's not really any similarity between the smooth s(x1, x2) and the random effect s(x1, x2, fac, bs = "re"), correct? Correct. The function name "s" does not mean "smooth function" when s() is used to construct a random effect. Broadly speaking, s() is just a model term constructor routine that constructs a design matrix and a penalty matrix. What I was envisioning was something smoothing in 2 dimensions like the former, but with some deviations from the average by factor level. You can get separate smooths per factor level using s(x1, x2, by=fac), but that completely separates the data for each factor level, rather than doing some partial pooling. s(x1, x2, by = fac) gives you something pretty close to what you want, except that, as you said, data from different factor levels are treated independently. Technically, "close" means that s(x1, x2, by = fac) gives you the correct design matrix but not the correct penalty matrix. In this regard, you are probably aiming at te(x1, x2, fac, d = c(2, 1), bs = c("tp", "re")). I have never seen such a model term before, but its construction is definitely possible in mgcv: library(mgcv) x1 <- runif(1000) x2 <- runif(1000) f <- gl(5, 200) ## "smooth.spec" object smooth_spec <- te(x1, x2, f, d = c(2, 1), bs = c("tp", "re")) ## "smooth" object sm <- smooth.construct(smooth_spec, data = list(x1 = x1, x2 = x2, f = f), knots = NULL) You can check that this smooth term has 2 smoothing parameters as expected, one for the s(x1, x2, bs = 'tp') margin, the other for the s(f, bs = 're') margin. Specification of k turns out to be subtle: you need to explicitly pass nlevels(f) to the random effect margin. For example, if you want a rank-10 thin-plate regression spline, ## my example factor `f` has 5 levels smooth_spec <- te(x1, x2, f, d = c(2, 1), bs = c("tp", "re"), k = c(10, 5)) sapply(smooth_spec$margin, "[[", "bs.dim") # [1] 10 5 At first I was thinking that perhaps we could simply pass NA to the random effect margin, but it turns out that this does not work! smooth_spec <- te(x1, x2, f, d = c(2, 1), bs = c("tp", "re"), k = c(10, NA)) sapply(smooth_spec$margin, "[[", "bs.dim") # [1] 25 5 ## ?? why is it 25? something has gone wrong! This might imply that there is a tiny bug... I will have a check when I get a chance.
In R package mgcv, is it valid to have a random effect smooth on two continuous variables?
Huh... I made my post as a guest on SO because I am still on a suspension, but then the question got migrated here! So, if I understand you correctly, there's not really any similarity between the sm
In R package mgcv, is it valid to have a random effect smooth on two continuous variables? Huh... I made my post as a guest on SO because I am still on a suspension, but then the question got migrated here! So, if I understand you correctly, there's not really any similarity between the smooth s(x1, x2) and the random effect s(x1, x2, fac, bs = "re"), correct? Correct. The function name "s" does not mean "smooth function" when s() is used to construct a random effect. Broadly speaking, s() is just a model term constructor routine that constructs a design matrix and a penalty matrix. What I was envisioning was something smoothing in 2 dimensions like the former, but with some deviations from the average by factor level. You can get separate smooths per factor level using s(x1, x2, by=fac), but that completely separates the data for each factor level, rather than doing some partial pooling. s(x1, x2, by = fac) gives you something pretty close to what you want, except that as you said, data from different factor levels are treated independently. Technically, "close" means that s(x1, x2, by = fac) gives you the correct design matrix but not the correct penalty matrix. In this regard, you are probably aiming at te(x1, x2, fac, d = c(2, 1), bs = c("tp", "re")). I have never seen such model term before, but its construction is definitely possible in mgcv: library(mgcv) x1 <- runif(1000) x2 <- runif(1000) f <- gl(5, 200) ## "smooth.spec" object smooth_spec <- te(x1, x2, f, d = c(2, 1), bs = c("tp", "re")) ## "smooth" object sm <- smooth.construct(smooth_spec, data = list(x1 = x1, x2 = x2, f = f), knots = NULL) You can check that this smooth term has 2 smoothing parameters as expected, one for the s(x1, x2, bs = 'tp') margin, the other for the s(f, bs = 're') margin. Specification of k turns out subtle. You need to explicitly pass nlevels(f) to the random effect margin. For example, if you want a rank-10 thin-plate regression spline, ## my example factor `f` has 5 levels smooth_spec <- te(x1, x2, f, d = c(2, 1), bs = c("tp", "re"), k = c(10, 5)) sapply(smooth_spec$margin, "[[", "bs.dim") # [1] 10 5 At first I was thinking that perhaps we can simply pass NA to the random effect margin, but it turns out not! smooth_spec <- te(x1, x2, f, d = c(2, 1), bs = c("tp", "re"), k = c(10, NA)) sapply(smooth_spec$margin, "[[", "bs.dim") # [1] 25 5 ## ?? why is it 25? something has gone wrong! This might implies that there is a tiny bug... will have a check when available.
In R package mgcv, is it valid to have a random effect smooth on two continuous variables? Huh... I made my post as a guest on SO because I am still on a suspension, but then the question got migrated here! So, if I understand you correctly, there's not really any similarity between the sm
54,163
In R package mgcv, is it valid to have a random effect smooth on two continuous variables?
Consider the random effect part of your example toy <- gam(y ~ s(x0, fac, bs = "re") + s(x1, x2, fac, bs="re"), data = dat, method = "REML") This is just the penalized version of the following linear regression model: toy.lm <- lm(y ~ x0:fac + x1:x2:fac, data = dat) where a ridge penalty is applied to x0:fac and x1:x2:fac. The construction of a simple random effect in mgcv::gam or mgcv::bam is fairly routine: generate the design matrices X1 <- model.matrix(~x0:fac - 1, data = dat) X2 <- model.matrix(~x1:x2:fac - 1, data = dat) and generate the ridge penalty matrices S2 <- S1 <- diag(nlevels(dat$fac))
In R package mgcv, is it valid to have a random effect smooth on two continuous variables?
Consider the random effect part of your example toy <- gam(y ~ s(x0, fac, bs = "re") + s(x1, x2, fac, bs="re"), data = dat, method = "REML") This is just the penalized version of the follo
In R package mgcv, is it valid to have a random effect smooth on two continuous variables? Consider the random effect part of your example toy <- gam(y ~ s(x0, fac, bs = "re") + s(x1, x2, fac, bs="re"), data = dat, method = "REML") This is just the penalized version of the following linear regression model: toy.lm <- lm(y ~ x0:fac + x1:x2:fac, data = dat) where a ridge penalty is applied to x0:fac and x1:x2:fac. The construction of simple random effect in mgcv::gam or mgcv::bam is fairly routine: generate design matrix from X1 <- model.matrix(~x0:fac - 1, data = dat) X2 <- model.matrix(~x1:x2:fac - 1, data = dat) generate ridge penalty matrix S2 <- S1 <- diag(nlevels(dat$fac))
In R package mgcv, is it valid to have a random effect smooth on two continuous variables? Consider the random effect part of your example toy <- gam(y ~ s(x0, fac, bs = "re") + s(x1, x2, fac, bs="re"), data = dat, method = "REML") This is just the penalized version of the follo
54,164
Why not set a static first layer in CNN?
My question is why don't we just set the first layer with static filters that find various angles of lines, and only train the rest? Based on your description, what you are suggesting is called an Extreme Learning Machine (ELM). These are specific types of feed-forward neural networks that differ from the rest in that their hidden layers are fixed (not trained); only the output layer is trained. The hidden layers are randomly initialized (or manually set based on heuristics) and don't change during training. The idea is that, just like you say, by doing this one can train considerably faster than having to optimize all hidden layers and nodes. According to some literature they tend to generalize better (as they are less prone to overfitting) and even outperform some other methods. I would also guess that this principle could be extended or modified so that you leave an arbitrary number of layers fixed while training the others; if we leave the first layer fixed and train the others, we get the specific situation you illustrated. Furthermore, a way to implement this is to "freeze layers", a feature some APIs already have, like Tensorflow (check this question for several alternatives). The most straightforward option mentioned there is to set trainable=False on such variables.
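As a rough illustration of the freezing idea (the toy architecture and the hand-crafted edge kernels below are my own choices, not a recommended design), in Keras you can set the first convolutional layer's weights and mark it non-trainable before compiling:

```python
import numpy as np
import tensorflow as tf

# Sketch: fix the first conv layer to hand-made edge filters and freeze it,
# so only the remaining layers are trained.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(2, (3, 3), padding="same", input_shape=(28, 28, 1)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Two 3x3 edge-detection kernels (horizontal and vertical), shape (3, 3, in_ch, out_ch).
horiz = np.array([[-1, -1, -1], [0, 0, 0], [1, 1, 1]], dtype="float32")
vert = horiz.T
kernels = np.stack([horiz, vert], axis=-1)[:, :, None, :]            # (3, 3, 1, 2)
model.layers[0].set_weights([kernels, np.zeros(2, dtype="float32")])
model.layers[0].trainable = False   # freeze the static first layer before compiling

model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()   # the first layer's parameters now appear as non-trainable
```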
Why not set a static first layer in CNN?
My question is why don't we just set the first layer with static filters that find various angles of lines, and only train the rest? Based on your description, what you are suggesting is called Extre
Why not set a static first layer in CNN? My question is why don't we just set the first layer with static filters that find various angles of lines, and only train the rest? Based on your description, what you are suggesting is called Extreme Learning Machine (ELM). These are specific types of feed-forward neural networks that basically are different from the rest by having their hidden layers fixed (not trained), and instead just train to adjust the output layer. These layers are randomly initialized (or manually set based on heuristics) and don't change during training. The idea is that, just like you say, by doing this one could train considerably faster than having to optimize for all hidden layers and nodes. According to some literature they tend to generalize better (as they are less prone to overfitting) and even outperform some other methods. I would also guess that this principle could be extended or modified, so you leave an arbitrary number of layers fixed, while training the others; if we leave fixed the first layer and train for the others we get the specific situation you illustrated. Furthermore, a way to implement this I can think of is to "freeze layers", a feature some APIs have already, like Tensorflow (check this question for several alternatives). The most straightforward option there mentioned is to set trainable=False on such variables.
Why not set a static first layer in CNN? My question is why don't we just set the first layer with static filters that find various angles of lines, and only train the rest? Based on your description, what you are suggesting is called Extre
54,165
Why not set a static first layer in CNN?
It seems like a lot of work to me for minimal benefit. Having one more layer to backprop through when there may already be many tens of layers means that hard-coding the filters doesn't help much performance-wise. In addition, you only artificially limit yourself -- we know CNNs tend to learn Gabor filters in the first layer -- but what if you have an unusual dataset? Or you're exploring a new architecture? Etc. At best, you can hope that by hard-coding the first layer you're not missing out on an even better solution in parameter space. I think this is what you meant when you said transfer learning: taking the pretrained weights from one model and using them to jumpstart the weights of another network is an effective strategy, and pretty similar to the static-filters idea without the downsides.
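For comparison, a typical transfer-learning sketch in Keras looks like this (the choice of MobileNetV2, the input size and the number of classes are arbitrary, just to illustrate reusing pretrained weights with a frozen base):

```python
import tensorflow as tf

# Reuse a pretrained convolutional base and train only a new head on top of it.
base = tf.keras.applications.MobileNetV2(
    input_shape=(96, 96, 3), include_top=False, weights="imagenet")
base.trainable = False          # the pretrained filters stay fixed

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(5, activation="softmax"),   # 5 classes, made up for the example
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```

After the new head has converged you can optionally unfreeze some of the base layers and fine-tune with a small learning rate.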
Why not set a static first layer in CNN?
It seems like a lot of work to me for minimal benefit. Having one more layer to backprop through when there may already be many tens of layers means that it doesn't help to hard-code the filters perfo
Why not set a static first layer in CNN? It seems like a lot of work to me for minimal benefit. Having one more layer to backprop through when there may already be many tens of layers means that it doesn't help to hard-code the filters performance wise. In addition, you only artificially limit yourself -- we know CNNs tend to learn gabor filters in the first layer -- but what if you have an unusual dataset? Or you're exploring a new architecture? Etc. At best, you can hope that by hard-coding in the first layer, you're not missing out on an even better solution in parameter-space. And I think this is what you meant when you said transfer learning, but taking the pretrained weights from one model and using it to jumpstart the weights of another network is an effective strategy, and pretty similar to the static filters idea without the downsides.
Why not set a static first layer in CNN? It seems like a lot of work to me for minimal benefit. Having one more layer to backprop through when there may already be many tens of layers means that it doesn't help to hard-code the filters perfo
54,166
Why not set a static first layer in CNN?
The following quote from a 2002 review paper explains the role of preset initial layers in neural networks for image processing. "According to Perlovsky, the key to restraining the highly flexible learning algorithms for ANNs, lies in the very combination with prior (geometric) knowledge. However, most pattern recognition methods do not even use the prior information that neighbouring pixel/voxel values are highly correlated. This problem can be circumvented by extracting features from images first, by using distance or error measures on pixel data which do take spatial coherency into account, or by designing an ANN with spatial coherency [LeCun1989] or contextual relations between objects in mind" [Egmont-Petersen2002]. [LeCun1989] Y. LeCun, L.D. Jackel, B. Boser et al., Handwritten digit recognition—applications of neural network chips and automatic learning, IEEE Commun. Mag. 27 (11) (1989) 41–46. [Egmont-Petersen2002] M. Egmont-Petersen, D. de Ridder, H. Handels. Image processing with neural networks - a review, Pattern Recognition, Vol. 35, No. 10, pp. 2279-2301, 2002.
Why not set a static first layer in CNN?
The following quote from the 2002 review paper explains the role of preset initial layers in neural networks for image processing. "According to Perlovsky, the key to restraining the highly Flexible l
Why not set a static first layer in CNN? The following quote from the 2002 review paper explains the role of preset initial layers in neural networks for image processing. "According to Perlovsky, the key to restraining the highly Flexible learning algorithms for ANNs, lies in the very combination with prior (geometric) knowledge. However, most pattern recognition methods do not even use the prior information that neighbouring pixel=voxel values are highly correlated. This problem can be circumvented by extracting features from images first, by using distance or error measures on pixel data which do take spatial coherency into account, or by designing an ANN with spatial coherency [LeCun1989] or contextual relations between objects in mind" [Egmont-Petersen2002]. [LeCun1989] Y. LeCun, L.D. Jackel, B. Boser et al., Handwritten digit recognition—applications of neural network chips and automatic learning, IEEE Commun. Mag. 27 (11) (1989) 41–46. [Egmont-Petersen2002] M. Egmont-Petersen, D. de Ridder, H. Handels. Image processing with neural networks - a review, Pattern Recognition, Vol. 35, No. 10, pp. 2279-2301, 2002.
Why not set a static first layer in CNN? The following quote from the 2002 review paper explains the role of preset initial layers in neural networks for image processing. "According to Perlovsky, the key to restraining the highly Flexible l
54,167
KL divergence invariant to affine transformation?
There are a few mistakes in your math. For example, when you expand the expectation, it seems you dropped the integral and also the $P_1(x)$ term. Write $y(x) = mx + c$. Recall that $P(x) dx = P(y) dy$. This is easy to see since $dy/dx = m$, so that $P(x) = mP(y)$. Then we can go through with the change-of-variables proof (the one given on Wikipedia), which shows that the KL divergence is invariant; a sketch of that argument is given below.
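A sketch of that change-of-variables argument for $y = mx + c$ with $m \neq 0$ (using $P_i(x) = |m|\,P_i(y)$ and $dy = |m|\,dx$):
\begin{align*} KL(P_1(y)\|P_2(y)) &= \int P_1(y)\log\frac{P_1(y)}{P_2(y)}\,dy = \int \frac{P_1(x)}{|m|}\log\frac{P_1(x)/|m|}{P_2(x)/|m|}\,|m|\,dx \\ &= \int P_1(x)\log\frac{P_1(x)}{P_2(x)}\,dx = KL(P_1(x)\|P_2(x)), \end{align*}
so the Jacobian factors cancel both inside the logarithm and against the change of the integration measure.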
KL divergence invariant to affine transformation?
There are a few mistakes in your math. For example, when you expand the expectation, it seems you dropped the integral and also the $P_1(x)$ term. Write $y(x) = mx + c$. Recall that $P(x) dx = P(y) dy
KL divergence invariant to affine transformation? There are a few mistakes in your math. For example, when you expand the expectation, it seems you dropped the integral and also the $P_1(x)$ term. Write $y(x) = mx + c$. Recall that $P(x) dx = P(y) dy$. This is easy to see since $dy/dx = m$ and it makes sense that $P(x) = mP(y)$. Then we can go through with this proof from wikipedia which shows KL is invariant:
KL divergence invariant to affine transformation? There are a few mistakes in your math. For example, when you expand the expectation, it seems you dropped the integral and also the $P_1(x)$ term. Write $y(x) = mx + c$. Recall that $P(x) dx = P(y) dy
54,168
KL divergence invariant to affine transformation?
I made a serious mistake while calculating the $KL$ divergence between the two 1D normal distributions. It is this mistake that causes me to doubt whether $KL$ divergence is invariant to affine transformation. Where did I make the mistake: When evaluating the expected value of $$(x' - \mu_1)^2$$ over the distribution $P_1(x')$, I made the mistake $$\int dx'P_1(x')(x'-\mu_1)^2 = \sigma_1^2$$ However, $P_1(x') = \mathcal N(\mu_1, \frac{\sigma_1^2}{\sigma^2})$, so $$\int dx'P_1(x')(x'-\mu_1)^2 = \frac{\sigma_1^2}{\sigma^2}$$ By making this correction, we will have $$KL(P_1(x')\|P_2(x')) = KL(P_1(x)\|P_2(x))$$ which means KL divergence is invariant to the affine transformation $x' = \mu_1 + \frac{1}{\sigma}(x - \mu_1)$.
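A quick numerical check of the corrected claim (the closed-form KL between two univariate normals is standard; the particular parameter values below are arbitrary):

```python
import numpy as np

def kl_normal(m1, s1, m2, s2):
    """KL( N(m1, s1^2) || N(m2, s2^2) ) in closed form."""
    return np.log(s2 / s1) + (s1**2 + (m1 - m2)**2) / (2 * s2**2) - 0.5

m1, s1, m2, s2, s = 1.0, 2.0, -0.5, 1.5, 3.0   # arbitrary values; s is the scaling sigma

# Apply x' = m1 + (x - m1)/s to both distributions and compare the divergences.
kl_before = kl_normal(m1, s1, m2, s2)
kl_after = kl_normal(m1, s1 / s, m1 + (m2 - m1) / s, s2 / s)
print(np.isclose(kl_before, kl_after))   # True: the affine map leaves KL unchanged
```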
KL divergence invariant to affine transformation?
I made a serious mistake while calculating the $KL$ divergence between the two 1D normal distributions. It is this mistake that causes me to doubt whether $KL$ divergence is invariant to affine transf
KL divergence invariant to affine transformation? I made a serious mistake while calculating the $KL$ divergence between the two 1D normal distributions. It is this mistake that causes me to doubt whether $KL$ divergence is invariant to affine transformation. Where did I make the mistake: When evaluating the expected value of $$(x' - \mu_1)^2$$ over the distribution $P_1(x')$, I made the mistake $$\int dx'P_1(x')(x'-\mu_1)^2 = \sigma_1^2$$ However, $P_1(x') = \mathcal N(\mu_1, \frac{\sigma_1^2}{\sigma^2})$, so $$\int dx'P_1(x')(x'-\mu_1)^2 = \frac{\sigma_1^2}{\sigma^2}$$ By making this correction, we will have $$KL(P_1(x')\|P_2(x')) = KL(P_1(x)\|P_2(x))$$ which means KL divergence is invariant to the affine transformation $x' = \mu_1 + \frac{1}{\sigma}(x - \mu_1)$.
KL divergence invariant to affine transformation? I made a serious mistake while calculating the $KL$ divergence between the two 1D normal distributions. It is this mistake that causes me to doubt whether $KL$ divergence is invariant to affine transf
54,169
Mixed-effect model single term deletion -- should I change my random effects?
As a rule for lme4 and other packages with a similar parameterization (at least at the level of the user interface), it does not make sense to have random slopes for terms not present in the fixed effects. The reason for this is straightforward: the random effects (or more precisely, the BLUPs / conditional modes) are computed as offsets from the population-level / fixed effects. So if a given fixed effect is missing, then this is equivalent to assuming that the population-level effect is zero. This is a rather strong assumption and not one we generally want. It will also mess up the estimation of the variance (the actual critical part for random effects, which are in other contexts called variance components), because the variance is calculated as the mean squared distance to the mean and if your assumed mean doesn't match the actual one, then your variance will be wrong. (Note that this is part of the reason for calculating random effects as offsets from the population mean: it means that the random effects have mean equal to 0, so that part of the formula just cancels out.) As an example of the repercussions of this, consider the following two models: m <- lmer(Reaction ~ 1 + (1|Subject), sleepstudy) m.0 <- lmer(Reaction ~ 0 + (1|Subject), sleepstudy) (I'm using the intercept term here for simplicity instead of dealing with slopes, but the same ideas hold equally.) The random effects show that the models actually use the offsets: > ranef(m) $Subject (Intercept) 308 37.829172 309 -72.209815 310 -58.536726 330 4.087222 331 9.476087 ... > ranef(m.0) $Subject (Intercept) 308 341.3933 309 214.7671 310 230.5013 330 302.5651 331 308.7663 ... The first set includes negative values because some subjects are faster than the population average, while the second set includes only positive values because all subjects had positive reaction times. We can also extract the individual predictions by combining the offset and the population mean; lme4 will helpfully do this for you: > coef(m) $Subject (Intercept) 308 336.3371 309 226.2981 310 239.9712 330 302.5951 331 307.9840 ... (For m.0, this is of course identical to the random effects.) Note that these values do not match up with the random effects from m.0. This is important -- the random effects for both models are shrunk towards 0, but for m this corresponds to shrinking just the offsets, i.e. shrinking the individual predictions towards the (grand) mean. For m.0, this corresponds to shrinking the individual predictions towards 0. This will of course yield different results -- all the individual predictions in m.0 become smaller, but the individual predictions in m can become bigger or smaller, depending on whether an individual subject was faster or slower than the (grand) mean reaction time. The variance estimates also differ: > VarCorr(m) Groups Name Std.Dev. Subject (Intercept) 35.754 Residual 44.259 > VarCorr(m.0) Groups Name Std.Dev. Subject (Intercept) 300.505 Residual 44.259 Clearly m.0 is wrong in some rather fundamental sense: the standard deviation between subjects is not 300.505! Now, overall m.0 does a decent job of fitting the data (with a similar log likelihood to m), but it does so less efficiently (because the computational assumptions of the model are not met) and with parameter estimates that are incorrect/misleading.
Now, it is possible to parameterize mixed models such that random effects aren't centered (or "spherical") in this way, and indeed I believe brms uses a non-centered parameterization for its Stan code (there's something about the way the centered parameterization creates weird chokepoints in the critical set for Hamiltonian MCMC), but the formula interface for the most popular packages -- nlme, lme4, brms, rstanarm -- nonetheless requires a centered specification. Since you've recently discovered the different types of sums of squares, make sure to check out the following: Venables' Exegeses on Linear Models, which is often mentioned in such discussions, especially with regard to whether Type-III SS even examine interesting hypotheses (instead of the usual rant about whether they make "sense"); John Fox's excellent book Applied Regression Analysis & Generalized Linear Models, whose index conveniently has an entry "Marginality, principle of" with references to many different points in the text where issues related to this (and thus Type II vs III SS) come into play; car::Anova(), which can compute both Type II and III SS for lmer models, either using the $\chi^2$ distribution (i.e. treating the $F$ denominator degrees of freedom as infinite, analogous to treating $t$ values as $z$ values) or using the $F$ distribution with Kenward-Roger-approximated denominator degrees of freedom (car is an abbreviation for "Companion to Applied Regression"); and lmerTest::anova(), which will compute Type I, II and III sums of squares for lmer models, with options for using the Satterthwaite or Kenward-Roger approximations for the denominator degrees of freedom. Note that as of this writing, there is a major package rewrite in beta which generally improves computational efficiency (by caching the ddf approximations) compared to the current version on CRAN.
Mixed-effect model single term deletion -- should I change my random effects?
As a rule for lme4 and other packages with a similar parameterization (at least at the level of the user interface), it does not make sense to have random slopes for terms not present in the fixed eff
Mixed-effect model single term deletion -- should I change my random effects? As a rule for lme4 and other packages with a similar parameterization (at least at the level of the user interface), it does not make sense to have random slopes for terms not present in the fixed effects. The reason for this is straightforward: the random effects (or more precisely, the BLUPs / conditional modes) are computed as offsets from the population-level / fixed effects. So if a given fixed effect is missing, then this is equivalent to assuming that the population-level effect is zero. This is a rather strong assumption and not one we generally want. It will also mess up the estimation of the variance (the actual critical part for random effects, which are in other contexts called variance components), because the variance is calculated as the mean squared distance to the mean and if your assumed mean doesn't match the actual one, then your variance will be wrong. (Note that this is part of the reason for calculating random effects as offsets from the population mean: it means that the random effects have mean equal to 0, so that part of the formula just cancels out.) As an example of the repercussions of this, consider the following two models: m <- lmer(Reaction ~ 1 + (1|Subject), sleepstudy) m.0 <- lmer(Reaction ~ 0 + (1|Subject), sleepstudy) (I'm using the intercept term here for simplicity instead of dealing with slopes, but the same ideas hold equally.) The random effects show that the models actually use the offsets : > ranef(m) $Subject (Intercept) 308 37.829172 309 -72.209815 310 -58.536726 330 4.087222 331 9.476087 ... > ranef(m.0) $Subject (Intercept) 308 341.3933 309 214.7671 310 230.5013 330 302.5651 331 308.7663 ... The first set include negative values because some subjects are faster than the population average, while the second set includes only positive values because all subjects had positive reaction times. We can also extract the individual predictions by combining the offset and the population mean, lme4 will helpfully do this for you: > coef(m) $Subject (Intercept) 308 336.3371 309 226.2981 310 239.9712 330 302.5951 331 307.9840 ... (For m.0, this is of course identical to the random effects.) Note that these values do not match up with the random effects from m.0. This is important -- the random effects for both models are shrunk towards 0, but for m this corresponds shrinking just the offsets, i.e shrinking the individual predictions towards the (grand) mean. For m.0, this corresponds towards shrinking the individual predictions towards 0. This will of course yield different results -- all the individual predictions in m.0 become smaller, but the individual predictions in m can become bigger or smaller, depending on whether an individual subject was faster or slower than the (grand) mean reaction time. The variance estimates also differ: > VarCorr(m) Groups Name Std.Dev. Subject (Intercept) 35.754 Residual 44.259 > VarCorr(m.0) Groups Name Std.Dev. Subject (Intercept) 300.505 Residual 44.259 Clearly m.0 is wrong in some rather fundamental sense: the standard deviation between subjects is not 300.505! Now, overall m.0 does a decent job of fitting the data (with a similar log likelihood to m), but it does so less efficiently (because the computational assumptions of the model is not met) and with parameter estimates that are incorrect/misleading. 
Now, it is possible to parameterize mixed models such that random effects aren't centered (or "spherical") in this way, and indeed I believe brms uses a non-centered parameterization for its Stan code (there's something about the way the centered parameterization creates weird chokepoints in the critical set for Hamiltonian MCMC), but the formula interface for the most popular packages -- nlme, lme4, brms, rstanarm -- nonetheless requires a centered specification. Since you've recently discovered the different types of sums of squares, make sure to check out Venables' Exegeses on Linear Models, which is often mentioned in such discussions, especially with regards to whether Type-III SS even examine interesting hypothesis (instead of the usual rant about whether they make "sense"). John Fox's excellent book Applied Regression Analysis & Generalized Linear Models. The index coveniently has an entry "Marginality, principle of" with references to many different points in the text whether issues related to this (and thus Type II vs III SS) come to play. car::Anova() which can compute both Type II and III SS for lmer models, either using the $\chi^2$ distribution (i.e. treating the $F$ denominator degrees of freedom as infinite, analogous to treating $t$ values as $z$ values) or using $F$ distribution with Kenward-Roger approximated denominator degrees of freedom. (car is an abbreviation for "Companion to Applied Regression".) lmerTest::anova() which will compute Type I, II and III sums of squares for lmer models using with options for using the Satterthwaite or Kenward-Roger approximations for the denominator degrees of freedom. Note that as of this writing, there is a major package rewrite in beta which generally improves computational efficiency (by caching the ddf approximations) compared to the current version on CRAN.
Mixed-effect model single term deletion -- should I change my random effects? As a rule for lme4 and other packages with a similar parameterization (at least at the level of the user interface), it does not make sense to have random slopes for terms not present in the fixed eff
54,170
Identify the original variable used to calculate the dummies
This answer suggests there is value in a graphical exploration of relationships among the variables and illustrates one useful way. It then provides a simple solution that rapidly and automatically identifies all possible variables that might be represented by a given categorical variable. You can explore graphically by drawing a scatterplot matrix of the age categories and all numerical variables in the data frame. If there are many variables, summarize each by age and check that they fall into non-overlapping intervals. Here are some sample data for illustration. The grouping variable is V1, but let's pretend we don't know that. # # Create data for testing. # n.obs <- 100 n.vars <- 4 X <- as.data.frame(matrix(round(runif(n.obs*n.vars, 15, 40), 0), n.obs)) # # Create the dummary variable. # cutpoints <- c(-Inf, 20, 25, 30, Inf) Age <- data.frame(group=cut(X$V1, cutpoints)) This scatterplot matrix makes it obvious that variable V1 corresponds to the age groups in group, because it is the only variable that is clearly partitioned by a scatterplot in the group row or column: colors <- rainbow(length(cutpoints), alpha=0.6) names(colors) <- unique(Age$group) pairs(cbind(X, Age), pch=21, bg=colors[Age$group]) I recommend using this approach in any event because if no variables are found to match the age variable (as shown below), with this plot you may quickly identify any variables that almost match it. This can be useful for forensic activities such as identifying inconsistencies in a data table. An R implementation of a summary by age uses tapply to break data into groups by age and by to compute their ranges. If these are non-overlapping (as ordered by by), you have a candidate for a correspondence with age. # # Identify all columns of X that might match Age. # The result is a logical vector indicating which fields of X match. # candidates <- sapply(names(X), function(f) { groups <- tapply(X[[f]], Age) boundaries <- unlist(by(X[f], groups, range)) identical(order(boundaries), 1:length(boundaries)) }) message(paste0("Possible variables are (", paste(names(X)[candidates], collapse=","), ").")) The output is Possible variables are (V1). Although this example used data in the form usually stored in a database--namely, as a categorical variable--it will work without change when Age is a data frame in the format given in the question: unique rows of Age are used for grouping.
Identify the original variable used to calculate the dummies
This answer suggests there is value in a graphical exploration of relationships among the variables and illustrates one useful way. It then provides a simple solution that rapidly and automatically i
Identify the original variable used to calculate the dummies This answer suggests there is value in a graphical exploration of relationships among the variables and illustrates one useful way. It then provides a simple solution that rapidly and automatically identifies all possible variables that might be represented by a given categorical variable. You can explore graphically by drawing a scatterplot matrix of the age categories and all numerical variables in the data frame. If there are many variables, summarize each by age and check that they fall into non-overlapping intervals. Here are some sample data for illustration. The grouping variable is V1, but let's pretend we don't know that. # # Create data for testing. # n.obs <- 100 n.vars <- 4 X <- as.data.frame(matrix(round(runif(n.obs*n.vars, 15, 40), 0), n.obs)) # # Create the dummary variable. # cutpoints <- c(-Inf, 20, 25, 30, Inf) Age <- data.frame(group=cut(X$V1, cutpoints)) This scatterplot matrix makes it obvious that variable V1 corresponds to the age groups in group, because it is the only variable that is clearly partitioned by a scatterplot in the group row or column: colors <- rainbow(length(cutpoints), alpha=0.6) names(colors) <- unique(Age$group) pairs(cbind(X, Age), pch=21, bg=colors[Age$group]) I recommend using this approach in any event because if no variables are found to match the age variable (as shown below), with this plot you may quickly identify any variables that almost match it. This can be useful for forensic activities such as identifying inconsistencies in a data table. An R implementation of a summary by age uses tapply to break data into groups by age and by to compute their ranges. If these are non-overlapping (as ordered by by), you have a candidate for a correspondence with age. # # Identify all columns of X that might match Age. # The result is a logical vector indicating which fields of X match. # candidates <- sapply(names(X), function(f) { groups <- tapply(X[[f]], Age) boundaries <- unlist(by(X[f], groups, range)) identical(order(boundaries), 1:length(boundaries)) }) message(paste0("Possible variables are (", paste(names(X)[candidates], collapse=","), ").")) The output is Possible variables are (V1). Although this example used data in the form usually stored in a database--namely, as a categorical variable--it will work without change when Age is a data frame in the format given in the question: unique rows of Age are used for grouping.
Identify the original variable used to calculate the dummies This answer suggests there is value in a graphical exploration of relationships among the variables and illustrates one useful way. It then provides a simple solution that rapidly and automatically i
54,171
Identify the original variable used to calculate the dummies
If you can perfectly reconstruct the dummies from a candidate predictor, then the dummies and the predictor carry the same information. If the dummies encode intervals of the predictor (the most common way of discretizing continuous predictors, which is a bad idea, as discussed often here on CV, but that's not the focus), then you should be able to model the dummies perfectly with a multinomial logistic regression on the predictor: library(nnet) # encode the dummies into a single factor variable: tab$age.level <- as.factor(apply(tab[,-1],1,function(xx)min(which(xx==1)))) model <- multinom(age.level~age,tab) predict(model) table(predict(model),tab$age.level) 1 2 3 4 1 12 0 0 0 2 0 22 0 0 3 0 0 25 0 4 0 0 0 41 We get a perfect confusion matrix. This would be convincing enough for me.
Identify the original variable used to calculate the dummies
If you can perfectly reconstruct the dummies from a candidate predictor, then the dummies and the predictor carry the same information. If the dummies encode intervals of the predictor (the most commo
Identify the original variable used to calculate the dummies If you can perfectly reconstruct the dummies from a candidate predictor, then the dummies and the predictor carry the same information. If the dummies encode intervals of the predictor (the most common way of discretizing continuous predictors, which is a bad idea, as discussed often here on CV, but that's not the focus), then you should be able to model the dummies perfectly with a multinomial logistic regression on the predictor: library(nnet) # encode the dummies into a single factor variable: tab$age.level <- as.factor(apply(tab[,-1],1,function(xx)min(which(xx==1)))) model <- multinom(age.level~age,tab) predict(model) table(predict(model),tab$age.level) 1 2 3 4 1 12 0 0 0 2 0 22 0 0 3 0 0 25 0 4 0 0 0 41 We get a perfect confusion matrix. This would be convincing enough for me.
Identify the original variable used to calculate the dummies If you can perfectly reconstruct the dummies from a candidate predictor, then the dummies and the predictor carry the same information. If the dummies encode intervals of the predictor (the most commo
54,172
Identify the original variable used to calculate the dummies
To detect that age2...age5 are constructed from the same variable, you can check whether the sum of those dummy variables always equals 1: all(age2+age3+age4+age5==1) But I don't know if there is a way to check whether they come specifically from the variable age; it could be any other variable in the dataframe.
Identify the original variable used to calculate the dummies
To detect that age2...5 are constructed from the same variable, you can check if the sum of those dummy variables equals 1 : all(age2+age3+age4+age5==1) But I don't know if there is a way to check if
Identify the original variable used to calculate the dummies To detect that age2...5 are constructed from the same variable, you can check if the sum of those dummy variables equals 1 : all(age2+age3+age4+age5==1) But I don't know if there is a way to check if they come specifically from the variable age, it could be any other variable in the dataframe.
Identify the original variable used to calculate the dummies To detect that age2...5 are constructed from the same variable, you can check if the sum of those dummy variables equals 1 : all(age2+age3+age4+age5==1) But I don't know if there is a way to check if
54,173
Continuous Version of Coupon-Collector Problem
This question reminds me of Wilfrid Kendall's dead leaves simulation, which he uses to explain the difference between forward and backward sampling. Given that the problem can be formalised through uniform spacings, this highly detailed answer on CV is connected with this question. Indeed, if $U_1,\ldots,U_T$ denote the mid-times of the elevator trips on days 1, 2, ..., T, assumed to be Uniform on $(0,L)$, and if $U_{(1)},U_{(2)},\ldots,U_{(T)}$ are the corresponding order statistics, the condition for covering the entire L-minute musical programme is $$U_{(2)}-U_{(1)}<1,U_{(3)}-U_{(2)}<1,\ldots,U_{(T)}-U_{(T-1)}<1,$$ and $$U_{(1)}+L-U_{(T)}<1.$$ Defining the Dirichlet vector $\Delta$ associated with the spacings $U_{(i)}-U_{(i-1)}$ $(2\le i\le T+1)$, with $U_{(T+1)}=L$, $$\Delta=\frac{1}{L}(U_{(1)},U_{(2)}-U_{(1)}, U_{(3)}-U_{(2)},\ldots,U_{(T)}-U_{(T-1)},L-U_{(T)})$$ which is Dirichlet $\mathcal{D}_{T+1}(1,\ldots,1)$, the question is thus almost equivalent to finding the law of the maximal component of $\Delta$, to determine $$\mathbb{P}\left(\max_{1\le i\le T+1}\Delta_i<1\big/L\right)$$ for which the approximation provided in the above-mentioned answer applies. Obviously, this is not the entire answer to the question but it provides an interesting entry. The expectation of $T$ can then be deduced from $$\mathbb{P}(\max_i\Delta_i\le 1/L) = \sum_{j=0}^{L} { T+1 \choose j } (-1)^j (1-j/L)^T,$$ since \begin{align*} \mathbb{E}[T]&=\sum_{t=L+1}^\infty t \mathbb{P}(T=t)\\ &=\sum_{t=L+1}^\infty t \mathbb{P}(T\ge t)-\sum_{t=1}^\infty t \mathbb{P}(T\ge t+1)\\ &=(L+1)\mathbb{P}(T\ge L+1)+\sum_{t=L+2}^\infty \mathbb{P}(T\ge t)\\ &=(L+1)\{1-\mathbb{P}(T\le L)\}+\sum_{t=L+2}^\infty \{1-\mathbb{P}(T\le t-1)\}\\ &=L+1+\sum_{t=L+1}^\infty \{1-\mathbb{P}(T\le t)\}\\ &=L+1+\sum_{t=L+1}^\infty \left\{1-\mathbb{P}\left(\max_{1\le i\le t+1}\Delta_i\le1\big/L\right)\right\}\\ &=L+1+\sum_{t=L+1}^\infty \left\{1-\sum_{j=0}^{L} { t+1 \choose j } (-1)^j (1-j/L)^t\right\}\\ \end{align*} A quick simulation shows the accuracy of the expectation. The approximation to the actual problem of "the continuous coupon collector" can also be evaluated by simulation and the regression of the simulated expected number $\hat T$ on the (approximate) expected number $\hat T_0$ shows a good fit of the formula $$\hat T=3/2+\hat T_0$$
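For reference, a rough sketch of the kind of simulation used above (the value of $L$ and the number of replications are arbitrary choices), checking coverage via the spacing conditions stated at the start of the answer:

```python
import numpy as np

rng = np.random.default_rng(0)

def first_cover_day(L, rng):
    """First day T at which all spacings (including the wrap-around one)
    of the uniform mid-times drop below 1."""
    times = []
    while True:
        times.append(rng.uniform(0, L))
        u = np.sort(times)
        gaps = np.diff(u, append=u[0] + L)   # consecutive spacings plus U(1)+L-U(T)
        if gaps.max() < 1:
            return len(times)

L = 10
sims = [first_cover_day(L, rng) for _ in range(2000)]
print(np.mean(sims))   # compare with L + 1 + sum_t {1 - P(max spacing <= 1/L)}
```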
54,174
How to calculate the standard deviation for a Gaussian Process?
The closed form equation for the predicted covariance inverts $K(x_d, x_d)$. However, using the Cholesky decomposition is faster and more numerically stable than directly taking the inverse. You can see the procedure laid out in Algorithm 2.1 of Gaussian Processes for Machine Learning. However, they don't show why they're equivalent, so I'll sketch it out here. First, the closed-form equation gives an $n$ x $n$ covariance matrix for $n$ test points. The code only cares about its diagonal. $K(x_*, x_*)$ is the $n$ x $n$ prior covariance between the test points, but the code extracts np.diag(K_ss) Note that np.sum(Lk**2, axis=0) is equivalent to: np.diag(Lk.T @ Lk) In other words, Lk.T @ Lk is in fact equivalent to $$ K(x_*, x_d) K(x_d, x_d)^{-1} K(x_d, x_*) $$ The Cholesky decomposition of $K$ produces a lower triangular matrix $L$ with real positive diagonal entries such that $$K = LL^T$$ Let $k_s = K(x_d, x_*)$ so that the least squares (np.linalg.solve) step outputs: $$L_k = (L^TL)^{-1}L^Tk_s$$ $(AB)^T = B^TA^T$, $(AB)^{-1} = B^{-1}A^{-1}$, and $(A^{-1})^T = (A^T)^{-1} = A^{-T}$ , so \begin{align} L_k^TL_k &= k_s^T L [(L^TL)^{-1}]^T(L^TL)^{-1}L^Tk_s\\ L_k^TL_k &= k_s^T L (L^{-1}L^{-T})^TL^{-1}L^{-T}L^Tk_s\\ L_k^TL_k &= k_s^T L L^{-1}L^{-T}L^{-1}L^{-T}L^Tk_s\\ L_k^TL_k &= k_s^T L^{-T}L^{-1}k_s \end{align} From the definition of the Cholesky decomposition: $$ K^{-1} = (LL^T)^{-1} = L^{-T}L^{-1}$$ Therefore: $$ L_k^TL_k = k_s^T K^{-1} k_s $$ Finally, note that in the code, they add a small diagonal element to the training covariance so that: $$K = K(x_d, x_d) + \sigma_n^2I$$ The small diagonal element represents an estimate of the observation noise. That is, we assume that the observations $y$ are normally distributed with mean $f(x)$ and variance $\sigma_n^2$. Numerically, this helps to guarantee that the training covariance is positive definite.
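Here is a small numerical check of that equivalence (a sketch with a made-up RBF kernel and random 1-d inputs, not the tutorial's code): np.sum(Lk**2, axis=0) matches the diagonal of $K(x_*, x_d) K(x_d, x_d)^{-1} K(x_d, x_*)$ computed with an explicit inverse.

```python
import numpy as np

def rbf(a, b, ell=1.0):
    # Squared-exponential kernel between two sets of 1-d inputs
    d2 = (a[:, None] - b[None, :]) ** 2
    return np.exp(-0.5 * d2 / ell**2)

rng = np.random.default_rng(1)
x_train = rng.uniform(-3, 3, size=8)
x_test = np.linspace(-3, 3, 5)

sigma_n = 0.1
K = rbf(x_train, x_train) + sigma_n**2 * np.eye(len(x_train))  # noisy training covariance
K_s = rbf(x_train, x_test)                                     # k_s = K(x_d, x_*)

L = np.linalg.cholesky(K)
Lk = np.linalg.solve(L, K_s)        # triangular solve, i.e. L^{-1} k_s

direct = np.diag(K_s.T @ np.linalg.inv(K) @ K_s)   # explicit-inverse version
via_chol = np.sum(Lk**2, axis=0)                   # Cholesky version
print(np.allclose(direct, via_chol))               # True
```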
54,175
E[X| X>Y] for independent X, Y ~ N(0,1)
Rewriting and using linearity of expectation, $$ E[X | X > Y] = E[X | X - Y > 0] = E[X - Y | X - Y > 0] + E[Y | X - Y > 0]. $$ Define now $X' = -X, Y' = -Y$. Then $$ E[Y | X - Y > 0] = E[-Y' | Y' - X' > 0] = -E[Y' | Y' - X' > 0]. $$ However, note that $(Y', X')$ has the same joint distribution as $(X, Y)$, so we have $E[Y' | Y' - X' > 0] = E[X | X - Y > 0]$. Inserting back, then, $$ E[X | X > Y] = E[X - Y | X - Y > 0] - E[X | X > Y], $$ implying that $$ E[X | X > Y] = \frac{E[X - Y | X - Y > 0]}{2}. $$ Finally, note that $X - Y$ is a normal R.V. with mean 0 and variance 2, so $E[ X - Y | X - Y > 0]$ is easy to find.
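For a sanity check: since $X-Y \sim N(0,2)$, $E[X-Y \mid X-Y>0] = \sqrt{2}\cdot\sqrt{2/\pi} = 2/\sqrt{\pi}$, so the answer works out to $1/\sqrt{\pi} \approx 0.564$. A quick Monte Carlo confirmation:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(10**6)
y = rng.standard_normal(10**6)

mask = x > y
print(x[mask].mean())        # Monte Carlo estimate of E[X | X > Y]
print(1 / np.sqrt(np.pi))    # closed form, about 0.5642
```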
54,176
The purpose of threshold in naive bayes algorithm
In short: The threshold is not a part of the Naive Bayes algorithm. A Naive Bayes algorithm will be able to say, for a certain sample, that the probability of it being in class C1 is 60% and in C2 is 40%. Then it's up to you to interpret this as a classification into class C1, which would be the case for a 50% threshold. When using accuracy as a metric you essentially count the number of correct classifications and thus state a definite threshold (like 50%) that is used to determine which class is being predicted for each sample. You might want to take a look at this answer, and Frank Harrell's Classification vs. Prediction. Why cross validation? Because you want to use a separate training/test set and use all your data. How? You could determine recall & precision for every fold and every threshold and make a choice. Some sidenote: Naive Bayes has the tendency to push probabilities to extremes (0% and 100%), which is what is called being badly calibrated. It's due to its basic assumption that all attributes are conditionally independent, which is often not the case. The latter is visible in the example below; some metrics are plotted as a function of the threshold for a Naive Bayes classifier on a spam task.
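To make the separation concrete, here is a sketch using scikit-learn's GaussianNB on synthetic data (my own example, not the spam task from the figure): the model only returns probabilities, and the threshold is applied afterwards by you.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = GaussianNB().fit(X_tr, y_tr)
proba = clf.predict_proba(X_te)[:, 1]    # P(class 1 | x) for each test sample

threshold = 0.7                          # your choice, not the model's
pred = (proba >= threshold).astype(int)  # turn probabilities into classifications
print(proba[:10].round(2), pred[:10])
```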
54,177
What does word embedding weighted by tf-idf mean?
This quote is clearly talking about sentence embeddings, obtained from word embeddings. If the sentence $s$ consists of words $(w_1, ..., w_n)$, we'd like to define an embedding vector $Emb_s(s) \in \mathbb{R}^d$ for some $d > 0$. The authors of this paper propose to compute it from the embeddings of words $w_i$, let's call them $Emb_w(w_i)$, so that $Emb_s(s)$ is a linear combination of $Emb_w(w_i)$ and has the same dimensionality $d$: $$Emb_s(s) = \sum_{w_i \in s} c_i \cdot Emb_w(w_i)$$ where $c_i \in \mathbb{R}$ are the coefficients (scalars). Note that $d$ is the same for all word vectors. In the simplest case, all $c_i = 1$, so $Emb_s(s)$ would be a sum of constituent vectors. A better approach is to do averaging, i.e., $c_i = \frac{1}{n}$ (to handle sentences of different lengths). Note that the dimensionality doesn't change, it's still $d$. Finally, the proposed method is the weighted average, where the weights are TF-IDF. This captures the fact that some words in a sentence are naturally more valuable than others. Once again, there's no problem with dimensions, because it's a sum of $\mathbb{R}^d$ vectors, multiplied by scalars.
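A minimal NumPy sketch of the weighted average (the word vectors and TF-IDF weights below are made up for illustration; in practice they would come from a trained embedding and a fitted TF-IDF model):

```python
import numpy as np

emb = {                       # hypothetical word embeddings, each in R^d with d = 4
    "the": np.array([0.1, 0.0, 0.2, 0.1]),
    "cat": np.array([0.9, 0.3, 0.4, 0.0]),
    "sat": np.array([0.2, 0.8, 0.1, 0.5]),
}
tfidf = {"the": 0.1, "cat": 2.3, "sat": 1.7}   # hypothetical TF-IDF weights

sentence = ["the", "cat", "sat"]
weights = np.array([tfidf[w] for w in sentence])
vectors = np.stack([emb[w] for w in sentence])

# TF-IDF-weighted average of the word vectors: still a vector in R^d
sent_emb = (weights[:, None] * vectors).sum(axis=0) / weights.sum()
print(sent_emb.shape)   # (4,)
```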
54,178
Why is two-sided gradient checking more accurate? [closed]
One way to look at this is by Taylor approximation. Remember $$f(x+\Delta x)\approx f(x)+\Delta x f'(x)+\frac 1 2 \Delta x^2 f''(x)+\frac 1 6 \Delta x^3f'''(x)+\dots$$ One sided looks like this $$\frac{f(x+\Delta x)-f(x)}{\Delta x}\approx f'(x)+\frac 1 2 \Delta x f''(x)$$ Two sided looks like this $$\frac{f(x+\Delta x)-f(x-\Delta x)}{2\Delta x}\approx f'(x)+\frac 1 6 \Delta x^2 f'''(x)$$ In other words, the two-sided difference exploits the symmetry and cancels out the contribution of the second-order term in the Taylor expansion. So, when you squeeze $\Delta x$, the two-sided error diminishes much more quickly than the one-sided one.
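A quick numerical illustration of those error orders (my own example, using f = sin whose derivative cos is known):

```python
import numpy as np

f, fprime = np.sin, np.cos
x = 1.0

for dx in [1e-1, 1e-2, 1e-3]:
    one_sided = (f(x + dx) - f(x)) / dx
    two_sided = (f(x + dx) - f(x - dx)) / (2 * dx)
    print(dx,
          abs(one_sided - fprime(x)),   # error shrinks like dx
          abs(two_sided - fprime(x)))   # error shrinks like dx**2
```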
54,179
Why is two-sided gradient checking more accurate? [closed]
For a theoretical analysis you need to read a book on numerical analysis. But intuitively it seems reasonable to approximate the tangent line with a secant between two points symmetric about the point of tangency. Let's look at a simple numerical example: let $f(x)=x^2$ and suppose we are interested in the derivative at $x_0=0$, so we know the true value is zero. Then the symmetrical two-sided difference gives $$ \frac{f(\epsilon)-f(-\epsilon)}{2\epsilon}=\frac{(\epsilon)^2-(-\epsilon)^2}{2\epsilon}=0 $$ for all $\epsilon \not =0$. With the asymmetric difference you can see for yourself that the value will be either $\epsilon$ or $-\epsilon$. Just as an illustration.
54,180
Can random forest based feature selection method be used for multiple regression in machine learning
Firstly, a method that first looks at univariate correlations for pre-identifying things that should go into a final model will tend to do badly for a number of reasons: ignoring model uncertainty (a single selected model), using statistical significance/strength of correlation as a criterion to select (if it is about prediction, you should rather try to assess how much something helps for prediction - these are not necessarily the same thing), "falsely" identifying predictors in univariate correlations (i.e. another predictor is even better, but because the one you look at correlates a bit with it, it looks like it correlates pretty well with the outcome) and missing out on predictors (they may only show up/become clear once other ones are adjusted for). Additionally, not wrapping this into any form of bootstrapping/cross-validation/whatever to get a realistic assessment of your model uncertainty is likely to mislead you. Furthermore, treating continuous predictors as having linear effects can often be improved upon by methods that do not make such an assumption (e.g. RF). Using RF as a pre-selection for a linear model is not such a good idea. Variable importance is really hard to interpret and it is really hard (or meaningless?) to set a cut-off on it. You do not know whether variable importance is about the variable itself or about interactions, plus you are losing out on non-linear transformations of variables. It depends in part on what you want to do. If you want good predictions, maybe you should not care too much about whether your method is a traditional statistical model or not. Of course, there are plenty of things like the elastic net, LASSO, Bayesian models with the horseshoe prior etc. that fit better into a traditional modeling framework and could also accommodate e.g. splines for continuous covariates.
54,181
Can random forest based feature selection method be used for multiple regression in machine learning
There are also variable selection methods (e.g. forward or backward) with the AIC or BIC. Be aware that the interpretation of p-values after variable selection is no longer completely correct. Search for "post-selection inference" for more information about this.
54,182
Can k-NN be ensembled?
Sure, k-NN can be ensembled. You could, for example, use resampling to generate different models (like with a Random Forest), or you could vary $k$ (the number of neighbors), or you could use different functions for computing the distance. But, my experience is that k-NN rarely does well in high dimensional problems, so it would just be an ensemble of bad models, which isn't going to do well relative to an ensemble of good models.
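For example, a resampling-based ensemble of k-NN models is a one-liner in scikit-learn (a sketch on synthetic data; whether it actually helps depends on the problem, as noted above):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=400, n_features=20, random_state=0)

single = KNeighborsClassifier(n_neighbors=5)
bagged = BaggingClassifier(KNeighborsClassifier(n_neighbors=5),  # bag of k-NN models
                           n_estimators=25, max_samples=0.7, random_state=0)

print(cross_val_score(single, X, y, cv=5).mean())  # single k-NN
print(cross_val_score(bagged, X, y, cv=5).mean())  # bagged ensemble of k-NN
```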
54,183
Can k-NN be ensembled?
I see four abstract ways to do so, from simplest to more complex. (1) Apply $k$-NN in different random-projection latent spaces, or in some other latent space (e.g. a neural network autoencoder's latent space), and combine them, that is $Ensemble(kNN_{raw},kNN_{projected})$. (2) Apply different collaborative-filtering-style scores, e.g. average distance, mean distance, max, or any other linear combination such as $score = w_1 d_{k_1} + w_2 d_{k_2} + \dots$ (for unsupervised learning only), and similarly combine them. (3) Use different bins of neighbors, e.g. bin_1: the 1st-10th nearest neighbors, bin_2: the 10th-20th, and combine the scores across the binned $k$-NN models. (4) Use different distance definitions (Minkowski, Manhattan, etc.). Hope it helps
54,184
No change in accuracy using Adam Optimizer when SGD works fine
The benefits of Adam can be marginal, at best. The initial results were strong, but there is evidence that Adam converges to dramatically different minima compared to SGD (or SGD + momentum). "The Marginal Value of Adaptive Gradient Methods in Machine Learning" Ashia C. Wilson, Rebecca Roelofs, Mitchell Stern, Nathan Srebro, and Benjamin Recht Adaptive optimization methods, which perform local optimization with a metric constructed from the history of iterates, are becoming increasingly popular for training deep neural networks. Examples include AdaGrad, RMSProp, and Adam. We show that for simple over-parameterized problems, adaptive methods often find drastically different solutions than gradient descent (GD) or stochastic gradient descent (SGD). We construct an illustrative binary classification problem where the data is linearly separable, GD and SGD achieve zero test error, and AdaGrad, Adam, and RMSProp attain test errors arbitrarily close to half. We additionally study the empirical generalization capability of adaptive methods on several state-of-the-art deep learning models. We observe that the solutions found by adaptive methods generalize worse (often significantly worse) than SGD, even when these solutions have better training performance. These results suggest that practitioners should reconsider the use of adaptive methods to train neural networks. Speaking from personal experience, Adam can struggle unless you set a small learning rate -- which sort of defeats the whole purpose of using an adaptive method in the first place, not to mention all of the wasted time spent toying with learning rate.
54,185
ARIMA, what is the interpretation for the sum of AR coefficients?
Just rearrange your $\text{AR}(p)$ polynomial $\phi(z) = 1 - \phi_1 z - \cdots - \phi_p z^p$. If $z'$ is a root, then $1 - \phi_1 z' - \cdots - \phi_p (z')^p = 0$. If $z'$ is a unit root, i.e. $z' = 1$, then you can rearrange that to get $$ \sum_i \phi_i = 1. $$ This suggests you can write your $\text{AR}(p)$ polynomial as $\phi(z) = (1 - z)\phi^*(z) = \phi^*(z)(1 - z)$, where $\phi^*(z)$ is an $\text{AR}(p-1)$ polynomial. Differencing your data and then estimating an $\text{AR}(p-1)$ model $\phi^*(z)$ is the same as estimating the initial $\text{AR}(p)$ polynomial $\phi(z)$ but under the assumption that it has one unit root.
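A small numeric check of this factorisation (the $\phi^*$ coefficients below are arbitrary): multiply an $\text{AR}(p-1)$ polynomial by $(1-z)$ and verify that the resulting AR coefficients sum to 1.

```python
import numpy as np

# phi*(z) = 1 - 0.5 z + 0.2 z^2, stored as coefficients of 1, z, z^2
phi_star = np.array([1.0, -0.5, 0.2])

# phi(z) = (1 - z) * phi*(z); a polynomial product is a coefficient convolution
phi = np.convolve([1.0, -1.0], phi_star)

# The AR coefficients phi_1..phi_p are minus the coefficients of z, z^2, ...
ar_coefs = -phi[1:]
print(ar_coefs, ar_coefs.sum())   # the sum is 1, reflecting the unit root
```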
54,186
ARIMA, what is the interpretation for the sum of AR coefficients?
Differencing, like any other transformation (e.g. power transformations), should only be done when deemed necessary. Assuming (which I would never do!) that the exogenous predictors only have a contemporaneous effect, compute a regression, study the error process's acf/pacf and cautiously/iteratively construct/identify an AR/MA model, perhaps ultimately incorporating differences. Note well that pulses/level shifts/seasonal pulses and time trends may also be necessary to render a white-noise error process and, if untreated, will create confusion in your modelling process. If you wish, post your data in a column-oriented csv file and I will try and help you further. EDITED AFTER RECEIPT OF DATA: Note, as @Taylor suggested, a smaller model is more appropriate. I scaled his dependent variable by 10**6. The user wished to retain all input series, thus no stepdown was implemented and the final model contains a few non-significant (non-intrusive) coefficients. 106 quarterly values and 5 predictor series were automatically analyzed by AUTOBOX using transfer function modelling procedures, yielding the final model. Automatic transfer function model identification was accomplished essentially following Tsay http://www.math.cts.nthu.edu.tw/download.php?filename=569_fe0ff1a2.pdf&dir=publish&title=Ruey+S.+Tsay-Lec1 BUT definitely avoiding the Corner Method, as it is not robust (i.e. doesn't work) when you have Gaussian violations. In more detail, 9 deterministic input series anomalies were suggested, and a significant change in error variance at period 69 was detected, leading to WLS. The residuals from this transfer function and their ACF were examined. Finally, the model is (1,1,0)(0,0,0)4 with an error variance change at period 46 and 9 unusual values, including a seasonal pulse and a level shift. Your model had an unnecessary sar coefficient and did not treat the anomalies OR the evident non-constancy of the error variance over time, thus the answer to your question is "YES". Finally, the seasonality in your data is deterministic NOT autoregressive, thus the need for a seasonal dummy starting at period 14 (1994 qtr 2). The sum of the ar coefficients has to be interpreted based upon/given the order of the ar polynomial.
54,187
Intuition for why sum of gaussian RVs is different from gaussian mixture
Forget the Gaussian part for a moment. Compare these two simple situations: A) take a coin whose two sides are marked with 0 and 1 and a die with 20 sides numbered 1 to 20. Toss the coin and roll the die --- and add the results to get a total. Consider these questions (and hints): What's the chance you get a total of 0? (you can't!) What's the chance you get a total of 1? (need 0 on the coin and a 1 on the die) What's the chance you get a total of 11? (You need (0,11) or (1,10) here) What's the chance you get a total of 21? (need (1,20) here) B) take the same coin and the same die and choose which one to use, by tossing a second coin labelled "coin" and "die". Now your total is the number showing on whichever object you tossed or rolled at the second step. What's the chance you get a total of 0? (get this by tossing ("coin",0) ) What's the chance you get a total of 1? ( ("coin",1) or ("die",1) ) What's the chance you get a total of 11? ( ("die",11) ) What's the chance you get a total of 21? (you can't!) The first case (A) is a sum of random variables (convolution of the p.m.f.s). The second case (B) is a mixture (weighted average of p.m.f.s). They're entirely different kinds of things. Now consider a Gaussian case. For example, if X ~ N(0,1) and Y ~ N(100,10) - where the two are independent - then X+Y has almost no chance of being between -1 and 1 (the contribution of Y makes the sum very far above values like those), but a 50-50 mixture of X and Y has quite a good chance of being between -1 and 1. In the case of iid Gaussians (as you suggest in comments), let's simplify further and take the example of the standard normal case (because it's the same up to scale and location shifts that don't alter the distribution shape of either the inputs or the output), though I'll talk about more general cases. First we have to see what calculating distributions for sums of random variables (particularly independent ones) involves; this is necessarily somewhat mathematical but I'll try to motivate what's going on rather than just do the algebra (which is easy to find done for the Gaussian case anyway so I'll just be explaining what happens rather than repeating what's easily found) With discrete variables (as above) to compute the probability of a sum, you have to add the probabilities of all the distinct events that have that sum. So if $Z=X+Y$, $P(Z=z)=\sum_x P(X=x)P(Y=z-x)=\sum_x p_X(x)p_Y(z-x)$ (product because of independence of $X$ and $Y$, and $x$ ranges over the values possible for $x$ in that sum). This is discrete convolution. Similarly for the continuous case -- we add over the cases that give the sum -- . So $f_{Z}(z)=\int_x f_X(x)f_Y(z-x) dx$ ... this is a convolution integral. In particular, see the animation in the second diagram here$^{[1]}$ For standard normals, that integral is just $\int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi}}e^{-\frac12 x^2} \frac{1}{\sqrt{2\pi}}e^{-\frac12 (z-x)^2} dx$ But note that we have a quadratic-in-x in both exponents, so when we multiply them we're just adding the quadratics -- giving another quadratic (clearly this same idea applies for any independent Gaussians whatever their means and variances, and indeed, will generalize up to the jointly-normal case). So we end up with exp of (a quadratic in x with negative leading term). 
Because we can complete the square, we can split the quadratic into two terms -- the quadratic becomes $k(x-\text{something})^2$ plus a term not in $x$ (which drops out the front of the integral, since as far as the integral in $x$ is concerned, it's a "constant"). In our simplified example the quadratic looks like $2(x-z/2)^2+z^2/2$. The first thing is (up to a scaling factor) a density in $x$, so if you pull out the required scaling factor from the scaling constant on the bivariate you started with, you're left with an integral of a normal density. The remainder of the original scaling factor goes out the front, and the integral is $1$ (it's a density!). What's left out the front is $\exp$ of a quadratic in $z$ with a negative leading term (i.e. a Gaussian), times just the required scaling constant to make it a density. In short, the reason why convolutions of Gaussians are Gaussian is because products of $\exp$s are $\exp$s of sums and sums of quadratics are also quadratics. If your quadratics have negative leading terms (as they will for Gaussians), the sum is also going to have a negative leading term. So convolutions of Gaussians must be in the form of Gaussians. Another simple/intuitive argument is as follows: Again consider the iid-bivariate Gaussian case $(X,Y)$, where the contours are circular. Without loss of generality, make it centered at the origin. Now if we rotate 45 degrees \begin{equation} \begin{pmatrix} U \\ V \\ \end{pmatrix} = \begin{pmatrix} \sqrt{\frac12} & \sqrt{\frac12} \\ \sqrt{\frac12} &-\sqrt{\frac12} \\ \end{pmatrix} \begin{pmatrix} X \\ Y \\ \end{pmatrix} = \begin{pmatrix} \sqrt{\frac12}(X+Y) \\ \sqrt{\frac12}(X-Y) \\ \end{pmatrix} \end{equation} we still have the same circular-bivariate Gaussian density centered at the origin. So $U$ and $V$ are also iid Gaussian with the same mean and variance as $(X,Y)$. But $U$ is just a scaled $Z=X+Y$, so $Z$ is also Gaussian. (We can generalize to the non-zero-mean case by simple shifts.) There are a number of proofs for the Gaussian case here; generally using characteristic functions (/Fourier transforms) or by using the convolution integral. I don't know that they're necessarily going to be intuitive for you. However, if you're used to characteristic functions/Fourier transforms, that pretty much gives it to you immediately and might convey some of the intuition; in particular, the Fourier transform of a Gaussian is Gaussian in form, the product of two scaled Gaussian functions in the same variable is another scaled Gaussian, and convolution in the time domain equals multiplication in the frequency domain, so we see that the convolution of two Gaussians is Gaussian. In any case it's useful to have the proofs so you see in more detail where my prior handwaving about quadratics is coming from. $[1]$ Weisstein, Eric W. "Convolution." From MathWorld--A Wolfram Web Resource. https://mathworld.wolfram.com/Convolution.html
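A short simulation of the X ~ N(0,1), Y ~ N(100,10) example from the start of this answer (my own sketch, treating the second parameter as a standard deviation), contrasting the sum with the 50-50 mixture:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10**6
x = rng.normal(0, 1, n)
y = rng.normal(100, 10, n)      # treating 10 as the standard deviation here

total = x + y                              # convolution: one draw of X plus one of Y
pick = rng.integers(0, 2, n).astype(bool)
mixture = np.where(pick, x, y)             # 50-50 mixture: either X or Y, not both

print(np.mean((total > -1) & (total < 1)))     # ~0: the sum is never near 0
print(np.mean((mixture > -1) & (mixture < 1))) # ~0.34: half the draws are N(0,1)
```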
54,188
When is $\mathbb{E}\left[\frac{1}{\sum X_{i}}\right] = \frac{1}{\mathbb{E}\left[\sum X_{i}\right]}$? [duplicate]
First note that the random variable $\frac{1}{\sum X_i}$ is not defined when $\sum X_i = 0$. So assume, for now, that the $X_i$s are defined on the positive reals. On the positive reals, the function $f(x) = 1/x$ is a convex function. Thus, using Jensen's inequality, $$E \left[\dfrac{1}{\sum X_i} \right] \geq \dfrac{1}{E\left[ \sum X_i \right]} = \dfrac{1}{\sum E\left[X_i\right]} \,.$$ Now the equality in Jensen's inequality holds if either $f$ is affine (which in this case $f(x) = 1/x$ is not) or if $\sum X_i$ is a degenerate random variable, i.e., a constant. Using the concavity of $1/x$ on the negative reals, a similar argument can be made when the $X_i$s are all defined on the negatives.
54,189
When is $\mathbb{E}\left[\frac{1}{\sum X_{i}}\right] = \frac{1}{\mathbb{E}\left[\sum X_{i}\right]}$? [duplicate]
I worked through some equations, with the original intention of showing that it is in fact true, but finally convinced myself that it might not be. In case it's useful, here is the working that convinced me that it might not be generally true. Initially, I thought that expectation being a linear operator was going to make it easy to prove truth. So I wrote down: $$ \def\Exp{\mathbb{E}} \Exp[A + B] = \Exp[A] + \Exp[B] $$ Then I called your lhs expression $E_1$. So we have: $$ E_1 = \Exp\left[ \frac{1} {\sum X_i} \right] $$ In order to go further, I felt we need to form the expectation over a distribution, so let's say that we have $\def\X{\mathbf{X}}\X \sim g$, where $\X = \{X_1, X_2, \dots, X_n \}$. So, we have: $$ E_1 = \Exp_{\X \sim g}\left[ \frac{1}{\sum X_i} \right] $$ We can expand the expectation as an integration (discrete or continuous; let's take the continuous case): $$ E_1 = \int_\X p(\X) \frac{1}{\sum X_i} \, d\X \\ = \int_\X \frac{p(\X)}{\sum X_i} \, d\X $$ It's at this point I thought: OK, there seems to be no obvious way of just multiplying the equation by $\sum X_i$ or similar. This doesn't prove that we can't do something similar, but combined with Michael Chernick's assertions, it at least convinces me that his assertions seem not unreasonable.
When is $\mathbb{E}\left[\frac{1}{\sum X_{i}}\right] = \frac{1}{\mathbb{E}\left[\sum X_{i}\right]}$?
I worked through some equations, with the original intention of showing that it is in fact true, but finally convincing myself that it might not be. In case it's useful, here is the working that convi
When is $\mathbb{E}\left[\frac{1}{\sum X_{i}}\right] = \frac{1}{\mathbb{E}\left[\sum X_{i}\right]}$? [duplicate] I worked through some equations, with the original intention of showing that it is in fact true, but finally convincing myself that it might not be. In case it's useful, here is the working that convinced me that it might not be generally true: Initially, I thought that expectation being a linear operator was going to make it easy to prove truth. So I wrote down: $$ \def\Exp{\mathbb{E}} \Exp[A + B] = \Exp[A] + \Exp[B] $$ Then I called your lhs expression $E_1$. So we have: $$ E_1 = \Exp\left[ \frac{1} {\sum X_i} \right] $$ In order to go further, I felt we need to form the expectation over a distribution, so let's say that we have $\def\X{\mathbf{X}}\X \sim g$, where $\X = \{X_1, X_2, \dots, X_n \}$. So, we have: $$ E_1 = \Exp_{\X \sim g}\left[ \frac{1}{\sum X_i} \right] $$ We can expand the expectation as an integration (discrete or continuous, lets take the continuous case): $$ E_1 = \int_\X p(\X) \frac{1}{\sum X_i} \, d\X \\ = \int_\X \frac{p(\X)}{\sum X_i} \, d\X $$ It's at this point I thought, ok, there seems no obvious way of just multiplying the equation by $\sum X_i$ or similar. Which doesnt prove that we cant do something similar, but combined with Michael Chernick's assertions, I guess it at least convinces me his assertions seem not unreasonable to me.
When is $\mathbb{E}\left[\frac{1}{\sum X_{i}}\right] = \frac{1}{\mathbb{E}\left[\sum X_{i}\right]}$? I worked through some equations, with the original intention of showing that it is in fact true, but finally convincing myself that it might not be. In case it's useful, here is the working that convi
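To settle the doubt expressed above, a tiny exact example (my own construction, not taken from the thread) already shows the equality fails in general: let two independent $X_i$ each take the values 1 or 3 with probability 1/2.

```python
from itertools import product
from fractions import Fraction

values = [Fraction(1), Fraction(3)]   # each X_i is 1 or 3 with probability 1/2
prob = Fraction(1, 4)                 # probability of each (x1, x2) pair

lhs = sum(prob / (x1 + x2) for x1, x2 in product(values, repeat=2))       # E[1/(X1+X2)]
rhs = 1 / sum(prob * (x1 + x2) for x1, x2 in product(values, repeat=2))   # 1/E[X1+X2]

print(lhs, rhs)  # 7/24 versus 1/4 -- not equal
```

So $\mathbb{E}[1/(X_1+X_2)] = 7/24$ while $1/\mathbb{E}[X_1+X_2] = 1/4$, consistent with the strict Jensen inequality given in the other answer.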
54,190
Taylor approximation of expected value of multivariate function
Taylor series approximation of multivariate function $f$ around $x_0$ is $$ f(x) \approx f(x_0) + \nabla f(x_0)'(x-x_0) + \frac{1}{2} (x-x_0)' H_f(x_0) (x-x_0). $$ If you substitute $x=X$ and $x_0 = \mathbb{E}X$ you get $$ f(X) \approx f(\mathbb{E}X) + \nabla f(\mathbb{E}X)'(X-\mathbb{E}X) + \frac{1}{2} (X-\mathbb{E}X)' H_f(\mathbb{E}X) (X-\mathbb{E}X). $$ Taking expectation on both sides gives $$ \mathbb{E}f(X) \approx f(\mathbb{E}X) + \nabla f(\mathbb{E}X)' \mathbb{E}(X-\mathbb{E}X) + \frac{1}{2} \mathbb{E}[(X-\mathbb{E}X)' H_f(\mathbb{E}X) (X-\mathbb{E}X)]. $$ As you noticed, $\mathbb{E}(X-\mathbb{E}X) = 0$, so the expression simplifies to $$ \mathbb{E}f(X) \approx f(\mathbb{E}X) + \frac{1}{2} \mathbb{E}[(X-\mathbb{E}X)' H_f(\mathbb{E}X) (X-\mathbb{E}X)]. $$ This is as far as you can get without assumptions on $X$. However, in your specific case the second term can be further simplified. Rewriting the quadratic form using sums gives $$ \mathbb{E}[(X-\mathbb{E}X)' H_f(\mathbb{E}X) (X-\mathbb{E}X)] = \sum_{i=1}^n \sum_{j=1}^n \mathbb{E}[(X_i - \mathbb{E}X_i) H_f(\mathbb{E}X)_{ij} (X_j - \mathbb{E}X_j)] = (*). $$ If $i \neq j$ then $X_i$ and $X_j$ are independent, therefore $$ \mathbb{E}[(X_i - \mathbb{E}X_i) H_f(\mathbb{E}X)_{ij} (X_j - \mathbb{E}X_j)] = \mathbb{E}[X_i - \mathbb{E}X_i] \, H_f(\mathbb{E}X)_{ij} \, \mathbb{E}[X_j - \mathbb{E}X_j] = 0. $$ Using that fact, the double sum simplifies to a single sum $$ (*) = \sum_{i=1}^n \mathbb{E}[(X_i - \mathbb{E}X_i) H_f(\mathbb{E}X)_{ii} (X_i - \mathbb{E}X_i)]. $$ The expression $H_f(\mathbb{E}X)_{ii}$ is constant (not random), so it can be pulled out of the expectation: $$ (*) = \sum_{i=1}^n H_f(\mathbb{E}X)_{ii}\mathbb{E}[(X_i - \mathbb{E}X_i)^2] = \sum_{i=1}^n H_f(\mathbb{E}X)_{ii} Var(X_i). $$ Summing up, the Taylor series approximation simplifies to $$ \mathbb{E}f(X) \approx f(\mathbb{E}X) + \frac{1}{2} \sum_{i=1}^n H_f(\mathbb{E}X)_{ii} Var(X_i). $$ In your case $\mathbb{E}X_i = \frac{a+b}{2}$ and $Var(X_i) = \frac{1}{12}(b-a)^2$. Also, you don't need to compute the whole Hessian matrix, because only its diagonal elements are used in the formula.
Taylor approximation of expected value of multivariate function
Taylor series approximation of multivariate function $f$ around $x_0$ is $$ f(x) \approx f(x_0) + \nabla f(x_0)'(x-x_0) + \frac{1}{2} (x-x_0)' H_f(x_0) (x-x_0). $$ If you substitute $x=X$ and $x_0 =
Taylor approximation of expected value of multivariate function Taylor series approximation of multivariate function $f$ around $x_0$ is $$ f(x) \approx f(x_0) + \nabla f(x_0)'(x-x_0) + \frac{1}{2} (x-x_0)' H_f(x_0) (x-x_0). $$ If you substitute $x=X$ and $x_0 = \mathbb{E}X$ you get $$ f(X) \approx f(\mathbb{E}X) + \nabla f(\mathbb{E}X)'(X-\mathbb{E}X) + \frac{1}{2} (X-\mathbb{E}X)' H_f(\mathbb{E}X) (X-\mathbb{E}X). $$ Taking expectation on both sides gives $$ \mathbb{E}f(X) \approx f(\mathbb{E}X) + \nabla f(\mathbb{E}X)' \mathbb{E}(X-\mathbb{E}X) + \frac{1}{2} \mathbb{E}[(X-\mathbb{E}X)' H_f(\mathbb{E}X) (X-\mathbb{E}X)]. $$ As you noticed, $\mathbb{E}(X-\mathbb{E}X) = 0$, so the expression simplifies to $$ \mathbb{E}f(X) \approx f(\mathbb{E}X) + \frac{1}{2} \mathbb{E}[(X-\mathbb{E}X)' H_f(\mathbb{E}X) (X-\mathbb{E}X)]. $$ This is as far as you can get without assumptions on $X$. However, in your specific case the second term can be further simplified. Rewriting the quadratic form using sums gives $$ \mathbb{E}[(X-\mathbb{E}X)' H_f(\mathbb{E}X) (X-\mathbb{E}X)] = \sum_{i=1}^n \sum_{j=1}^n \mathbb{E}[(X_i - \mathbb{E}X_i) H_f(\mathbb{E}X)_{ij} (X_j - \mathbb{E}X_j)] = (*). $$ If $i \neq j$ then $X_i$ and $X_j$ are independent, therefore $$ \mathbb{E}[(X_i - \mathbb{E}X_i) H_f(\mathbb{E}X)_{ij} (X_j - \mathbb{E}X_j)] = \mathbb{E}[X_i - \mathbb{E}X_i] \, H_f(\mathbb{E}X)_{ij} \, \mathbb{E}[X_j - \mathbb{E}X_j] = 0. $$ Using that fact, the double sum simplifies to a single sum $$ (*) = \sum_{i=1}^n \mathbb{E}[(X_i - \mathbb{E}X_i) H_f(\mathbb{E}X)_{ii} (X_i - \mathbb{E}X_i)]. $$ The expression $H_f(\mathbb{E}X)_{ii}$ is constant (not random), so it can be pulled out of the expectation: $$ (*) = \sum_{i=1}^n H_f(\mathbb{E}X)_{ii}\mathbb{E}[(X_i - \mathbb{E}X_i)^2] = \sum_{i=1}^n H_f(\mathbb{E}X)_{ii} Var(X_i). $$ Summing up, the Taylor series approximation simplifies to $$ \mathbb{E}f(X) \approx f(\mathbb{E}X) + \frac{1}{2} \sum_{i=1}^n H_f(\mathbb{E}X)_{ii} Var(X_i). $$ In your case $\mathbb{E}X_i = \frac{a+b}{2}$ and $Var(X_i) = \frac{1}{12}(b-a)^2$. Also, you don't need to compute the whole Hessian matrix, because only its diagonal elements are used in the formula.
Taylor approximation of expected value of multivariate function Taylor series approximation of multivariate function $f$ around $x_0$ is $$ f(x) \approx f(x_0) + \nabla f(x_0)'(x-x_0) + \frac{1}{2} (x-x_0)' H_f(x_0) (x-x_0). $$ If you substitute $x=X$ and $x_0 =
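A small numerical sketch of the final formula. The function $f(x) = 1/(1 + \sum_i x_i)$, the interval $[a,b] = [0,1]$ and $n = 3$ are arbitrary illustrative choices (not from the question); the point is only that the second-order approximation lands close to a Monte Carlo estimate of $\mathbb{E}f(X)$ for independent uniform $X_i$.

```python
import numpy as np

# f(x) = 1 / (1 + sum_i x_i); this particular f, a, b and n are illustrative choices.
a, b, n = 0.0, 1.0, 3

mean_x = np.full(n, (a + b) / 2)        # E[X_i] = (a+b)/2
var_x = (b - a) ** 2 / 12.0             # Var(X_i) = (b-a)^2 / 12

def f(x):
    return 1.0 / (1.0 + x.sum())

# Diagonal of the Hessian of f at E[X]: d^2 f / dx_i^2 = 2 / (1 + sum x_i)^3
s0 = mean_x.sum()
hess_diag = np.full(n, 2.0 / (1.0 + s0) ** 3)

approx = f(mean_x) + 0.5 * np.sum(hess_diag * var_x)

# Monte Carlo estimate of E[f(X)] for comparison
rng = np.random.default_rng(1)
X = rng.uniform(a, b, size=(500_000, n))
mc = np.mean(1.0 / (1.0 + X.sum(axis=1)))

print(approx, mc)   # the two values should agree to roughly two decimal places
```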
54,191
Survival Analysis - Delayed Entry?
Your scenario raises the issue of selection bias. In order for an individual to be selected for measurement into your study, they must survive until their first period of measurement, and as you point out, each individual has a different time of entry. This effectively means that individuals who start later have periods of 'immortality' where they cannot have died (or been excluded for other reasons pertaining to your research), making them more likely to be special in some way as compared with folks who entered the study earlier. For example, individuals entering the study in 2002 or later are individuals who survived past 9/11, whereas those entering the study before 9/11/2001 did not have to survive this. To help minimize the effects of selection bias: Be super explicit about defining your target population, and, consequently, your inferences. Be very careful about comparisons of survival between groups with different distributions of first period of observation dates (ideally the distributions should be identical). Otherwise you run into survivor effects (e.g., healthy worker effect). If start time distributions vary between comparator groups, include it in your model as an age at entry term, or stratify by it. Include age (either age since birth, or some kind of elapsed time as a control relevant w/r/t your study, a la 'years since surgery' or 'years since graduation') as a covariate or stratifying variable in your model. NB: If you artificially create a uniform starting time for each person in your data set, then you create an 'immortal person-time' bias (see Rothman and Greenland), which will tend to bias rates toward 0, and as a consequence bias comparisons toward no difference. Rothman, K. J. and Greenland, S. (1998). Modern Epidemiology, chapter Cohort Studies—Immortal Person-Time. Lippincott-Raven, 2nd edition.
Survival Analysis - Delayed Entry?
Your scenario raises the issue of selection bias. In order for an individual to be selected for measurement into your study, they must survive until their first period of measurement, and as you point
Survival Analysis - Delayed Entry? Your scenario raises the issue of selection bias. In order for an individual to be selected for measurement into your study, they must survive until their first period of measurement, and as you point out, each individual has a different time of entry. This effectively means that individuals who start later have periods of 'immortality' where they cannot have died (or been excluded for other reasons pertaining to your research), making them more likely to be special in some way as compared with folks who entered the study earlier. For example, individuals entering the study in 2002 or later are individuals who survived past 9/11, whereas those entering the study before 9/11/2001 did not have to survive this. To help minimize the effects of selection bias: Be super explicit about defining your target population, and, consequently, your inferences. Be very careful about comparisons of survival between groups with different distributions of first period of observation dates (ideally the distributions should be identical). Otherwise you run into survivor effects (e.g., healthy worker effect). If start time distributions vary between comparator groups, include it in your model as an age at entry term, or stratify by it. Include age (either age since birth, or some kind of elapsed time as a control relevant w/r/t your study, a la 'years since surgery' or 'years since graduation') as a covariate or stratifying variable in your model. NB: If you artificially create a uniform starting time for each person in your data set, then you create an 'immortal person-time' bias (see Rothman and Greenland), which will tend to bias rates toward 0, and as a consequence bias comparisons toward no difference. Rothman, K. J. and Greenland, S. (1998). Modern Epidemiology, chapter Cohort Studies—Immortal Person-Time. Lippincott-Raven, 2nd edition.
Survival Analysis - Delayed Entry? Your scenario raises the issue of selection bias. In order for an individual to be selected for measurement into your study, they must survive until their first period of measurement, and as you point
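A small simulation illustrating the selection effect described above (my own sketch; the exponential survival times and uniform entry times are arbitrary choices). People who must survive to a late entry date look healthier than the target population:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 200_000

# True survival times (from the time origin) in the full target population
surv = rng.exponential(scale=10.0, size=N)

# Staggered study entry: each person enters at a different (random) time
entry = rng.uniform(0.0, 15.0, size=N)

# Only people still alive at their entry time can be observed at all
observed = surv > entry

print("true mean survival:          ", surv.mean())
print("mean survival, observed only:", surv[observed].mean())
# The observed subset looks healthier: conditioning on surviving to a late
# entry date ('immortal' person-time before entry) inflates apparent survival.
```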
54,192
Probability of a sample being drawn from a continuous distribution is zero; so how to choose a more likely distribution?
Your approach is not correct. For a moment let's forget about the distributions and simplify to asking a simpler question: given $X$, what is the probability that it comes from the class $C_i$?, i.e. $p(C_i | X)$, while what you propose is looking at the probability that $X=x$ given that it comes from $C_i$. Those are two different things. To calculate the probability that you are interested in, you would need to use Bayes' theorem $$ p(C_i | X) = \frac{p(X | C_i) \,p(C_i)}{\sum_j p(X | C_j) \,p(C_j)} $$ so you would need to assume some prior for $p(C_i)$, i.e. the probability of observing samples from class $C_i$. By looking only at the likelihood $p(X | C_i)$ you cannot tell the probability you are interested in; you can only say that there is a greater likelihood of observing one option as compared to another, and for this there is no problem in dealing with probability densities, since you look only at their relative sizes. If you are not interested in the probabilities, but only in deciding which class your sample might have come from, you may use a likelihood-ratio test.
Probability of a sample being drawn from a continuous distribution is zero; so how to choose a more
Your approach is not correct. For a moment let's forget about the distributions and simplify to asking about simpler question: given $X$, what is the probability that it comes from the class $C_i$?, i
Probability of a sample being drawn from a continuous distribution is zero; so how to choose a more likely distribution? Your approach is not correct. For a moment let's forget about the distributions and simplify to asking a simpler question: given $X$, what is the probability that it comes from the class $C_i$?, i.e. $p(C_i | X)$, while what you propose is looking at the probability that $X=x$ given that it comes from $C_i$. Those are two different things. To calculate the probability that you are interested in, you would need to use Bayes' theorem $$ p(C_i | X) = \frac{p(X | C_i) \,p(C_i)}{\sum_j p(X | C_j) \,p(C_j)} $$ so you would need to assume some prior for $p(C_i)$, i.e. the probability of observing samples from class $C_i$. By looking only at the likelihood $p(X | C_i)$ you cannot tell the probability you are interested in; you can only say that there is a greater likelihood of observing one option as compared to another, and for this there is no problem in dealing with probability densities, since you look only at their relative sizes. If you are not interested in the probabilities, but only in deciding which class your sample might have come from, you may use a likelihood-ratio test.
Probability of a sample being drawn from a continuous distribution is zero; so how to choose a more Your approach is not correct. For a moment let's forget about the distributions and simplify to asking about simpler question: given $X$, what is the probability that it comes from the class $C_i$?, i
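As a concrete sketch of the Bayes-theorem computation (the two normal classes, the priors and the sample below are made-up numbers, not from the question):

```python
import numpy as np
from scipy.stats import norm

# Hypothetical classes: each C_j is a normal distribution with its own (mu, sigma)
params = {"C1": (0.0, 1.0), "C2": (3.0, 2.0)}
prior = {"C1": 0.7, "C2": 0.3}          # p(C_j), assumed known

x = np.array([1.8, 2.5, 0.9])           # the observed sample X

# p(X | C_j): product of densities (fine to use densities, only ratios matter)
lik = {c: np.prod(norm.pdf(x, loc=m, scale=s)) for c, (m, s) in params.items()}

evidence = sum(lik[c] * prior[c] for c in params)          # sum_j p(X|C_j) p(C_j)
posterior = {c: lik[c] * prior[c] / evidence for c in params}

print(posterior)                        # p(C_j | X), sums to 1
print(lik["C1"] / lik["C2"])            # likelihood ratio, if priors are unknown
```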
54,193
Probability of a sample being drawn from a continuous distribution is zero; so how to choose a more likely distribution?
The purpose of this answer is simply to expand on the answer by @Tim. Suppose the likelihood of the parameters given the sample can be expressed as \begin{equation} p(X|\theta) = \prod_{i=1}^n \textsf{N}(x_i|\mu,\sigma^2) , \end{equation} where $X = (x_1, \ldots, x_n)$ is the sample and $\theta = (\mu,\sigma^2)$ are the parameters. Then in general the likelihood of model $j$ (i.e., class $j$) can be expressed as \begin{equation} p(X|C_j) = \int p(X|\theta)\,p(\theta|C_j)\,d\theta , \end{equation} where $p(\theta|C_j)$ is the distribution of $\theta$ given model $j$. This general approach can be specialized to the current case as follows. Let \begin{equation} p(\theta|C_j) = \delta(\mu-\mu_j)\,\delta(\sigma^2 - \sigma_j^2) , \end{equation} where $\delta(x)$ is the Dirac delta "function." In effect, this distribution puts a point mass at $\theta_j = (\mu_j,\sigma_j^2)$. The two salient properties of the Dirac delta function are $\int \delta(x-x_0)\,dx = 1$ and $\int f(x)\,\delta(x-x_0)\,dx = f(x_0)$. With this point-mass distribution, we can compute the desired expression: \begin{equation} \begin{split} p(X|C_j) &= \iint p(X|\mu,\sigma^2)\,\delta(\mu-\mu_j)\,\delta(\sigma^2-\sigma_j^2)\,d\mu\,d\sigma^2 \\ &= p(X|\theta_j) \\ &= \prod_{i=1}^n \textsf{N}(x_i|\mu_j,\sigma_j^2) . \end{split} \end{equation}
Probability of a sample being drawn from a continuous distribution is zero; so how to choose a more
The purpose of this answer is simply to expand on the answer by @Tim. Suppose the likelihood of the parameters given the sample can be expressed as \begin{equation} p(X|\theta) = \prod_{i=1}^n \texts
Probability of a sample being drawn from a continuous distribution is zero; so how to choose a more likely distribution? The purpose of this answer is simply to expand on the answer by @Tim. Suppose the likelihood of the parameters given the sample can be expressed as \begin{equation} p(X|\theta) = \prod_{i=1}^n \textsf{N}(x_i|\mu,\sigma^2) , \end{equation} where $X = (x_1, \ldots, x_n)$ is the sample and $\theta = (\mu,\sigma^2)$ are the parameters. Then in general the likelihood of model $j$ (i.e., class $j$) can be expressed as \begin{equation} p(X|C_j) = \int p(X|\theta)\,p(\theta|C_j)\,d\theta , \end{equation} where $p(\theta|C_j)$ is the distribution of $\theta$ given model $j$. This general approach can be specialized to the current case as follows. Let \begin{equation} p(\theta|C_j) = \delta(\mu-\mu_j)\,\delta(\sigma^2 - \sigma_j^2) , \end{equation} where $\delta(x)$ is the Dirac delta "function." In effect, this distribution puts a point mass at $\theta_j = (\mu_j,\sigma_j^2)$. The two salient properties of the Dirac delta function are $\int \delta(x-x_0)\,dx = 1$ and $\int f(x)\,\delta(x-x_0)\,dx = f(x_0)$. With this point-mass distribution, we can compute the desired expression: \begin{equation} \begin{split} p(X|C_j) &= \iint p(X|\mu,\sigma^2)\,\delta(\mu-\mu_j)\,\delta(\sigma^2-\sigma_j^2)\,d\mu\,d\sigma^2 \\ &= p(X|\theta_j) \\ &= \prod_{i=1}^n \textsf{N}(x_i|\mu_j,\sigma_j^2) . \end{split} \end{equation}
Probability of a sample being drawn from a continuous distribution is zero; so how to choose a more The purpose of this answer is simply to expand on the answer by @Tim. Suppose the likelihood of the parameters given the sample can be expressed as \begin{equation} p(X|\theta) = \prod_{i=1}^n \texts
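In practice the product of densities at the point-mass parameter values is best computed on the log scale. A minimal sketch with two hypothetical models $(\mu_j, \sigma_j)$ and a made-up sample:

```python
import numpy as np
from scipy.stats import norm

x = np.array([1.8, 2.5, 0.9])                   # sample X = (x_1, ..., x_n)

# Point-mass "priors": model j fixes theta_j = (mu_j, sigma_j) exactly, so
# p(X | C_j) collapses to the product of normal densities evaluated at theta_j.
theta = {"C1": (0.0, 1.0), "C2": (3.0, 2.0)}    # hypothetical (mu, sigma)

loglik = {c: norm.logpdf(x, loc=m, scale=s).sum() for c, (m, s) in theta.items()}
print(loglik)    # compare log p(X | C_1) with log p(X | C_2)
```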
54,194
Are regression coefficients in a model with interactions ALL made conditional, or just those involved in the interaction?
The interpretation is motivated by considering how the model predictions change when controlled, simple changes are induced in the original variables. Let's frame this a little abstractly because it doesn't make the situation any more complicated while revealing the essence of the matter. If we denote those variables by $u=(u_1, \ldots, u_m)$, say, then we may write the regressors--by which I mean the variables that actually are involved in the regression--as specified functions $f_1,f_2, \ldots, f_p$ of $u$. For example, $m=3$ numerical variables plus an interaction between the first two would produce $p=4$ regressors; namely, $$\eqalign{x_1 &= f_1(u) = u_1,\\ x_2 &= f_2(u) = u_2,\\ x_3&=f_3(u)=u_3, \text{ and}\\x_4 &= f_4(u)=u_1u_2\ (\text{the interaction}).\tag{*}}$$ The fitted model based on estimated parameters $b=(b_0, b_1, b_2, \ldots, b_p)$ (an "intercept" is hereby included as $b_0$) fits or "predicts" the response $y$ associated with regressor values $x_1, \ldots, x_p$ to be $$\hat y = b_0 + b_1 x_1 + b_2 x_2 + \cdots + b_p x_p.$$ One common way of interpreting this asks how $\hat y$ would change when, say, the original set of variables $u$ is changed by adding a fixed amount $\delta$ to just one variable $u_j$, becoming $u_j + \delta$, thereby creating a new set of values $u^\prime$. Those not appealing to the Calculus will compute the difference directly, as follows. Let $\hat y^\prime$ be the fitted value for regressors associated with $u^\prime$. Subtracting off $\hat y$ and organizing the result by the index $1, 2, \ldots, p$ exhibits the change in response as a linear combination of changes in the regressors: $$\hat y^\prime - \hat y = b_1(f_1(u^\prime)-f_1(u)) + \cdots + b_p(f_p(u^\prime)-f_p(u)).\tag{**}$$ The form of this expression highlights the (obvious) fact that changing $u_j$ affects only the terms for which $f_i(u)$ actually depends on $u_j$. (This is the one aspect of this analysis you might want to commit to memory.) In example $(*)$, for instance, if $j=3$ then only $x_3$ is changed when $u_3$ is changed, thereby becoming $$x_3^\prime - x_3 = f_3(u^\prime) - f_3(u) = u^\prime_3 - u_3 = (u_3 + \delta) - u_3 = \delta.$$ In all other cases $i\ne 3$, $x_i^\prime-x_i=f_i(u^\prime) - f_i(u)=0$: there is no change. Plugging these changes into $(**)$ simplifies it right down to $$\hat y^\prime - \hat y = b_3\,\delta.$$ Interpretation: "changing $u_3$ by $\delta$ (while fixing all the other $u_i$) changes the response $y$ by $b_3$ times $\delta$." Most readers of this site will appreciate that this is intended only as an English description of the foregoing mathematical relationships; in particular, it is not a causal claim. It says nothing about what will happen in the world to $y$ if somehow an observation could be altered to change $u_3$ to $u_3+\delta$ (if that is even possible). Note that $b_3$ does not depend on whatever values the $u_i$ might have: it is a "constant." This makes the interpretation particularly simple. Continuing the example, suppose instead that $u_1+\delta$ is used in the model instead of $u_1$. Now two of the $x_i$ in $(*)$ are affected: $x_1$ increases by $\delta$ while $x_4$ increases by $\delta u_2$. Consequently $(**)$ yields $$\hat y^\prime - \hat y = b_1\delta + b_4 \delta u_2 = (b_1 + b_4 u_2)\delta.$$ Interpretation: "changing $u_1$ by $\delta$ (while fixing all the other $u_i$) changes the response $y$ by $b_1 + b_4 u_2$ times $\delta$." 
There is the interaction: the change in predicted value depends on the values of the underlying variable $u_2$. Notice that $u_3$ is not involved. The answer to the original question should now be clear from the very forms of $(*)$ and $(**)$. This method of analysis applies not only to interactions, but--by virtue of the abstract specification of the functions $f_j$--for all other models that combine the $u_i$ in any manner whatsoever. (This includes polynomial models, higher-order interactions, and other nonlinear models that might involve exponential growth, sinusoidal variation, and more.) In particular, the interpretation of any interaction does not depend on any variables that are not directly involved in that interaction.
Are regression coefficients in a model with interactions ALL made conditional, or just those involve
The interpretation is motivated by considering how the model predictions change when controlled, simple changes are induced in the original variables. Let's frame this a little abstractly because it
Are regression coefficients in a model with interactions ALL made conditional, or just those involved in the interaction? The interpretation is motivated by considering how the model predictions change when controlled, simple changes are induced in the original variables. Let's frame this a little abstractly because it doesn't make the situation any more complicated while revealing the essence of the matter. If we denote those variables by $u=(u_1, \ldots, u_m)$, say, then we may write the regressors--by which I mean the variables that actually are involved in the regression--as specified functions $f_1,f_2, \ldots, f_p$ of $u$. For example, $m=3$ numerical variables plus an interaction between the first two would produce $p=4$ regressors; namely, $$\eqalign{x_1 &= f_1(u) = u_1,\\ x_2 &= f_2(u) = u_2,\\ x_3&=f_3(u)=u_3, \text{ and}\\x_4 &= f_4(u)=u_1u_2\ (\text{the interaction}).\tag{*}}$$ The fitted model based on estimated parameters $b=(b_0, b_1, b_2, \ldots, b_p)$ (an "intercept" is hereby included as $b_0$) fits or "predicts" the response $y$ associated with regressor values $x_1, \ldots, x_p$ to be $$\hat y = b_0 + b_1 x_1 + b_2 x_2 + \cdots + b_p x_p.$$ One common way of interpreting this asks how $\hat y$ would change when, say, the original set of variables $u$ is changed by adding a fixed amount $\delta$ to just one variable $u_j$, becoming $u_j + \delta$, thereby creating a new set of values $u^\prime$. Those not appealing to the Calculus will compute the difference directly, as follows. Let $\hat y^\prime$ be the fitted value for regressors associated with $u^\prime$. Subtracting off $\hat y$ and organizing the result by the index $1, 2, \ldots, p$ exhibits the change in response as a linear combination of changes in the regressors: $$\hat y^\prime - \hat y = b_1(f_1(u^\prime)-f_1(u)) + \cdots + b_p(f_p(u^\prime)-f_p(u)).\tag{**}$$ The form of this expression highlights the (obvious) fact that changing $u_j$ affects only the terms for which $f_i(u)$ actually depends on $u_j$. (This is the one aspect of this analysis you might want to commit to memory.) In example $(*)$, for instance, if $j=3$ then only $x_3$ is changed when $u_3$ is changed, thereby becoming $$x_3^\prime - x_3 = f_3(u^\prime) - f_3(u) = u^\prime_3 - u_3 = (u_3 + \delta) - u_3 = \delta.$$ In all other cases $i\ne 3$, $x_i^\prime-x_i=f_i(u^\prime) - f_i(u)=0$: there is no change. Plugging these changes into $(**)$ simplifies it right down to $$\hat y^\prime - \hat y = b_3\,\delta.$$ Interpretation: "changing $u_3$ by $\delta$ (while fixing all the other $u_i$) changes the response $y$ by $b_3$ times $\delta$." Most readers of this site will appreciate that this is intended only as an English description of the foregoing mathematical relationships; in particular, it is not a causal claim. It says nothing about what will happen in the world to $y$ if somehow an observation could be altered to change $u_3$ to $u_3+\delta$ (if that is even possible). Note that $b_3$ does not depend on whatever values the $u_i$ might have: it is a "constant." This makes the interpretation particularly simple. Continuing the example, suppose instead that $u_1+\delta$ is used in the model instead of $u_1$. Now two of the $x_i$ in $(*)$ are affected: $x_1$ increases by $\delta$ while $x_4$ increases by $\delta u_2$. 
Consequently $(**)$ yields $$\hat y^\prime - \hat y = b_1\delta + b_4 \delta u_2 = (b_1 + b_4 u_2)\delta.$$ Interpretation: "changing $u_1$ by $\delta$ (while fixing all the other $u_i$) changes the response $y$ by $b_1 + b_4 u_2$ times $\delta$." There is the interaction: the change in predicted value depends on the values of the underlying variable $u_2$. Notice that $u_3$ is not involved. The answer to the original question should now be clear from the very forms of $(*)$ and $(**)$. This method of analysis applies not only to interactions, but--by virtue of the abstract specification of the functions $f_j$--for all other models that combine the $u_i$ in any manner whatsoever. (This includes polynomial models, higher-order interactions, and other nonlinear models that might involve exponential growth, sinusoidal variation, and more.) In particular, the interpretation of any interaction does not depend on any variables that are not directly involved in that interaction.
Are regression coefficients in a model with interactions ALL made conditional, or just those involve The interpretation is motivated by considering how the model predictions change when controlled, simple changes are induced in the original variables. Let's frame this a little abstractly because it
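A numerical check of the two interpretations derived above, using simulated data (the coefficients, the point $u_0$ and $\delta$ are arbitrary choices). Bumping $u_1$ changes the prediction by $(b_1 + b_4 u_2)\delta$, while bumping $u_3$ changes it by $b_3\delta$:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500
u = rng.normal(size=(n, 3))                                   # u1, u2, u3
y = 1 + 2*u[:, 0] - u[:, 1] + 0.5*u[:, 2] + 1.5*u[:, 0]*u[:, 1] + rng.normal(size=n)

def regressors(u):
    # x1 = u1, x2 = u2, x3 = u3, x4 = u1*u2 (the interaction), plus an intercept
    return np.column_stack([np.ones(len(u)), u[:, 0], u[:, 1], u[:, 2], u[:, 0]*u[:, 1]])

b, *_ = np.linalg.lstsq(regressors(u), y, rcond=None)         # b0, b1, b2, b3, b4

u0 = np.array([[0.4, -1.2, 2.0]])                             # one point (u1, u2, u3)
delta = 0.1

bump1 = u0.copy(); bump1[0, 0] += delta                       # change u1 by delta
bump3 = u0.copy(); bump3[0, 2] += delta                       # change u3 by delta

yhat = regressors(u0) @ b
print((regressors(bump1) @ b - yhat)[0], (b[1] + b[4]*u0[0, 1]) * delta)  # equal
print((regressors(bump3) @ b - yhat)[0], b[3] * delta)                    # equal
```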
54,195
Are regression coefficients in a model with interactions ALL made conditional, or just those involved in the interaction?
Using your example - including an interaction term won't affect the interpretation of the other 8 coefficients, but it may change the coefficient itself. For instance, if you have a model y ~ a*age + b*gender + c*age:gender + d*sex + ... + j*race, the interpretation of sex would be: "holding age, gender, ... , race constant, increasing sex by one unit (i.e. moving from female to male) increases y by d." This interpretation holds regardless of whether there's an interaction between age and gender. Though the value of d may be different depending on whether an interaction term is in the model or not.
Are regression coefficients in a model with interactions ALL made conditional, or just those involve
Using your example - including an interaction term won't affect the interpretation of the other 8 coefficients, but it may change the coefficient itself. For instance, if you have a model y ~ a*age +
Are regression coefficients in a model with interactions ALL made conditional, or just those involved in the interaction? Using your example - including an interaction term won't affect the interpretation of the other 8 coefficients, but it may change the coefficient itself. For instance, if you have a model y ~ a*age + b*gender + c*age:gender + d*sex + ... + j*race, the interpretation of sex would be: "holding age, gender, ... , race constant, increasing sex by one unit (i.e. moving from female to male) increases y by d." This interpretation holds regardless of whether there's an interaction between age and gender. Though the value of d may be different depending on whether an interaction term is in the model or not.
Are regression coefficients in a model with interactions ALL made conditional, or just those involve Using your example - including an interaction term won't affect the interpretation of the other 8 coefficients, but it may change the coefficient itself. For instance, if you have a model y ~ a*age +
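A quick illustration with simulated data (hypothetical variable names and coefficients; statsmodels' formula interface is used only for convenience): the ceteris-paribus reading of the sex coefficient is the same in both fits, though its estimate can shift slightly once the age:gender interaction is added.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 1000
df = pd.DataFrame({
    "age": rng.uniform(20, 70, n),
    "gender": rng.integers(0, 2, n),   # coded 0/1 for simplicity
    "sex": rng.integers(0, 2, n),      # a separate 0/1 covariate, as in the example
})
df["y"] = (0.3*df.age + 2*df.gender + 1.5*df.sex
           + 0.05*df.age*df.gender + rng.normal(size=n))

m_no_int = smf.ols("y ~ age + gender + sex", data=df).fit()
m_int    = smf.ols("y ~ age*gender + sex", data=df).fit()   # adds age:gender

# The 'sex' coefficient keeps the same holding-others-constant reading in both
# models, though its numerical value can differ between the two fits.
print(m_no_int.params["sex"], m_int.params["sex"])
```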
54,196
Standard error of the estimate in logistic regression
Let's just say we have one parameter $\theta$ and univariate data $x_1, \ldots, x_n$. The likelihood estimates are obtained by solving the score equations: $$ \sum_i l'(\hat\theta,x_i) = 0 $$ where $l(\theta,x_i)$ is the log-likelihood associated with the $i$-th observation, evaluated at parameter value $\theta$. Near the true value $\theta_0$, we can have a Taylor expansion of those scores: $$ \sum_i \bigl[ l'(\hat\theta,x_i)-l'(\theta_0,x_i) \bigr] = - \sum_i l'(\theta_0,x_i) = \sum_i l''(\theta_0,x_i)(\hat\theta-\theta_0) + o(|\hat\theta-\theta_0|) $$ where the first equality is due to the definition of $\hat\theta$ in the first step. Asymptotics means we are ignoring the small term $o(|\hat\theta-\theta_0|)$. Asymptotics means we are approximating $\sum_i l''(\theta_0,x_i)$ with what we know, $\sum_i l''(\hat\theta,x_i)$, assuming that $l''(\theta,x_i)$ is a sufficiently smooth function of both $\theta$ and $x$ and does not bounce around unpredictably. Or with $\mathbb{E} \, l''(\hat\theta,x)$ by plugging in $x$ and integrating over its distribution. Asymptotics means that the most interesting remaining term $\sum_i l'(\theta_0,x_i)$ is a sum of i.i.d. random variables, and hence asymptotically normal. It has a mean of zero and some sort of variance that smart books derive to be Fisher information. The proper scaling, according to the CLT, would then be $\frac{1}{\sqrt{n}} \sum_i l'(\theta_0,x_i) \to N(0,\omega^2)$ for some $\omega$. Our interest is actually in $\hat\theta-\theta_0$. Let's express it out of step 2, with these approximations in mind: $$ \hat\theta-\theta_0 \approx - \sum_i l'(\theta_0,x_i) \Bigl/ \sum_i l''(\theta_0,x_i) $$ The numerator is asymptotically normal with mean 0 and known (sort of) variance. The denominator is a non-zero quantity, and in large samples is supposed to be a reasonably stable thing (see above about bouncing around). We thus conclude that $\sqrt{n} (\hat\theta-\theta_0) \to N(0,\sigma^2)$ where $\sigma^2$ is a function of the asymptotic variance of the scores and something like $\mathbb{E} \, l''(\theta_0,x)$. Turns out they cancel each other when the model is true (and if not, you get the sandwich variance estimator instead). And that gives you the Wald test, more or less. In the multivariate case, you need to track vectors and matrices and multiplication on the left and on the right, but that's the gist of it.
Standard error of the estimate in logistic regression
Let's just say we have one parameter $\theta$ and univariate data $x_1, \ldots, x_n$. The likelihood estimates are obtained by solving the score equations: $$ \sum_i l'(\hat\theta,x_i) = 0 $$ where $
Standard error of the estimate in logistic regression Let's just say we have one parameter $\theta$ and univariate data $x_1, \ldots, x_n$. The likelihood estimates are obtained by solving the score equations: $$ \sum_i l'(\hat\theta,x_i) = 0 $$ where $l(\theta,x_i)$ is the log-likelihood associated with the $i$-th observation, evaluated at parameter value $\theta$. Near the true value $\theta_0$, we can have a Taylor expansion of those scores: $$ \sum_i \bigl[ l'(\hat\theta,x_i)-l'(\theta_0,x_i) \bigr] = - \sum_i l'(\theta_0,x_i) = \sum_i l''(\theta_0,x_i)(\hat\theta-\theta_0) + o(|\hat\theta-\theta_0|) $$ where the first equality is due to the definition of $\hat\theta$ in the first step. Asymptotics means we are ignoring the small term $o(|\hat\theta-\theta_0|)$. Asymptotics means we are approximating $\sum_i l''(\theta_0,x_i)$ with what we know, $\sum_i l''(\hat\theta,x_i)$, assuming that $l''(\theta,x_i)$ is a sufficiently smooth function of both $\theta$ and $x$ and does not bounce around unpredictably. Or with $\mathbb{E} \, l''(\hat\theta,x)$ by plugging in $x$ and integrating over its distribution. Asymptotics means that the most interesting remaining term $\sum_i l'(\theta_0,x_i)$ is a sum of i.i.d. random variables, and hence asymptotically normal. It has a mean of zero and some sort of variance that smart books derive to be Fisher information. The proper scaling, according to the CLT, would then be $\frac{1}{\sqrt{n}} \sum_i l'(\theta_0,x_i) \to N(0,\omega^2)$ for some $\omega$. Our interest is actually in $\hat\theta-\theta_0$. Let's express it out of step 2, with these approximations in mind: $$ \hat\theta-\theta_0 \approx - \sum_i l'(\theta_0,x_i) \Bigl/ \sum_i l''(\theta_0,x_i) $$ The numerator is asymptotically normal with mean 0 and known (sort of) variance. The denominator is a non-zero quantity, and in large samples is supposed to be a reasonably stable thing (see above about bouncing around). We thus conclude that $\sqrt{n} (\hat\theta-\theta_0) \to N(0,\sigma^2)$ where $\sigma^2$ is a function of the asymptotic variance of the scores and something like $\mathbb{E} \, l''(\theta_0,x)$. Turns out they cancel each other when the model is true (and if not, you get the sandwich variance estimator instead). And that gives you the Wald test, more or less. In the multivariate case, you need to track vectors and matrices and multiplication on the left and on the right, but that's the gist of it.
Standard error of the estimate in logistic regression Let's just say we have one parameter $\theta$ and univariate data $x_1, \ldots, x_n$. The likelihood estimates are obtained by solving the score equations: $$ \sum_i l'(\hat\theta,x_i) = 0 $$ where $
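The one-parameter story can be checked numerically. A minimal sketch (my own choice of model, an exponential with rate $\lambda$, where the MLE and the Fisher information are available in closed form): the spread of the MLE across simulated samples matches the asymptotic standard error $1/\sqrt{n\,I(\lambda)} = \lambda/\sqrt{n}$.

```python
import numpy as np

rng = np.random.default_rng(5)
lam, n, reps = 2.0, 500, 5000

# Exponential(rate=lam): the MLE is lam_hat = 1 / xbar, and the per-observation
# Fisher information is 1 / lam^2, so the asymptotic s.e. is lam / sqrt(n).
x = rng.exponential(scale=1/lam, size=(reps, n))
lam_hat = 1.0 / x.mean(axis=1)

print("empirical sd of MLE:", lam_hat.std())
print("asymptotic s.e.:    ", lam / np.sqrt(n))   # the two should be very close
```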
54,197
Predicting sequence of integers / binary values
One approach you could consider is trying to learn a Markov Chain (MC) to represent each sequence and then predict future values based on this MC. MCs are a way of representing types of learning automata (LA) and can be used when the subsequent state of a system depends solely on the current state. They can be intuitively represented with a state diagram. Consider a very simple LA with two states: one where the last number seen was a 1 and one where the last number seen was a 0. There are transition probabilities between the different states as well. For example, when the LA is in state 0 it will stay in state 0 with probability $x$ and will move to state 1 with probability $1-x$. This can also be shown in the form of a matrix: $\begin{bmatrix}x & 1-x \\ 1-y & y\end{bmatrix}$ Estimating from your example sequence, $1 1 1 1 0 0 1 1 1 1 1 0 0 0$, we might say that in this case $x = 0.6$ and $y = 0.77$. This kind of solution can also be extended; we could learn an LA with more states and more "memory", for example one with transition matrix $\begin{bmatrix}w & 0 & 1-w & 0 \\ x & 0 & 1-x & 0 \\ 0 & 1-y & 0 & y \\ 0 & 1-z & 0 & z\end{bmatrix}$ This LA has four states: 00, where two or more consecutive 0s have been seen; 0, where only one consecutive 0 has been seen; 1, where only one consecutive 1 has been seen; and 11, where two or more consecutive 1s have been seen. We can again estimate the corresponding probabilities from your example sequence and might say that $w = 0.33$, $x = 1$, $y = 1$ and $z = 0.71$.
Predicting sequence of integers / binary values
One approach you could consider is trying to learn a Markov Chain (MC) to represent each sequence and then predict future values based on this MC. MCs are a way of representing types of learning autom
Predicting sequence of integers / binary values One approach you could consider is trying to learn a Markov Chain (MC) to represent each sequence and then predict future values based on this MC. MCs are a way of representing types of learning automata (LA) and can be used when the subsequent state of a system depends solely on the current state. They can be intuitively represented with a state diagram. Consider a very simple LA with two states: one where the last number seen was a 1 and one where the last number seen was a 0. There are transition probabilities between the different states as well. For example, when the LA is in state 0 it will stay in state 0 with probability $x$ and will move to state 1 with probability $1-x$. This can also be shown in the form of a matrix: $\begin{bmatrix}x & 1-x \\ 1-y & y\end{bmatrix}$ Estimating from your example sequence, $1 1 1 1 0 0 1 1 1 1 1 0 0 0$, we might say that in this case $x = 0.6$ and $y = 0.77$. This kind of solution can also be extended; we could learn an LA with more states and more "memory", for example one with transition matrix $\begin{bmatrix}w & 0 & 1-w & 0 \\ x & 0 & 1-x & 0 \\ 0 & 1-y & 0 & y \\ 0 & 1-z & 0 & z\end{bmatrix}$ This LA has four states: 00, where two or more consecutive 0s have been seen; 0, where only one consecutive 0 has been seen; 1, where only one consecutive 1 has been seen; and 11, where two or more consecutive 1s have been seen. We can again estimate the corresponding probabilities from your example sequence and might say that $w = 0.33$, $x = 1$, $y = 1$ and $z = 0.71$.
Predicting sequence of integers / binary values One approach you could consider is trying to learn a Markov Chain (MC) to represent each sequence and then predict future values based on this MC. MCs are a way of representing types of learning autom
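A short sketch estimating the two-state transition matrix directly from the example sequence and using it for a one-step-ahead prediction. (Counting only observed transitions gives 0.75 rather than 0.6 for staying in state 0; the figures quoted above evidently handle the final symbol slightly differently, so treat the exact values as estimates.)

```python
import numpy as np

seq = [1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0]

# Count transitions between consecutive symbols
counts = np.zeros((2, 2))
for prev, cur in zip(seq[:-1], seq[1:]):
    counts[prev, cur] += 1

# Row-normalise to get the estimated transition matrix P[i, j] = P(next=j | current=i)
P = counts / counts.sum(axis=1, keepdims=True)
print(P)

# One-step-ahead prediction: the most likely successor of the last symbol
last = seq[-1]
print("predicted next value:", int(P[last].argmax()))
```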
54,198
Predicting sequence of integers / binary values
It may be worth trying a neural network for classification; specifically, an LSTM does quite what you would like to achieve. It could be used as follows: LSTMs need input sequences to be the same length. This could be solved by padding the data by adding leading characters. The padding character should not be 0 or 1. Another solution is to use batch size 1 without resetting the state after each batch. Once padded, the data should be encoded using the one-hot method. You can use 3 categories: 0, 1 and the padding character. The last binary value of each sequence should be used as a target, the rest as input. Of course the padding character will never be the target because we padded on the front. Stacking 2 LSTM layers will do a better job. Your network could look like: Input->LSTM->LSTM->Dense->Dense->Output The more data you have, the better it will learn the patterns. Such a network will learn very easily that the padding character is never the output, so don't worry about it.
Predicting sequence of integers / binary values
I may worth to try a neural network for classification, specifically an LSTM is doing quite what you would like to achieve. It could be used as follows: LSTM need input sequences to be the same lengt
Predicting sequence of integers / binary values It may be worth trying a neural network for classification; specifically, an LSTM does quite what you would like to achieve. It could be used as follows: LSTMs need input sequences to be the same length. This could be solved by padding the data by adding leading characters. The padding character should not be 0 or 1. Another solution is to use batch size 1 without resetting the state after each batch. Once padded, the data should be encoded using the one-hot method. You can use 3 categories: 0, 1 and the padding character. The last binary value of each sequence should be used as a target, the rest as input. Of course the padding character will never be the target because we padded on the front. Stacking 2 LSTM layers will do a better job. Your network could look like: Input->LSTM->LSTM->Dense->Dense->Output The more data you have, the better it will learn the patterns. Such a network will learn very easily that the padding character is never the output, so don't worry about it.
Predicting sequence of integers / binary values I may worth to try a neural network for classification, specifically an LSTM is doing quite what you would like to achieve. It could be used as follows: LSTM need input sequences to be the same lengt
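A minimal Keras sketch of the architecture described above, with made-up toy sequences (the layer sizes, number of epochs and the choice of a sigmoid output for $P(\text{next}=1)$ are my own; adapt to your data):

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

# Toy data: a list of variable-length 0/1 sequences (replace with your own)
seqs = [[1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0],
        [0, 1, 0, 1, 1, 0],
        [1, 0, 0, 1, 1, 1, 0, 1]]

PAD, max_len = 2, max(len(s) for s in seqs)        # 2 is the padding "character"

def encode(s):
    s = [PAD] * (max_len - len(s)) + list(s)       # pad at the front
    return np.eye(3)[s]                            # one-hot over {0, 1, PAD}

X = np.stack([encode(s[:-1]) for s in seqs])          # all but the last value
y = np.array([s[-1] for s in seqs], dtype="float32")  # the last value is the target

model = Sequential([
    LSTM(32, return_sequences=True, input_shape=(max_len, 3)),
    LSTM(16),
    Dense(8, activation="relu"),
    Dense(1, activation="sigmoid"),                # P(next value = 1)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=10, verbose=0)
print(model.predict(X, verbose=0).ravel())
```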
54,199
Predicting sequence of integers / binary values
One way to do it is through classification. You need a binary output, which is exactly what classification algorithms can provide. You can construct a data-set out of this time series. Assume you have $n$ values right now. Further assume that you think the value at position $n+1$ can be determined based on the $t$ last values, between $n$ and $n-t+1$. Like this, you have $n-t+1$ observations (all except the first few in the time-series because there would not be enough lag observations for them) with $t$ binary variables each (all ordered in the same way, starting from the then current value towards the lag values). You can construct all sorts of classifiers based on this data-set, compare their performance and choose the best one to predict. $t$ is a parameter to be tuned. This will always be a high-risk forecast though, since you basically want to forecast only one number, which can only take two values. By definition, this is going to be all or nothing. If the costs associated with falsely predicting a 0 are different from the costs of falsely predicting a 1, you can reflect that and bias your algorithm towards predicting more readily the value that would be more expensive to miss.
Predicting sequence of integers / binary values
One way to do it is through classification. You need a binary output, which is exactly what classification algorithms can provide. You can construct a data-set out of this time series. Assume you have
Predicting sequence of integers / binary values One way to do it is through classification. You need a binary output, which is exactly what classification algorithms can provide. You can construct a data-set out of this time series. Assume you have $n$ values right now. Further assume that you think the value at position $n+1$ can be determined based on the $t$ last values, between $n$ and $n-t+1$. Like this, you have $n-t+1$ observations (all except the first few in the time-series because there would not be enough lag observations for them) with $t$ binary variables each (all ordered in the same way, starting from the then current value towards the lag values). You can construct all sorts of classifiers based on this data-set, compare their performance and choose the best one to predict. $t$ is a parameter to be tuned. This will always be a high-risk forecast though, since you basically want to forecast only one number, which can only take two values. By definition, this is going to be all or nothing. If the costs associated with falsely predicting a 0 are different from the costs of falsely predicting a 1, you can reflect that and bias your algorithm towards predicting more readily the value that would be more expensive to miss.
Predicting sequence of integers / binary values One way to do it is through classification. You need a binary output, which is exactly what classification algorithms can provide. You can construct a data-set out of this time series. Assume you have
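A minimal sketch of this lag-feature construction with scikit-learn (the example sequence and $t = 3$ are illustrative; in practice $t$ and the classifier would be tuned, e.g. by cross-validation):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

seq = np.array([1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0])   # your 0/1 series
t = 3                                                         # number of lags (tune this)

# Each row holds the t previous values; the target is the value that follows them
X = np.array([seq[i:i + t] for i in range(len(seq) - t)])
y = seq[t:]

clf = LogisticRegression().fit(X, y)

# Predict the next value of the series from its last t observations
next_probs = clf.predict_proba(seq[-t:].reshape(1, -1))[0]
print("P(next = 0), P(next = 1):", next_probs)
```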
54,200
Predicting sequence of integers / binary values
Using a Neural Network (NN) would be a solution. NNs train on an input dataset against a target dataset, and then make a forecast on that particular dataset. NNs are very good at processing binary data (some of them generalize the input data for better processing). There is no need to know what the binary data represent or how they are translated back (using non-generalized data leads to overwhelmingly complex models, overkill for a beginner). Some open-source statistical software packages for data mining feature learning NNs, like Orange/Quasar, JASP and KNIME to name a few, to start on. In general, the more data, the better the outcome. There are more heavy-weight software packages that feature learning NNs, but there is a huge learning curve, not only for starting to use such software packages but also for understanding what in particular all those tools are, what they are for, how and when to use them, and how they perform, especially for a beginner. The easiest way to start on that in MS Excel is to use XLSTATA, NeuralSolution or PALISADE as MS Excel add-ins, to name a few, but they are all commercial; some offer a full trial. In Excel such add-ins offer easier "control" over the steps performed by those tools, like SVM, k-NN and logistic regression, to start on.
Predicting sequence of integers / binary values
Using a Neural Network (NN) would be a solution. NN learn/train on dataset to target dataset, and then make a forecast on that particular database set. NN are very good at processing binary data (some
Predicting sequence of integers / binary values Using a Neural Network (NN) would be a solution. NNs train on an input dataset against a target dataset, and then make a forecast on that particular dataset. NNs are very good at processing binary data (some of them generalize the input data for better processing). There is no need to know what the binary data represent or how they are translated back (using non-generalized data leads to overwhelmingly complex models, overkill for a beginner). Some open-source statistical software packages for data mining feature learning NNs, like Orange/Quasar, JASP and KNIME to name a few, to start on. In general, the more data, the better the outcome. There are more heavy-weight software packages that feature learning NNs, but there is a huge learning curve, not only for starting to use such software packages but also for understanding what in particular all those tools are, what they are for, how and when to use them, and how they perform, especially for a beginner. The easiest way to start on that in MS Excel is to use XLSTATA, NeuralSolution or PALISADE as MS Excel add-ins, to name a few, but they are all commercial; some offer a full trial. In Excel such add-ins offer easier "control" over the steps performed by those tools, like SVM, k-NN and logistic regression, to start on.
Predicting sequence of integers / binary values Using a Neural Network (NN) would be a solution. NN learn/train on dataset to target dataset, and then make a forecast on that particular database set. NN are very good at processing binary data (some