idx | question | answer
---|---|---
55,001 | Help interpreting "poisson process" calculations from a paper | Yes, a moving average with large $\lambda$ will be close to Normally distributed. No, this calculation is not legitimate.
Because the cumulant generating function (cgf) of a Poisson distribution of intensity $\lambda$ is $$\psi_\lambda(t) = \lambda(e^{it}-1) = i(t\lambda) + \frac{1}{2!}(it\sqrt{\lambda})^2+\lambda^{-1/2}\frac{1}{3!}(it\sqrt{\lambda})^3 + \cdots$$
and the cgf of a Normal distribution with mean $\lambda$ and variance $\lambda$ is $$\phi_\lambda(t) = i(t\lambda) + \frac{1}{2!}(it\sqrt{\lambda})^2,$$
the terms
$$\lambda^{-1/2}\frac{1}{3!}(it\sqrt{\lambda})^3 + \cdots$$
express their difference. The factor of $\lambda^{-1/2}$ multiplying them indicates how rapidly the Poisson approaches the Normal as $\lambda$ increases. Look at the two distributions for $\lambda=260$:
The heights of the bars are the Poisson probabilities while the red curve gives the corresponding Normal densities (per unit interval). They are almost indistinguishable.
The calculation in question is described thus:
the broken grey lines show the likely range of random statistical variation around the 5-year moving average ... if the number of deaths can be represented as the result of a Poisson process, for which the underlying rate at which the events (deaths) occur is given by the 5-year moving average, then random year to year variation would result in only about one year in 20 having a figure outwith [sic] this range (which is a ‘95 per cent confidence interval’, calculated thus: the underlying rate of occurrence plus or minus 1.96 times its standard deviation;...
This is erroneous because it fails to account for variation in the average itself. To analyze that, let $X_1, X_2, \ldots, X_5$ be the counts in the five (non-overlapping) years and suppose they are independent (as they would be in a Poisson process). Then the "random year to year variation ... around the 5-year moving average" is the variance of $X_i - \bar X$ where $\bar X=(X_1+\cdots+X_5)/5$ is the five-year average. Due to independence, all five of these variances are the same, equal to the first, which can be computed as
$$\operatorname{Var}(X_1 - \bar X) = \operatorname{Var}\left(\frac{4}{5}X_1 - \frac{1}{5}X_2 - \cdots - \frac{1}{5}X_5\right) \\= \left(\frac{4}{5}\right)^2\lambda + \left(-\frac{1}{5}\right)^2\lambda + \cdots + \left(-\frac{1}{5}\right)^2\lambda = \frac{4}{5}\lambda.$$
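(A quick check of that coefficient in base R, no data needed: for independent Poisson counts the variance of such a contrast is $\lambda$ times the sum of the squared weights.)
sum(c(4/5, rep(-1/5, 4))^2)   # 0.8 = 4/5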
Consequently the correct interval (it's not a confidence interval) for 95% of these variations would have a half-width of $$1.96 \sqrt{4\lambda/5}\approx 0.89(1.96\sqrt{\lambda}),$$ over 10% shorter than claimed. The discrepancy can be appreciated in this simulation of 100,000 five-year averages (giving 500,000 such deviations):
The gray bars are a histogram of annual deviations from the five-year means. The red curve uses the paper's formula. The blue curve uses the adjusted variance (multiplied by $4/5$). It is clear which is correct.
Note that neither of these calculations is truly appropriate in the application: the interval does not account for the fact that $\lambda$ is estimated (uncertainly) from the data. Accounting for this ought to widen the interval. As a result, the values computed in the paper will come out close to correct--but only because of this accidental near-cancellation of two separate errors!
So that you can check, here is the code (in R) used for the simulation.
lambda <- 260 # Annual rate
n <- 5 # Years in window
n.sim <- 1e5 # Windows to simulate
x <- matrix(rpois(n.sim*n, lambda), n.sim) # Annual values
x <- x - rowMeans(x) # Deviations from means
hist(x, breaks=seq(floor(min(x))-1/2, ceiling(max(x))+1/2), freq=FALSE,
col="White", border="#c0c0c0",
main="Simulated five-year deviations")
curve(dnorm(x, 0, sqrt(lambda)), col="Red", lwd=2, add=TRUE)        # As in the paper
curve(dnorm(x, 0, sqrt(lambda * 4/5)), col="Blue", lwd=2, add=TRUE) # Corrected
The following material has been added in response to comments.
By focusing on fluctuations in this short time series, the paper makes much of very little. Clearly there is a trend. A simple reasonable way to explore a trend is to fit a growth model, such as regressing the log counts against time. Here is a summary.
Residuals:
Min 1Q Median 3Q Max
-0.153579 -0.060968 0.008444 0.046664 0.213764
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -1.071e+02 7.593e+00 -14.10 3.60e-11 ***
x 5.638e-02 3.786e-03 14.89 1.45e-11 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 0.09763 on 18 degrees of freedom
This regression estimates a 5.6% annual trend ($p \lt 10^{-10}$: it's significant). All data fluctuate around the fit but remain within 25% of it. The typical amount of fluctuation is 10%.
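For reference, here is a minimal R sketch of how such a summary might be produced; the counts below are simulated stand-ins generated from the reported coefficients, not the report's actual data.
set.seed(2015)
year   <- 1996:2015
deaths <- round(exp(-1.071e2 + 5.638e-2 * year + rnorm(20, sd = 0.1)))  # hypothetical counts
fit    <- lm(log(deaths) ~ year)
summary(fit)                  # output of the same form as shown above
exp(coef(fit)["year"]) - 1    # approximate annual growth rate, about 5-6%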
Specialized models are not required. The large counts indicate a generalized linear model is unnecessary. There is little evidence of serial correlation, suggesting a time series model would add nothing to the analysis. To identify outliers and temporary fluctuations, one may supplement this simple regression with a robust smooth, such as Loess (shown here), IRLS, or possibly a GAM. Loess is used below; there are no outliers among its residuals. The models that allow for detailed temporary fluctuations tend to fit the data more closely and (accordingly) be more optimistic about the random fluctuations: that is, they probably overfit the data and thereby underestimate how much of the fluctuation should be viewed as random. Thus, the simpler regression model should be preferred provided it fits the data adequately.
The hollow dots show the data. The dashed black line is a Loess smooth (using the R loess function with a span of 1/2). The solid red line is the regression of log count against the year. The solid gray lines above and below it are a symmetric 95% prediction interval. Indeed, 95% of the data points lie within those lines and the remaining one is right on one of the lines. This implies none of the data points should be viewed as unusual in the context of this fit.
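Continuing the sketch above (still with the simulated stand-in data), the smooth, trend, and prediction band described here could be drawn like this:
fit_loess <- loess(log(deaths) ~ year, span = 1/2)
pred <- predict(fit, newdata = data.frame(year = year), interval = "prediction")  # symmetric 95% band
plot(year, log(deaths))
lines(year, predict(fit_loess), lty = 2)           # dashed Loess smooth
lines(year, pred[, "fit"], col = "Red", lwd = 2)   # log-linear trend
lines(year, pred[, "lwr"], col = "Gray")
lines(year, pred[, "upr"], col = "Gray")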
The usual diagnostic plots of residuals (not illustrated) show nothing is amiss; in particular, there are no outliers.
Consequently, a good description of these data is that
Drug-related deaths have been growing at 5.6% per year between 1996 and 2015, with no exceptional years to note.
This sentence replaces--and improves upon--more than a page of dense text in the report, spanning sections 3.1.2 and 3.1.3.
55,002 | Standard practice for dealing with U flagged chemistry data | Entire books have been written about this, especially Dennis Helsel's Nondetects and Data Analysis (Wiley-Interscience, 2005). Helsel has a NADA package for R, too. I will therefore confine this answer to the most important things I would explain to anyone beginning to analyze environmental data.
There are more than 20 different definitions of a "detection limit." Make sure you understand the meaning of the limits that have been given to you. Usually they should be considered reporting limits: they are values chosen by the laboratory to limit their liability. They are usually not LoDs, MDLs, PQLs, etc. (which have some relationship to the measurement process). So as not to imply anything unintended, I will call the numbers associated with the U-values "RLs."
If by "standard practice" you mean "what everybody does (whether they understand the issues or not," then the answer is to use either 0, RL/2, or RL.
If you mean "reasonable practice," then please understand that what everybody does has been thoroughly and persistently criticized in the literature for over 30 years. Sometimes you can get away with these simple substitution methods, especially when the results of your analysis wind up not depending on how you impute the values. But in most cases you cannot. The basic problem is that any fixed imputation method, such as RL/2, collapses a quantity that truly is varying into a quantity that does not vary: that can completely screw up all estimates of variation and at that point there's hardly any use in performing any kind of statistical procedure apart from summarizing the data.
Helsel advocates applying nonparametric survival methods. Just negate all the values and pretend they behave like survival times. (It's a clever approach and sometimes works, but it does make fairly strong underlying assumptions about the data and in my experience they don't seem to hold.)
A class of maximum likelihood-based techniques works pretty well when there's enough data. I have been adapting these to regression models and more recently to time series models with some success. The challenge lies in making inferences about correlations among data that have a large proportion of nondetects. A simple implementation (which does not allow for variable RLs) is available in the censReg package for R.
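If it helps to see the shape of such a call, here is a hedged sketch on simulated values (the variable names, the single reporting limit, and the data are all hypothetical; as noted, censReg assumes one fixed censoring point):
library(censReg)
set.seed(3)
x    <- rnorm(100)
conc <- exp(0.5 * x + rnorm(100, sd = 0.7))                  # hypothetical concentrations
RL   <- 0.5                                                  # single reporting limit
d    <- data.frame(x = x, logC = pmax(log(conc), log(RL)))   # nondetects recorded at the limit
fit  <- censReg(logC ~ x, left = log(RL), data = d)          # left-censored regression on the log scale
summary(fit)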
You're probably best off spending your time developing appropriate graphical methods to display these data. In scatterplots, for instance, use distinct symbols for the four possible combinations of data: both quantified, first ND, second ND, both ND. Plot them at the values given by the RLs so you can see the reporting limits. That gives you the best chance of discovering the parts of the data that will be sensitive to how you treat the nondetects.
Learn about the nonparametric methods available for computing upper tolerance limits and upper prediction limits. The beauty of these methods is that often you don't need to impute values to the NDs at all. They are described extensively in the US EPA's Statistical Analysis of Groundwater Monitoring Data at RCRA Facilities. Arguably, this enormous document (nearly 1000 pages including its appendices) embodies "standard practice" across the US and--because it's widely emulated in other countries--throughout the world.
Finally, you might as well know the US EPA offers some software that deals with NDs. It's called Pro-UCL. Because it is favored by this regulatory agency, its use is rapidly becoming "standard practice" among consultants--especially among those who have no understanding of statistics. It offers a smorgasbord of procedures for any dataset (ranging from good through horribly bad), enabling any user to pick and choose the statistical results they want. (No comment.) Using it will be labor-intensive--it's basically a big spreadsheet. If you really know what you're doing, there's some value in it; and if you have to submit your results to a US federal or state agency, you might be compelled to use it regardless.
55,003 | Standard practice for dealing with U flagged chemistry data | We see this phenomenon in HIV modeling, where CD4 and viral load are frequently below the limits of detection even though a participant is a carrier for the disease. The methods you describe are useful approaches that, while biased, are easy to describe. Let me suggest that another approach you might take is to calculate means among the complete cases (i.e., cases whose values did not fall below the LLD) and to present alongside them the proportion/count of cases that did fall below it.
You suggest many forms of single imputation, which is known not to work. With imputation, a missing value is filled in with a "best guess" of what it might be; the result is that you tend to underestimate standard errors. To solve this, an approach you might take is a parametric modeling approach, where you assume a distribution, such as the log-normal, for the concentration values and use an EM algorithm to simultaneously estimate the shape of their distribution and the range of values the LLD observations could have taken. In doing this, you can appropriately account for the error associated with the unknown LLD values and obtain unbiased estimates of the mean and standard deviation. This would best be presented alongside my earlier suggestion as two approaches to the problem.
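To make the likelihood idea concrete, here is a hedged sketch: a direct maximum-likelihood fit of a log-normal to left-censored data. It targets the same quantities as the EM approach described above but is not a full EM implementation, and all values are simulated.
set.seed(1)
true <- rlnorm(200, meanlog = 1, sdlog = 0.8)   # hypothetical true concentrations
lld  <- 1.5                                     # assumed lower limit of detection
cens <- true < lld                              # nondetect indicator
y    <- ifelse(cens, lld, true)                 # nondetects are only known to be below lld
negloglik <- function(par) {
  mu <- par[1]; sigma <- exp(par[2])            # log-parameterize sigma to keep it positive
  ll_obs  <- dlnorm(y[!cens], mu, sigma, log = TRUE)   # detected values contribute the density
  ll_cens <- plnorm(lld, mu, sigma, log.p = TRUE)      # each nondetect contributes log P(Y < lld)
  -(sum(ll_obs) + sum(cens) * ll_cens)
}
fit <- optim(c(0, 0), negloglik)
c(meanlog = fit$par[1], sdlog = exp(fit$par[2]))       # compare with the true 1 and 0.8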
Bayesian estimation software like WinBUGS or R is adept at performing this kind of inference, but I emphasize that the approach I describe is not actually Bayesian. The EM algorithm is simply a maximum likelihood approach that uses parametric assumptions about the data to fill in missing values.
55,004 | Calculating probability that a probability of heads lies in a given interval | If probability of heads is known
If the probability of heads is known, then you are not asking about a confidence interval (check here and here for definition and further details), but about distribution quantiles. If you toss your coin $n$ times and observe $x \le n$ heads, then the tosses can be described by a binomial distribution parametrized by the number of tosses $n$ and the probability of tossing heads $p$. If you asked "what is the probability of observing $x$ or fewer heads in $n$ tosses?", you would be asking about the cumulative distribution function
$$ \Pr(X \le x) = F_X(x) = q $$
quantile function answers the opposite question, i.e. "what is the value of $x$ such that $\Pr(X \le x) = q$ for a given value $q$?",
$$ F_X^{-1}(q) = x $$
more precisely, since the cumulative distribution function of a discrete distribution is a step function, we need a generalized inverse distribution function for it
$$ F_X^{-1}(q) = \inf \{\, x \in \mathbb{R}: F_X(x) \geq q \,\} $$
The binomial distribution does not have a closed-form quantile function, but most statistical software lets you calculate it numerically (e.g. the qbinom function in R). It can also be calculated "by hand" for discrete distributions like the binomial, since you can compute the cumulative probabilities $q_i$ for each $x_i = 0,1,2,\dots,n$ and then simply choose the values of $q_i$ that correspond to the probabilities of interest.
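For example, a central 95% range for the number of heads in a fair coin tossed 500 times (500 tosses being the example mentioned in the question):
qbinom(c(0.025, 0.975), size = 500, prob = 0.5)   # about 228 and 272 heads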
If probability of heads is unknown
If, as stated later on in the comments, we are dealing with a hypothetical coin "whose properties are completely unknown", then this is a different problem. We would still assume a binomial distribution for the count of heads $X$, but the "unknown properties" can be understood as assuming that the probability of heads $p$ is itself a random variable. As noted in the comments below by amoeba and Xi'an, this can be treated as a Bayesian problem, in terms of the beta-binomial model.
Since $p$ is random and can be anything, we assume for it a "uniform" prior, i.e. beta distribution with parameters $\alpha = \beta = 1$. If we toss the coin some number of times, then we can update our prior for the additional information. Since beta is a conjugate prior for binomial distribution, updating is very simple and updated parameters become $\alpha' = \alpha + \text{number of heads}$ and $\beta' = \beta + \text{number of tails}$ and the quantiles of interest can be calculated from posterior beta distribution parametrized by $\alpha'$ and $\beta'$.
The interval of interest can be easily obtained using theoretical quantiles. Two different kinds of intervals may be obtained in this scenario:
an interval for the number of heads $x$, which can be calculated from the posterior predictive distribution, in this case a beta-binomial distribution, or
an interval for the possible values of $p$, which can be calculated from the beta distribution (see the sketch below).
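For example, after a hypothetical 280 heads and 220 tails, the second kind of interval (for $p$) can be read off the posterior beta distribution:
heads <- 280; tails <- 220                      # hypothetical counts
qbeta(c(0.025, 0.975), 1 + heads, 1 + tails)    # central 95% posterior interval, roughly 0.52 to 0.60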
As noted above, the interval can be calculated before seeing any data, from the prior distribution parametrized by $\alpha = \beta = 1$, or after seeing some data, from the posterior distribution parametrized by $\alpha'$ and $\beta'$. Each time you observe new data you can update your posterior to make your estimate more and more precise.
55,005 | Calculating probability that a probability of heads lies in a given interval | Assuming coin flips are independent, the number of heads has a binomial distribution with mean $np$ and variance $np(1-p)$, where $n$ is the sample size and $p$ is the true success probability. To construct the interval you first specify the confidence level (often taken to be 95%). You can compute an exact binomial confidence interval using what is called the Clopper-Pearson method. If the sample size is large, such as the 500 you posed as an example, then you can use the normal approximation to construct the interval. If the interval contains 0.5 you cannot reject the hypothesis that you have a fair coin. If 0.5 lies outside the interval you would conclude that the coin is biased, with a significance level equal to 1 minus the confidence level (the confidence level being expressed as a proportion).
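For concreteness, both intervals in R, taking 280 heads out of the 500 tosses as a hypothetical outcome:
binom.test(x = 280, n = 500)$conf.int                      # exact Clopper-Pearson interval
p_hat <- 280/500
p_hat + c(-1, 1) * 1.96 * sqrt(p_hat * (1 - p_hat) / 500)  # normal approximation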
55,006 | R squared always higher than 1 | Your mistake doesn't come from putting the mean to zero, but from the general computation of $R^2$, which isn't the one you wrote. Using your notation we have several values:
$SS_{tot} = \sum_i (y_i-\bar{y})^2$ total sum of squares
$SS_{reg} = \sum_i (\hat{y}_i-\bar{y})^2$ explained sum of squares
$SS_{res} = \sum_i (y_i-\hat{y}_i)^2$ residual sum of squares
Now the general formula is $R^2 = 1- \frac{SS_{res}}{SS_{tot}}$, which depends on the ratio of the unexplained variance to the total variance of the data.
When $SS_{res} + SS_{reg} = SS_{tot}$, the general formula is equivalent to the one you wrote, $R^2 = \frac{SS_{reg}}{SS_{tot}}$, which can be seen as the ratio of the explained variance to the total variance.
The condition $SS_{res} + SS_{reg} = SS_{tot}$ is true, for instance, in unregularized linear regression, but possibly not true with a LASSO penalty.
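A small numerical illustration of this point (hypothetical data; the shrunken predictions below are a crude stand-in for what a penalized fit produces):
set.seed(42)
x <- rnorm(100); y <- 2 * x + rnorm(100)
fit  <- lm(y ~ x)
yhat <- fitted(fit)
r2_general   <- function(y, f) 1 - sum((y - f)^2) / sum((y - mean(y))^2)     # 1 - SSres/SStot
r2_explained <- function(y, f) sum((f - mean(y))^2) / sum((y - mean(y))^2)   # SSreg/SStot
c(r2_general(y, yhat), r2_explained(y, yhat))        # identical for an unregularized fit
yshrunk <- mean(y) + 0.5 * (yhat - mean(y))          # shrink the fitted values toward the mean
c(r2_general(y, yshrunk), r2_explained(y, yshrunk))  # the two formulas now disagree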
55,007 | R squared always higher than 1 | This is an interesting question, see this and this for two related posts. As far as I understood from the literature, and judging from the answers/comments to the above cited posts, the calculation and interpretation of the coefficient of determination and the calculation of standard errors in penalised estimation approaches are currently open problems.
So my current answer to your question is: just give up using $R^2$ and adjusted $R^2$ in lasso-type problems. Perhaps goodness-of-fit tests can be a viable alternative to $R^2$.
55,008 | What is the intuition of momentum term in the neural network back propagation? | Since you are asking for intuition, here's the general idea:
Imagine that you are using stochastic gradient descent (SGD) to traverse the surface of the loss function. Further imagine that that surface looks like a mountain and that you are starting near the top. The surface has many small crags and lips but is clearly headed down to a valley. As you traverse the surface using SGD, the magnitude - and sign - of the derivative changes often, and so there is a great deal of fluctuation in the direction and speed that you take down the mountain. If you include a momentum term, then instead of just relying on the gradient at each new stopping point to determine your movement, your direction now also depends on the size and direction of your movement in the previous update. And since the amount and direction you moved last update also depended on the update before it you can see that each movement depends on the entire history of previous movements. So pretty soon you're running down the mountain toward the valley.
Notice that the further you are in time from an update, the less important it is for determining your current update, since, if you expand the momentum term at time $t$ then the contribution of the update from time $t-k$ is:
\begin{equation}
\alpha^{k+1}\Delta\omega_{t-k}
\end{equation}
where $\alpha\in[0,1]$ is the momentum and $\Delta\omega_{t-k}$ is the weight update at time $t-k$.
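A tiny numerical illustration of that decay (the symbols here are my own: g holds the fresh gradient terms, and the rule is new update = fresh term + momentum times previous update):
alpha <- 0.9                       # momentum
g     <- c(1, 0, 0, 0, 0, 0)       # one unit gradient step at t = 1, then nothing
delta <- 0
for (g_t in g) {
  delta <- g_t + alpha * delta     # current update = fresh gradient term + momentum * previous update
  print(delta)
}
# prints 1, 0.9, 0.81, 0.729, ... : the old update's influence shrinks geometrically in alpha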
55,009 | What is the intuition of momentum term in the neural network back propagation? | Firstly, let's pretend we have just started training so that:
\begin{equation}
n = 0
\end{equation}
and
\begin{equation}
\omega_{kj}(0) = \alpha\delta_jy_k
\end{equation}
because this is the first time step. If we go one training step further to
\begin{equation}
n = 1
\end{equation}
then:
\begin{equation}
\omega_{kj}(1) = \alpha\delta_jy_k + \eta\omega_{kj}(0)
\end{equation}
and one more to
\begin{equation}
n = 2
\end{equation}
\begin{equation}
\omega_{kj}(2) = \alpha\delta_jy_k + \eta\omega_{kj}(1)
\end{equation}
and substitute:
\begin{equation}
\omega_{kj}(2) = \alpha\delta_{j2}y_{k2} + \eta(\alpha\delta_{j1}y_{k1} + \eta\omega_{kj}(0))
\end{equation}
You can see that if we continued this trend, even a time step of
\begin{equation}
n = 200
\end{equation}
would still slightly be influenced by the very first weight gradient at time step 0 (but very very slightly). This is the concept of momentum: velocity with a memory of past velocities. This concept is directly related to physics.
If I push a block so that it moves at 100 m/s (think of that as my first time step) and my next push alone would send it at -20 m/s (my second time step), then via momentum this would be:
new speed = -20 m/s + u(100 m/s), where u is the friction (or in our case eta). So if friction is 0.1, my final speed will actually be -20 + (0.1)(100) = -10: I am closer to -20 but I am not completely there. If I push toward -20 m/s again, my result is -20 + (0.1)(-10) = -21, and I am closer still to the speed I wish to attain.
In the concept of neural networks, this means that if I am going down the gradient searching for the minimum, I do not want to drastically change direction at each time step, because some directions could lead to valleys or local minima. Therefore I want to go in the direction in which most of my weight gradients push me. Changing this direction usually requires a large number of weight gradients against my current gradient 'flow'.
55,010 | Time Series and XGBoost | It is likely that your features have time biases. Assume you want to predict month and year from your data. You would see very good performance during CV because you have enough data points for each month. However, you cannot predict the future (April 2016 to June 2016) very well using your predictor.
Another example: assume you have a feature whose density changes over time. Then the difference between the CV error and your prediction error in the future would be big.
To address this, you can use the following suggestions:
Drop time-biased features: time-related indices, features that have a big shift over time, etc.
Create features that are partially independent of time: from the date, derive "day of week", "weekend", and so on (see the sketch below).
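A minimal sketch of deriving such features in base R (the dates are made up):
dates   <- as.Date(c("2016-04-01", "2016-04-02", "2016-04-04"))  # hypothetical dates
dow     <- weekdays(dates)                                       # day of week
weekend <- as.integer(format(dates, "%u") %in% c("6", "7"))      # 1 = Saturday or Sunday
data.frame(dates, dow, weekend)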
55,011 | Time Series and XGBoost | First, if there is a trend in the time series, then a tree-based model may not be a good choice (tree models can't extrapolate, so they can't predict values bigger or smaller than those seen in the training set). Alternatively, you can remove the trend first and then use xgboost to predict the residuals of the linear model.
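A rough sketch of that detrend-then-boost idea on simulated data (everything here is hypothetical; it assumes the xgboost R package is installed):
library(xgboost)
set.seed(1)
t <- 1:200
y <- 0.05 * t + sin(2 * pi * t / 7) + rnorm(200, sd = 0.2)   # trend + weekly pattern + noise
trend <- lm(y ~ t)                                           # fit and remove the linear trend
res   <- residuals(trend)
X     <- cbind(dow = t %% 7)                                 # a simple time-of-week feature
bst   <- xgboost(data = X, label = res, nrounds = 50, verbose = 0)
y_hat <- predict(trend, data.frame(t = t)) + predict(bst, X) # trend plus boosted residuals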
Second, as Mortezaaa suggests, if this time series is correlated with day of week, holidays, weekends, season, and so on, then you can construct many features to improve your prediction result! Besides that, moving averages of the time series can be features too. There are a lot of good examples on Kaggle, such as the rossmann-store-sales and bike-sharing-demand prediction competitions; those are time series too, and the winners do a lot of feature engineering. You will learn a lot.
55,012 | Slope and intercept of the decision boundary from a logistic regression model | This is actually straightforward. We think of statistical models specifying a conditional response distribution, which is stochastic, but once you are working with the fitted model, it is just a deterministic function. In this case, a logistic regression model specifies the conditional parameter $\pi$ that governs the behavior of a binomial distribution. That is:
$$
\ln\bigg(\frac{\pi}{(1-\pi)}\bigg) = \beta_0 + \beta_1X_1 + \beta_2X_2
$$
With respect to assigning predicted classes, the most intuitive thing to do is call an observation a 'success' if $\hat\pi_i>.5$ or a 'failure' if not. (Note that using $.5$ as your threshold will not necessarily maximize the accuracy of a given model, and that any conversion from predicted probabilities to predicted classes throws away a lot of information—probably unnecessarily.) Using $.5$ on the probability scale corresponds to using $0$ on the log odds (linear) scale. If we only want to know the set of all points in the $X_1$, $X_2$ space that correspond to a predicted log odds of $0$, we can set the fitted model equal to $0$ and then algebraically rearrange the equation to make one variable a function of the other. (In the example, weight as a function of height.) That's just algebra. Once you have that, you can plot the decision boundary on the $X_1$, $X_2$ (height, weight) plane.
To solve for weight when height is $0$:
\begin{align}
0 &= \hat\beta_0 + \hat\beta_1(0) + \hat\beta_2{\rm weight} \\[8pt]
-\hat\beta_0 &= \hat\beta_2{\rm weight} \\[8pt]
\frac{-\hat\beta_0}{\hat\beta_2} &= \text{weight (i.e., the intercept)} \\[20pt]
\end{align}
To solve for the increase in weight when height goes up by $1$ unit (inch), let's use two points, where height equals $0$ and where height equals $1$. (Since it's a straight line, any two points would do, but these are convenient.) Then:
\begin{align}
0 &= \hat\beta_0 + \hat\beta_1{\rm height}_1 + \hat\beta_2{\rm weight}_1 \\[8pt]
&\quad -(\hat\beta_0 + \hat\beta_1{\rm height}_0 + \hat\beta_2{\rm weight}_0) \\[8pt]
0 &= \hat\beta_0 - \hat\beta_0 + \hat\beta_1{\rm height}_1 - \hat\beta_1{\rm height}_0 + \hat\beta_2{\rm weight}_1 - \hat\beta_2{\rm weight}_0 \\[8pt]
0 &= \hat\beta_1 + \hat\beta_2\Delta{\rm weight} \\[8pt]
\frac{-\hat\beta_1}{\hat\beta_2} &= \Delta{\rm weight} \text{ (i.e., the slope)} \\
\end{align}
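A hedged R sketch of the same algebra on simulated height/weight data (everything below is made up for illustration):
set.seed(7)
height <- rnorm(200, 66, 4)
weight <- rnorm(200, 150, 20)
z      <- 0.3 * (height - 66) + 0.15 * (weight - 150)    # an arbitrary "true" linear predictor
male   <- rbinom(200, 1, plogis(z))
fit    <- glm(male ~ height + weight, family = binomial)
b      <- coef(fit)
boundary_intercept <- -b["(Intercept)"] / b["weight"]    # -beta0 / beta2
boundary_slope     <- -b["height"] / b["weight"]         # -beta1 / beta2
plot(height, weight, col = ifelse(male == 1, "Blue", "Red"))
abline(boundary_intercept, boundary_slope)               # the 50% (log odds = 0) decision boundary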
This is actually straightforward. We think of statistical models specifying a conditional response distribution, which is stochastic, but once you are working with the fitted model, it is just a deterministic function. In this case, a logistic regression model specifies the conditional parameter $\pi$ that governs the behavior of a binomial distribution. That is:
$$
\ln\bigg(\frac{\pi}{(1-\pi)}\bigg) = \beta_0 + \beta_1X_1 + \beta_2X_2
$$
With respect to assigning predicted classes, the most intuitive thing to do is call an observation a 'success' if $\hat\pi_i>.5$ or a 'failure' if not. (Note that using $.5$ as your threshold will not necessarily maximize the accuracy of a given model, and that any conversion from predicted probabilities to predicted classes throws away a lot of information—probably unnecessarily.) Using $.5$ on the probability scale corresponds to using $0$ on the log odds (linear) scale. If we only want to know the set of all points in the $X_1$, $X_2$ space that correspond to a predicted log odds of $0$, we can set the fitted model equal to $0$ and then algebraically rearrange the equation to make one variable a function of the other. (In the example, weight as a function of height.) That's just algebra. Once you have that, you can plot the decision boundary on the $X_1$, $X_2$ (height, weight) plane.
To solve for weight when height is $0$:
\begin{align}
0 &= \hat\beta_0 + \hat\beta_10 + \hat\beta_2{\rm weight} \\[8pt]
-\hat\beta_0 &= \hat\beta_2{\rm weight} \\[8pt]
\frac{-\hat\beta_0}{\hat\beta_2} &= \text{weight (i.e., the intercept)} \\[20pt]
\end{align}
To solve for the increase in weight when height goes up by $1$ unit (inch), let's use two points, where height equals $0$ and where height equals $1$. (Since it's a straight line, any two points would do, but these are convenient.) Then:
\begin{align}
0 &= \hat\beta_0 + \hat\beta_1{\rm height}_1 + \hat\beta_2{\rm weight}_1 \\[8pt]
&\quad -(\hat\beta_0 + \hat\beta_1{\rm height}_0 + \hat\beta_2{\rm weight}_0) \\[8pt]
0 &= \hat\beta_0 - \hat\beta_0 + \hat\beta_1{\rm height}_1 - \hat\beta_1{\rm height}_0 + \hat\beta_2{\rm weight}_1 - \hat\beta_2{\rm weight}_0 \\[8pt]
0 &= \hat\beta_1 + \hat\beta_2\Delta{\rm weight} \\[8pt]
\frac{-\hat\beta_1}{\hat\beta_2} &= \Delta{\rm weight} \text{ (i.e., the slope)} \\
\end{align} | Slope and intercept of the decision boundary from a logistic regression model
This is actually straightforward. We think of statistical models specifying a conditional response distribution, which is stochastic, but once you are working with the fitted model, it is just a dete |
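As a quick illustration of this algebra, here is a minimal R sketch (the data are simulated here, not taken from the original question) that fits the logistic regression and draws the implied decision boundary:
set.seed(1)
n <- 200
height <- rnorm(n, 66, 4)
weight <- 150 + 4 * (height - 66) + rnorm(n, sd = 15)
sex <- rbinom(n, 1, plogis(-0.5 + 0.2 * (height - 66) + 0.05 * (weight - 150)))
fit <- glm(sex ~ height + weight, family = binomial)
b <- coef(fit)                      # b[1] = intercept, b[2] = height, b[3] = weight
boundary_intercept <- -b[1] / b[3]  # boundary weight when height = 0
boundary_slope     <- -b[2] / b[3]  # change in boundary weight per unit of height
plot(height, weight, col = sex + 1, pch = 16)
abline(boundary_intercept, boundary_slope, lwd = 2)  # the fitted 50% boundary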
55,013 | Slope and intercept of the decision boundary from a logistic regression model | It sounds like your model was simply projected to consider a bivariate relationship between a single regressor and your outcome. The additive assumptions of the GLM ensure that the conditional mean relation between the first regressor and the outcome does not vary across levels of the second regressor. The second regressor merely functions like an intercept term in that conditional relationship.
For instance, suppose I model cancer risk $E[Y]$ as a function of age $X_1$ and smoking $X_2$. Suppose further that logistic regression gives me the model $\operatorname{logit} E[Y\mid X_1, X_2] = \beta_0 + \beta_1 X_1 + \beta_2 X_2$.
If I turn my attention to predicting cancer risk for a participant of a particular age, say 45, then the model simplifies as the following:
$\operatorname{logit} E[Y\mid X_1 = 45, X_2] = (\beta_0 + 45\beta_1) + \beta_2 X_2$
Where, for this conditional logit model, $\beta_0 + 45\beta_1$ functions like a predicted intercept.
55,014 | Confidence Interval from bootstrap | There are several types of bootstrap confidence intervals (among them BCa, bootstrap-t, ABC, and calibrated intervals). These are thoroughly described here:
DiCiccio, T. J., & Efron, B. (1996). Bootstrap confidence intervals. Statistical Science, 11(3).
And they are all implemented in the 'boot' R package.
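For what it's worth, here is a minimal usage sketch of that package; the data and statistic are arbitrary illustrations, and the statistic returns its estimated variance as a second element because the studentized ("stud") interval needs one:
library(boot)
set.seed(1)
x <- rexp(50)
stat <- function(d, i) {
  m <- mean(d[i])
  c(m, var(d[i]) / length(i))   # estimate and its estimated variance
}
b <- boot(x, stat, R = 2000)
boot.ci(b, type = c("norm", "basic", "perc", "bca", "stud"))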
55,015 | Confidence Interval from bootstrap | Here are some ideas I have read about in Rand R. Wilcox, Fundamentals of Modern Statistical Methods, which, by the way, is a really nice book to read in general.
Approach 1, which is called the percentile bootstrap, works well only if you have a fairly large number of observations; in that case it covers the whole interval of interest well.
Approach 2, which is called the percentile-t bootstrap, is slightly better for smaller samples than the previous approach. However, it is worth checking whether the distribution resembles a normal distribution: if it does, a t statistic is better than the percentile-t bootstrap. The bootstrap provides advantages only when the distribution is not normal.
I have always used the second method. However, what I usually do is employ both methods and, if they provide very different results, try to identify why that happened and choose the variant whose assumptions fit my data better.
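For concreteness, here is a rough sketch of the two approaches with made-up skewed data (an illustration only, not code from Wilcox's book):
set.seed(1)
x <- rexp(40)                                   # a small, skewed sample
B <- 5000
m <- mean(x); s <- sd(x); n <- length(x)
boot_mean <- boot_t <- numeric(B)
for (b in 1:B) {
  xb <- sample(x, n, replace = TRUE)
  boot_mean[b] <- mean(xb)
  boot_t[b] <- (mean(xb) - m) / (sd(xb) / sqrt(n))
}
quantile(boot_mean, c(.025, .975))                   # Approach 1: percentile
m - quantile(boot_t, c(.975, .025)) * s / sqrt(n)    # Approach 2: percentile-t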
55,016 | When using the likelihood function, where does the indicator function come from? | Most families of distributions $f_\theta$ have a fixed support,
$$\text{supp}(f_\theta)=\{x\in\mathcal{X};\ f_\theta(x)>0\}$$
like the Normal or Binomial distributions, but some have a parameter dependent support, like uniforms $\text{U}(0,\theta)$ or $\text{U}(-\theta,\theta)$. For such families, it is important to keep the support constraint in the likelihood, because it brings in hard constraints on $\theta$, which means it can be better estimated in such settings. If one considers a sample $(x_1,\ldots,x_n)$ from the $\text{U}(-\theta,\theta)$ distribution, the likelihood is
$$\prod_{i=1}^n \frac{1}{2\theta}\times\mathbb{I}_{(-\theta,\theta)}(x_i)
=\frac{1}{(2\theta)^n}\times\prod_{i=1}^n \mathbb{I}_{-\theta\le x_i\le\theta}=\frac{1}{(2\theta)^n}\times\mathbb{I}_{\theta\ge\max |x_i|}$$
This indicator function at the end, $$\mathbb{I}_{\theta\ge\max |x_i|},$$ is essential to restrict the set of possible $\theta$'s to $[\max |x_i|,\infty)$, rather than $(0,\infty)$. It thus determines the new parameter set given the sample.
Once the modified parameter set is determined, the likelihood function is defined on that modified set, and differentiating it there does not involve the indicator function. For instance, in the Uniform example,
$$\frac{\partial}{\partial\theta}\frac{1}{(2\theta)^n}=-\frac{n}{2^n\theta^{n+1}},$$ which is always negative, meaning that the optimum value of $\theta$ is the smallest possible one, i.e., the smallest value in the modified parameter set, $\max |x_i|$.
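A quick numerical illustration in R (with an arbitrary simulated sample): the log-likelihood equals $-n\log(2\theta)$ on $[\max|x_i|,\infty)$ and the likelihood is zero below it, so the maximum sits exactly at $\max|x_i|$.
set.seed(1)
x <- runif(20, -3, 3)                  # a sample from U(-3, 3)
n <- length(x)
theta <- seq(0.1, 6, by = 0.01)
ll <- ifelse(theta >= max(abs(x)), -n * log(2 * theta), NA)  # NA where the likelihood is zero
plot(theta, ll, type = "l", xlab = expression(theta), ylab = "log-likelihood")
abline(v = max(abs(x)), lty = 2)       # the MLE, max|x_i|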
55,017 | Is there a corresponding bias-variance decomposition of MSE for vectors? | Simply note that
$$|| \widehat{\mu} - \mu ||^2 = \sum\limits_{i = 1}^{n} (\widehat{\mu}_{i} - \mu_{i})^2$$
Then, the answer is given by the decomposition you gave earlier:
$$ \mathbb{E}[(\widehat{\mu}_{i} - \mu_{i})^2] = Var[\widehat{\mu}_{i}] + [Bias(\widehat{\mu}_{i}, \mu_{i} )]^2 $$
Summing all up, we get
$$ \mathbb{E}[||\widehat{\mu} - \mu||^2] = \sum\limits_{i = 1}^{n} \left( Var[\widehat{\mu}_{i}] + [Bias(\widehat{\mu}_{i}, \mu_{i} )]^2 \right) $$
Another issue, totally different, is the matrix $\mathbb{E}[(\widehat{\mu} - \mu)(\widehat{\mu} - \mu)^t]$ (the covariance matrix of $\widehat{\mu}$ when the estimator is unbiased, and the mean squared error matrix in general); its trace equals the scalar $\mathbb{E}[||\widehat{\mu} - \mu||^2]$ above.
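A small numerical check of the summed decomposition (the shrinkage estimator and dimensions below are arbitrary choices):
set.seed(1)
mu <- c(1, 2, 3)
est <- replicate(20000, 0.8 * colMeans(matrix(rnorm(30 * 3, mu), ncol = 3, byrow = TRUE)))
mse <- mean(colSums((est - mu)^2))         # estimate of E||mu_hat - mu||^2
v   <- apply(est, 1, var)                  # Var(mu_hat_i) for each coordinate
b2  <- (rowMeans(est) - mu)^2              # squared bias for each coordinate
c(mse = mse, decomposition = sum(v + b2))  # the two numbers should agree closely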
55,018 | How do I fit a constrained regression in R so all coefficients are positive and above 0 [duplicate] | First of all you should add set.seed(1) (or any other seed) at the start of your code, so that it's reproducible. Coming to your question:
From ?solve.QP:
meq the first meq constraints are treated as equality constraints, all further as inequality constraints (defaults to 0).
Thus, when you set meq=1, you're telling solve.QP that the first constraint must not be an inequality, but an equality: in other words, you're saying that $\pi_1$ must be 0. And as a matter of fact, solve.QP returns a solution where $\pi_1=0$.
solve.QP(Dmat = Dmat, dvec = dvec, Amat = Amat, bvec = bvec, meq=1, factorized = F)
$solution
[1] 0.0000000 0.3582641 0.6358091
$value
[1] -12.71193
$unconstrained.solution
[1] 0.2024494 0.2646992 0.5357331
Anyway, since you only require $\pi_i\ge 0 \quad i=1,\dots,3$, you must set meq=0, i.e., you need to tell solve.QP that all constraints are inequality constraints. You could even avoid passing meq in the call to solve.QP, since meq=0 is the default. Now, since the unconstrained solution satisfies the constraints, it is also the constrained solution, and that's why the two solutions are equal:
solve.QP(Dmat = Dmat, dvec = dvec, Amat = Amat, bvec = bvec, meq=0, factorized = F)
$solution
[1] 0.2024494 0.2646992 0.5357331
$value
[1] -12.91278
$unconstrained.solution
[1] 0.2024494 0.2646992 0.5357331
EDIT concerning the possibility of other solutions, and thus the need for global methods. This specific optimization problem belongs to the family of convex optimization problems. For this kind of problem, a local minimum is also a global minimum. Since you already found a local minimum, you don't need to do anything else. Note that for another, non-convex, optimization problem, this wouldn't be true anymore.
Two side notes:
you cannot impose strict inequality constraints with solve.QP, thus the right way to write your constraints is $\pi_i\ge 0 \quad i=1,\dots,3$, not $\pi_i\gt 0 \quad i=1,\dots,3$
I find that solve.QP from quadprog has a counterintuitive interface, and it can be less robust than alternatives. I suggest you have a look at lsei from package limSolve. You can find a worked example here. Since the interface is different, be sure to read the vignette carefully if you decide to use lsei.
EDIT: the OP seems to be especially interested in the case of strict inequality constraints, so I add a couple of well-known tricks to deal with them. First of all, solve the problem with non-strict inequality constraints: if the constraints are not active in the solution of the constrained problem, i.e., if the solution satisfies the strict inequality constraints, then you're all set. For example, in the second case where meq=0, solve.QP, which uses non-strict inequality constraints, found a solution where all coefficients are strictly $>0$, i.e., the constraints are not active in the constrained solution.
If one or more constraints are active in the solution (for example, the solution found by solve.QP when meq=1), then a trick is to add a small positive $\epsilon$ to the constraint. For example, instead of the constraints $\pi_i\gt 0 \quad i=1,\dots,3$, you use the constraints $\pi_i\ge \epsilon \quad i=1,\dots,3$. The solution found in this case will solve the original strict inequality constraints, since $\epsilon>0$. The issue with this approach is that the solution will depend on $\epsilon$. However:
one could always solve a sequence of problems with decreasing $\epsilon$ and gain insight on the behavior of the solution with respect to $\epsilon$.
if the optimization problem models a real physical problem, then strict inequality constraints on physical variables usually require some "tolerance". For example, imagine you want to optimize the efficiency of some machine, whose efficiency is linked, among other things, to a clearance between a rotating shaft and its static casing (for example, this would be the case of rotor-stator seal clearances in a gas turbine). The distance $\delta$ between the rotating and the static part does need to be strictly positive (if rotor and stator touch, a disaster is guaranteed!). However, to be sure that rotor and stator don't touch, the clearance needs to be above some assembly/manufacturing tolerance: you cannot accept a solution with, say, $\delta=0.001$ microns, because you'll never be able to manufacture a design with such a small gap. You need to solve a problem with a constraint $\delta\ge\epsilon$ where, say, $\epsilon=100$ microns. Thus, when real physical systems are involved and it seems that strict inequality constraints are needed, what we really need are usually non-strict inequality constraints with a "physically meaningful" tolerance.
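For completeness, here is a self-contained sketch of the non-negativity set-up discussed above (the data are simulated and the Dmat/dvec construction is the usual least-squares one; it need not match the original question's exactly):
library(quadprog)
set.seed(1)
n <- 100
X <- cbind(rnorm(n), rnorm(n), rnorm(n))
y <- X %*% c(0.2, 0.3, 0.5) + rnorm(n, sd = 0.1)
Dmat <- crossprod(X)              # t(X) %*% X
dvec <- drop(crossprod(X, y))     # t(X) %*% y
Amat <- diag(3)                   # one column per constraint: t(Amat) %*% pi >= bvec
bvec <- rep(0, 3)                 # pi_i >= 0; use rep(eps, 3) with eps > 0 for the trick above
solve.QP(Dmat, dvec, Amat, bvec, meq = 0)$solution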
55,019 | What academic paper introduces CNNs in deep learning? | Two recent papers come to mind:
LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature, 521, 436–444. http://www.cs.toronto.edu/~hinton/absps/NatureDeepReview.pdf
Schmidhuber, J. (2015). Deep learning in neural networks: An overview. Neural Networks, 61, 85–117.
http://people.idsia.ch/~juergen/DeepLearning2July2014.pdf
55,020 | What academic paper introduces CNNs in deep learning? | As mentioned in another answer, the recent Deep Learning Review in Nature has some good discussion of this. I am not in this field, so cannot say how much this is a balanced review vs. just the authors' perspective. However one of the authors, Yann LeCun, seems to be widely acknowledged as essentially the "father of convolutional neural networks" (a.k.a. ConvNets, CNNs). So the Nature review is a reasonable source for this topic, at the very least.
Given this, the reference chosen as the "origin of deep learning in ConvNets" there is essentially this 1990 NIPS paper:
LeCun et al. (1990) Handwritten digit recognition with a back-propagation network. In Advances in neural information processing systems.
The review mentions some earlier work on convolution-type layers, but notes that:
This is the first paper on convolutional networks trained by backpropagation for the task of classifying low-resolution images of handwritten digits.
The terms I have bolded lead me to conclude that this paper is a reasonable candidate for the "academic paper that introduced CNNs in deep learning" (i.e. BackProp would seem to be an essential component for the "deep" aspect).
However it is worth noting that in the Nature review the authors mention that the approach was significantly inspired by
Fukushima (1980) Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position. Biological cybernetics.
Finally, as noted in the comments to the OP, Wikipedia has a summary of the history of ConvNets.
55,021 | Distribution of quadratic equation roots where coefficients are generated uniformly | There are two steps in any question of this nature: (1) find a useful way to characterize the event and (2) compute the probability of this event (in general, by integrating a probability density over it). When the probability is uniform, the integral is proportional to the area of the event.
I wish to emphasize the technique. It includes
Drawing a picture of the event to keep focused on the key ideas.
Minimizing the calculations performed.
Delaying calculation as much as possible until the end, in the hope of simplification along the way.
1. Find a useful characterization of the event.
You need to relate the roots $x_1, x_2$ to the coefficients $p$ and $q$. That is done by comparing coefficients in the expansion
$$x^2 - (x_1+x_2) x + x_1 x_2 = (x-x_1)(x-x_2) = x^2 + px + q,$$
whence
$$p=-(x_1+x_2).\tag{1}$$
By completing the square you can establish a criterion for both roots to be real:
$$x^2 + px + q = (x + p/2)^2 + q - p^2/4 = (x + p/2)^2 + \frac{4q - p^2}{4}.$$
This is zero if and only if
$$(x+p/2)^2 = \frac{p^2 - 4q}{4}.\tag{2}$$
Since the left hand side, which is a square of a real number when the roots are real, is non-negative, and all non-negative numbers have real square roots, both roots are real if and only if the right hand side is non-negative. Equivalently, its numerator must be non-negative.
We can now characterize the event of the problem directly in terms of $p$ and $q$:
From $(2)$, both roots are real if and only if $p^2 - 4q \ge 0$.
From $(1)$, their sum $x_1+x_2$ is less than $1$ if and only if $-p = x_1+x_2 \lt 1$; more simply, $p \gt -1$.
Here is the graph of the set determined by (1) and (2). It is the blue region, including all portions of its boundary except a vertical line segment along the left side.
2. Find the probability of the event.
In this figure, which uses $(p,q)$ for coordinates, the probability is uniform. That means the probability of any event is its relative area. The total area of the square is $4$, while the area of the event itself is the area between the parabola $q = p^2/4$ and the line $q=-1$, from $p=-1$ to $p=1$. This area is
$$A = \int_{-1}^1 (p^2/4 - (-1)) dp = \frac{1}{4}\int_{-1}^1 p^2 dp + \int_{-1}^1 dp = \frac{1}{4}\left(\frac{p^3}{3}\big|_{-1}^1\right) + p\big|_{-1}^1=\frac{1}{6}+2.$$
Therefore the probability is
$$\Pr\left(x_1,x_2\in\mathbb{R},\ x_1+x_2\lt 1\right) = \Pr\left(p^2-4q \ge 0,\ -p \lt 1\right) = \frac{A}{4}=\frac{\frac{1}{6} + 2}{4} = \frac{13}{24}.$$
This value, which is slightly greater than $12/24=1/2$, is consistent with the visual impression from the picture that the blue area slightly exceeds $1/2$.
Incidentally, one mistake in the posted solution occurred at the very last line: the lower limit of integration over $q$ should be $-1$ rather than $0$. Drawing the picture makes this obvious. That solution contains several other mistakes, of which the most important is the misquoting of the Quadratic Formula: the expressions for the roots are wrong. They need to be divided by $2$.
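A quick Monte Carlo check of the $13/24 \approx 0.5417$ result (added here as an illustration, not part of the original answer):
set.seed(1)
N <- 1e6
p <- runif(N, -1, 1)
q <- runif(N, -1, 1)
mean(p^2 - 4 * q >= 0 & p > -1)   # should be close to 13/24 = 0.5417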
55,022 | Distribution of quadratic equation roots where coefficients are generated uniformly | It's possible the answer is a bit confusing because the question starts with one parabola (involving $p, q, x$), and the solution uses a different parabola (which should involve $p, q$, but the answer switches the variables to $x, y$, even though the second use of $x$ is unrelated to the first).
If the quadratic equation is $x^2 + px + q = 0$, then to have real roots, the discriminant must be non-negative, i.e., $p^2 \ge 4q$, or $q \le \frac{p^2}{4}$. Now consider the following diagram in the $pq$ plane:
Here, $p$ is the horizontal axis, and $q$ is the vertical axis. The red square indicates the admissible area for $p, q$ values (the corners are where $p, q = \pm 1, \pm 1$). The parabola $q = \frac{p^2}{4}$ is indeed symmetric around the vertical $q$ axis. Note that this parabola is unrelated to the original one. The grey region is the region where the discriminant is non-negative, and, from simple geometric considerations concerning uniform distribution, the ratio of its area to that of the red square is the requested probability.
The area in the square is $4 = (1 - (-1))^2$. The grey area can be found by calculating $\int_{-1}^1\frac{p^2}{4}dp + 2$ (the sum of the two lower grey squares is 2).
For the second part of the question, note that the sum of roots is $-p$.
55,023 | Distribution of quadratic equation roots where coefficients are generated uniformly | OP
Looks like I found you again. The incorrect solution snapshot above is mine, and I apologize. Apparently I do not know integration limits or the quadratic formula, but that's beside the point. Here are the corrections.
55,024 | Does the null hypothesis for a t-test have to be 0? | No, it doesn't have to be $0$. You can have your null that the difference is $5$, for example, or some other specific number (cf., here). What you can't have is a null that the difference is $\neq 0$. (For more on that distinction, see my answer here: Why do statisticians say a non-significant result means “you can't reject the null” as opposed to accepting the null hypothesis?)
55,025 | Does the null hypothesis for a t-test have to be 0? | It should be some value you specify before you have any data ... but it doesn't have to be zero.
For example, imagine someone makes a claim that their tutoring program improves mathematics test scores by 5 (out of 100). You could make that 5 improvement your null value in a paired test and then do an experiment where you measure scores, give the training and then measure again after (though ideally you'd also have a control group that you just did the before and after tests on to account for the possibility that the testing alone improved the scores).
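A sketch of that example in R, with simulated before/after scores (the numbers are invented); the point is only that mu in t.test() carries the null value, which need not be 0:
set.seed(1)
before <- rnorm(30, 70, 10)
after  <- before + rnorm(30, mean = 6, sd = 4)   # true mean improvement of 6
t.test(after, before, paired = TRUE, mu = 5, alternative = "greater")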
55,026 | measure graph complexity | Common graph measures include :
Connectivity : is the graph connected - i.e. can you reach any node from any other node, or can you break the graph down into "islands" or regions that are unreachable from one another?
Tree-test : does the graph conform to a tree-shape (i.e. can you reach any node from any other node via a single, unique path?)
Bipartite Test : some graphs are bipartite, that is, they consist of two groups of nodes, such that members of each group only connect with members of the other group. For example, a graph of buildings and connected utilities (gas, electricity, water) would be bipartite
Planarity : is the graph planar? That is, can it be drawn on a 2-dimensional surface such that no edges must be drawn intersecting one another?
Hamiltonian/Eulerian tests : is there a path (or cycle) that visits every node exactly once (Hamiltonian)? Can a path be traced on the graph such that every edge is used once, and only once (Eulerian)?
Clique Analysis : what is the maximum clique number of the graph, and how many cliques of each size up to this number exist?
Center, Diameter, Eccentricity, Periphery, Girth, Expansion and Radius measures : other (non-exhaustive) metrics that describe different aspects of the graph
There's more besides, but here are some links to sites discussing metrics such as these:
https://people.hofstra.edu/geotrans/eng/methods/ch1m3en.html
https://www.nas.ewi.tudelft.nl/people/Piet/papers/TUDreport20111111_MetricList.pdf
https://math.stackexchange.com/questions/301778/what-are-some-measures-of-connectedness-in-graphs
http://www.bu.edu/networks/files/2012/08/basics-of-network-analysis.pdf
And this is a fairly neat list:
http://mathworld.wolfram.com/topics/GraphProperties.html
Another resource is the networkx python library reference:
http://networkx.readthedocs.io/en/stable/reference/index.html
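The links above use networkx (Python); since the rest of this page uses R, here is a rough equivalent sketch with the igraph package on a random graph (function names as in recent igraph versions, and the graph itself is arbitrary):
library(igraph)
set.seed(1)
g <- sample_gnp(30, 0.15)         # an Erdos-Renyi random graph
is_connected(g)                   # connectivity
bipartite_mapping(g)$res          # structural bipartite test
diameter(g)
radius(g)
girth(g)$girth
eccentricity(g)
clique_num(g)                     # size of the largest clique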
55,027 | What is the distribution of the maximum of a set of random variables? [duplicate] | The general solution is exactly the same as a particular solution: assuming the $X_i$ are independent, $$P(\max_i X_i \leq t) = \prod_i P(X_i \leq t) = \prod_i F_{X_i}(t).$$
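A quick empirical check of the formula (assuming independence, with arbitrary marginals):
set.seed(1)
m <- pmax(rexp(1e5, 1), rexp(1e5, 2), runif(1e5))   # max of three independent draws
t0 <- 0.8
c(empirical = mean(m <= t0),
  product   = pexp(t0, 1) * pexp(t0, 2) * punif(t0))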
55,028 | Possible to use draws from two distributions to get draw from distribution with density their product? | From a pure simulation point of view (Warning: link to my book), aiming at simulating from a product $f\times g$ of two (positive) functions such that $fg$ is integrable over the proper region suggests some specific methods:
Sampling-importance-resampling: to apply when samples from both $f$ and $g$ can be produced and when the corresponding numerical values of the normalised versions of both $f$ and $g$ at those points can be numerically computed (resampling is not necessary for computing integrals related with $fg$).
Accept-Reject: to apply when either $f$ or $g$ is proportional to a standard probability density, and the other term in the product is bounded.
Slice sampling: to apply when either $f$ or $g$ is proportional to a standard probability density, easy to simulate, and the other function is such that the set $\{x; f(x)\ge a\}$ is easy to construct. Or when both $f$ and $g$ are such that $\{x; f(x)\ge a\}$ is easy to construct.
Metropolis-Hastings algorithm: to apply when either $f$ or $g$ is proportional to a standard probability density, easy to simulate, and the other function can be (numerically) computed.
Bridge sampling: to apply when both $f$ and $g$ are proportional to a standard probability density, when samples from both $f$ and $g$ can be produced in a reasonable time, and when the corresponding numerical values of both $f$ and $g$ at those points can be (numerically) computed.
Nested sampling: to apply when either $f$ or $g$ is proportional to a standard probability density, easy to simulate, and the other function can be (numerically) computed.
Those different solutions do not necessarily take into account the fact that $f$ is a prior density and $g$ is a likelihood. Actually, it is not necessarily the best possible decomposition of the product $fg$, because $f$ can be too flat or even improper, while $g$ is not necessarily integrable in the parameter.
Note: A similar question was asked on Cross Validation last year
but did not attract any traffic. I added links to scalable Monte Carlo solutions
on that page.
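As a toy illustration of the first (sampling-importance-resampling) option in the list above, take $f$ to be a N(0,1) density and $g$ a N(2, 0.5²) density; these are arbitrary choices, made so that the product is again Gaussian and the answer can be checked analytically:
set.seed(1)
x <- rnorm(1e5)                                   # draws from f
w <- dnorm(x, mean = 2, sd = 0.5)                 # importance weights g(x)
post <- sample(x, 1e4, replace = TRUE, prob = w)  # resample, approximately ~ f * g
c(mean(post), sd(post))   # theory: mean = 1.6, sd = sqrt(0.2) ~ 0.447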
55,029 | How to derive the second moment of the Chi-Square distribution with the MGF? | You made a mistake while taking the derivatives.
$$M_X'(t) = \dfrac{d}{dt} (1 - 2t)^{-r/2} = \left(-\dfrac{r}{2}\right) (1-2t)^{-r/2 -1}\cdot(-2) = r(1-2t)^{-\frac{r+2}{2} }.$$
You made the same mistake in the second derivative. Differentiating once more (correctly) gives $$M_X''(t) = r(r+2)(1-2t)^{-\frac{r+4}{2}},$$ so the second moment is $E[X^2] = M_X''(0) = r(r+2)$.
55,030 | How to derive the second moment of the Chi-Square distribution with the MGF? | For the record, it would be more typical to use $\nu$ as the degrees of freedom, and $r$ as the moment number. Then the Chi-squared density function is
$$f(x) = \begin{cases}
\frac{2^{-\frac{\nu }{2}} e^{-\frac{x}{2}} x^{\frac{\nu }{2}-1}}{\Gamma \left(\frac{\nu }{2}\right)} & x>0 \\
0 & \text{otherwise}
\end{cases}$$
The moment generating function is
$$M_X(t) = (1-2 t)^{-\frac{\nu }{2}}$$
The $r^{th}$ moment is
$$2^r \left(\frac{\nu }{2}\right)_r$$
where $\left({.}\right)_r$ is the Pochhammer symbol (rising factorial), so that the first four moments are, respectively,
$\nu$ ,
$\nu (\nu +2)$ ,
$\nu (\nu +2) (\nu +4)$ ,
and $\nu (\nu +2) (\nu +4) (\nu +6)$.
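A quick numerical check of the moment formula for, say, $\nu = 7$ (simulation added here purely for illustration):
set.seed(1)
x <- rchisq(1e6, df = 7)
rbind(simulated = c(mean(x), mean(x^2), mean(x^3)),
      formula   = c(7, 7 * 9, 7 * 9 * 11))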
55,031 | What does Jim Simons mean by "never override the computer"? | Simons is talking about his company, which uses quantitative methods to automate financial trades of various kinds. "Overriding the computer" means making a trade other than what the computers are currently programmed to make, or preventing a computer-scheduled trade. The reason, Simons goes on to say, is that statistically analyzing the company's performance becomes much harder in the presence of human whimsy affecting the company's trades.
This is closely related to the issue of actuarial (quantitative, automatic) versus clinical (qualitative, manual) decision-making that's considered in psychology. Not surprisingly to a statistician, actuarial methods perform better than clinical methods, and purely actuarial methods perform better than actuarial methods with optional clinical overrides. See, for example,
Dawes, R. M., Faust, D., & Meehl, P. E. (1989). Clinical versus actuarial judgment. Science, 243(4899), 1668–1674. doi:10.1126/science.2648573
55,032 | Smoother matrix from smooth.spline | After many hours of exploration, here is what I found:
Because the smooth.spline algorithm chooses spar rather than lambda, it is only possible to (sort of) fix spar. However, lambda is a function of spar and of a data-dependent matrix, so fixing spar does not necessarily fix lambda. I have not found an easy way to extract the smoother matrix out of smooth.spline.
However, for the purpose of computing variances, the algorithm provided in https://stat.ethz.ch/pipermail/r-help/2006-June/108471.html (fixing spar instead of df) gives a close estimate of the true smoother matrix. The variance computed from the estimated smoother matrix $S$ (i.e., $\hat\sigma^2 S S^T$) is pretty close to the one computed from the correct smoother matrix.
Another R package "assist" has a function "ssr()" that also does smooth spline regression. It is not as powerful as smooth.spline. But the built-in function "hat.ssr()" gives the true smoother matrix of the model obtained from "ssr()". | Smoother matrix from smooth.spline | After many hours of exploration, here is what I found:
Because smooth.spline algorithm chooses spar instead of lambda, it is only possible to (sort of) fix spar. However, lambda is a function of spar | Smoother matrix from smooth.spline
After many hours of exploration, here is what I found:
Because the smooth.spline algorithm chooses spar rather than lambda, it is only possible to (sort of) fix spar. However, lambda is a function of spar and of a data-dependent matrix, so fixing spar does not necessarily fix lambda. I have not found an easy way to extract the smoother matrix out of smooth.spline.
However, for the purpose of computing variances, the algorithm provided in https://stat.ethz.ch/pipermail/r-help/2006-June/108471.html (fixing spar instead of df) gives a close estimate of the true smoother matrix. The variance computed from the estimated smoother matrix $S$ (i.e., $\hat\sigma^2 S S^T$) is pretty close to the one computed from the correct smoother matrix.
Another R package "assist" has a function "ssr()" that also does smooth spline regression. It is not as powerful as smooth.spline. But the built-in function "hat.ssr()" gives the true smoother matrix of the model obtained from "ssr()". | Smoother matrix from smooth.spline
After many hours of exploration, here is what I found:
Because smooth.spline algorithm chooses spar instead of lambda, it is only possible to (sort of) fix spar. However, lambda is a function of spar |
55,033 | Smoother matrix from smooth.spline | The above answers only approximate the smoothing matrix. Here is a solution that will get you the exact smoothing matrix from the r function smooth.spline(). The key is to recognize that the smoothing matrix is only a function of the values of $x$ and the penalization parameter $\lambda$, allowing us to smooth a vector $\tilde{y} = (0,0, ..., 0, 1, 0,...,0)^{T}$, and therefore get each column of the smoothing matrix.
library(splines)
x = seq(0, 100, by=0.1)
y = x*sin(x) + rnorm(length(x), 0, 0.1)
#use cross-validation to choose best smoothing parameter
spar = seq(0.01, 1, by = 0.01)
cv = rep_len(NA, length(spar))
for(i in 1:length(spar)){
tempfit = smooth.spline(x, y, spar = spar[i], cv=TRUE, all.knots = TRUE)
cv[i] = tempfit$cv.crit
}
#use the optimal smoothing parameter to produce a final fit
fit = smooth.spline(x, y, spar = spar[which(cv == min(cv))], cv=TRUE, all.knots = TRUE)
#calculate the smoothing matrix
L = matrix(nrow = length(x), ncol = length(x))
for(j in 1:length(x)){
yi = rep_len(0, length(x))
yi[j] = 1
L[,j] = predict(smooth.spline(x, yi, lambda = fit$lambda, cv=TRUE,
all.knots = TRUE), x)$y
}
The matrix $L$ is the resulting smoothing matrix. | Smoother matrix from smooth.spline | The above answers only approximate the smoothing matrix. Here is a solution that will get you the exact smoothing matrix from the r function smooth.spline(). The key is to recognize that the smoothing | Smoother matrix from smooth.spline
The above answers only approximate the smoothing matrix. Here is a solution that will get you the exact smoothing matrix from the r function smooth.spline(). The key is to recognize that the smoothing matrix is only a function of the values of $x$ and the penalization parameter $\lambda$, allowing us to smooth a vector $\tilde{y} = (0,0, ..., 0, 1, 0,...,0)^{T}$, and therefore get each column of the smoothing matrix.
library(splines)
x = seq(0, 100, by=0.1)
y = x*sin(x) + rnorm(length(x), 0, 0.1)
#use cross-validation to choose best smoothing parameter
spar = seq(0.01, 1, by = 0.01)
cv = rep_len(NA, length(spar))
for(i in 1:length(spar)){
tempfit = smooth.spline(x, y, spar = spar[i], cv=TRUE, all.knots = TRUE)
cv[i] = tempfit$cv.crit
}
#use the optimal smoothing parameter to produce a final fit
fit = smooth.spline(x, y, spar = spar[which(cv == min(cv))], cv=TRUE, all.knots = TRUE)
#calculate the smoothing matrix
L = matrix(nrow = length(x), ncol = length(x))
for(j in 1:length(x)){
yi = rep_len(0, length(x))
yi[j] = 1
L[,j] = predict(smooth.spline(x, yi, lambda = fit$lambda, cv=TRUE,
all.knots = TRUE), x)$y
}
The matrix $L$ is the resulting smoothing matrix. | Smoother matrix from smooth.spline
The above answers only approximate the smoothing matrix. Here is a solution that will get you the exact smoothing matrix from the r function smooth.spline(). The key is to recognize that the smoothing |
55,034 | Smoother matrix from smooth.spline | The accepted answer isn't correct here - smooth.matrix is working just fine.
The only reason fromsm and fromfit aren't the same in the above example is because of misplaced parentheses.
Replace fromsm <- sm%*%(bone$spnbmd[order(bone$age)]) with fromsm <- (sm%*%bone$spnbmd)[order(bone$age)] and they are the same. | Smoother matrix from smooth.spline | The accepted answer isn't correct here - smooth.matrix is working just fine.
The only reason fromsm and fromfit aren't the same in the above example is because of misplaced parentheses.
Replace froms | Smoother matrix from smooth.spline
The accepted answer isn't correct here - smooth.matrix is working just fine.
The only reason fromsm and fromfit aren't the same in the above example is because of misplaced parentheses.
Replace fromsm <- sm%*%(bone$spnbmd[order(bone$age)]) with fromsm <- (sm%*%bone$spnbmd)[order(bone$age)] and they are the same. | Smoother matrix from smooth.spline
The accepted answer isn't correct here - smooth.matrix is working just fine.
The only reason fromsm and fromfit aren't the same in the above example is because of misplaced parentheses.
Replace froms |
55,035 | What's the official name of a 1 to 1 line plot? | The official name of the line is 'identity line' or 'line of equality'. And if you are comparing measured data with predicted data, or two different models, you should standardize the axes: the starting and ending points of both axes should be the same. You can also plot the trend line in the scatter plot (measured ~ predicted) to better visualize the difference between the 1:1 line and the actual trend in the model.
Mathematically, it can be written as the line where:
$$y = x$$ | What's the official name of a 1 to 1 line plot? | The official name of the line is 'identity line' or 'line of equality'. And if you are comparing measured data with predicted data, or two different models, you should standardize the axis. The starti | What's the official name of a 1 to 1 line plot?
The official name of the line is 'identity line' or 'line of equality'. And if you are comparing measured data with predicted data, or two different models, you should standardize the axes: the starting and ending points of both axes should be the same. You can also plot the trend line in the scatter plot (measured ~ predicted) to better visualize the difference between the 1:1 line and the actual trend in the model.
Mathematically, it can be written as the line where:
$$y = x$$ | What's the official name of a 1 to 1 line plot?
The official name of the line is 'identity line' or 'line of equality'. And if you are comparing measured data with predicted data, or two different models, you should standardize the axis. The starti |
55,036 | What's the official name of a 1 to 1 line plot? | Based on your comment, I think what you are ultimately after is to assess agreement (see Wikipedia, or John Uebersax's website). I don't think there is a name for the plot you have in mind. I would just call it a scatterplot with a 1:1 reference line plotted. I think that's probably fine to do. I would not standardize your variables first, as that would prevent the plot from showing what you want to discover. If you wanted a quantitative value to describe the level of agreement, you could compute Lin's concordance coefficient to pair with your plot.
However, note that it is typically more difficult for people to assess agreement in this way. You might prefer to create a Bland-Altman plot (also called Tukey's mean-difference plot). You can see if the differences diverge from mean $0$ (and test them with a $t$-test), if the differences vary more at higher levels, if there is any residual curvature, etc.
If you just want to see if the distributional shapes differ, you can do a qq-plot as well, but I don't think that's what you're after. | What's the official name of a 1 to 1 line plot? | Based on your comment, I think what you are ultimately after is to assess agreement (see Wikipedia, or John Uebersax's website). I don't think there is a name for the plot you have in mind. I would | What's the official name of a 1 to 1 line plot?
Based on your comment, I think what you are ultimately after is to assess agreement (see Wikipedia, or John Uebersax's website). I don't think there is a name for the plot you have in mind. I would just call it a scatterplot with a 1:1 reference line plotted. I think that's probably fine to do. I would not standardize your variables first, as that would prevent the plot from showing what you want to discover. If you wanted a quantitative value to describe the level of agreement, you could compute Lin's concordance coefficient to pair with your plot.
However, note that it is typically more difficult for people to assess agreement in this way. You might prefer to create a Bland-Altman plot (also called Tukey's mean-difference plot). You can see if the differences diverge from mean $0$ (and test them with a $t$-test), if the differences vary more at higher levels, if there is any residual curvature, etc.
If you just want to see if the distributional shapes differ, you can do a qq-plot as well, but I don't think that's what you're after. | What's the official name of a 1 to 1 line plot?
Based on your comment, I think what you are ultimately after is to assess agreement (see Wikipedia, or John Uebersax's website). I don't think there is a name for the plot you have in mind. I would |
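For illustration, here is a small base-R sketch (my own, not from either answer) of a scatterplot with the identity line plus a simple Bland-Altman plot; the simulated measured/predicted values are made up:
set.seed(1)
measured  <- rnorm(50, mean = 10, sd = 2)
predicted <- measured + rnorm(50, sd = 0.5)
lims <- range(c(measured, predicted))
plot(predicted, measured, xlim = lims, ylim = lims,
     xlab = "Predicted", ylab = "Measured")
abline(0, 1, lty = 2)                          # identity line y = x
d   <- measured - predicted                    # Bland-Altman: difference vs mean of each pair
avg <- (measured + predicted) / 2
plot(avg, d, xlab = "Mean of pair", ylab = "Difference")
abline(h = mean(d) + c(-1.96, 0, 1.96) * sd(d), lty = c(2, 1, 2))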
55,037 | Does uncorrelation imply independence for marginally Gaussian random variables? | If $X = (X_1,\ldots, X_n)$ are jointly normal, too, then yes. Otherwise, no.
In this case $\Sigma = \text{diag}(\sigma_1^2,\ldots, \sigma_n^2)$ and $\mu = (\mu_1,\ldots,\mu_n)'$
\begin{align*}f_X(x) &= (2\pi)^{-\frac{n}{2}}|\Sigma|^{-\frac{1}{2}}\exp\left[-\frac{1}{2}(x-\mu)'\Sigma^{-1}(x-\mu) \right] \\
&= (2\pi)^{-\frac{n}{2}} (\sigma_1^2\cdots\sigma_n^2)^{-\frac{1}{2}}\exp\left[-\frac{1}{2}\sum_{i=1}^n\frac{(x_i-\mu_i)^2}{\sigma_i^2} \right] \\
&= \prod_{i=1}^n \left[\frac{1}{\sqrt{2\pi\sigma_i^2}} \exp\left(-\frac{(x_i-\mu_i)^2}{2\sigma_i^2} \right)\right] \\
&= \prod_{i=1}^n f_{X_i}(x_i).
\end{align*}
For an example of two random variables $X_1$ and $X_2$ that are uncorrelated but dependent, check out the example here. You can take $n=2$. Define $X_1 \sim \text{Normal}(0,1)$, let $W$ be $1$ or $-1$ with probability $.5$ each, independent of $X_1$, and then define $X_2 = WX_1$. Note that, by symmetry, $X_2 \sim \text{Normal}(0,1)$ as well, so both marginals are Gaussian.
The $X$s are un-correlated because
\begin{align*}
\text{Cov}(X_1,X_2) &= \text{Cov}(X_1,W X_1)\\
&= E[X_1^2W] \\
&= E[X_1^2]E[W] \\
&=0
\end{align*}
But they are very dependent. | Does uncorrelation imply independence for marginally Gaussian random variables? | If $X = (X_1,\ldots, X_n)$ are jointly normal, too, then yes. Otherwise, no.
In this case $\Sigma = \text{diag}(\sigma_1^2,\ldots, \sigma_2^2)$ and $\mu = (\mu_1,\ldots,\mu_n)'$
\begin{align*}f_X(x) & | Does uncorrelation imply independence for marginally Gaussian random variables?
If $X = (X_1,\ldots, X_n)$ are jointly normal, too, then yes. Otherwise, no.
In this case $\Sigma = \text{diag}(\sigma_1^2,\ldots, \sigma_n^2)$ and $\mu = (\mu_1,\ldots,\mu_n)'$
\begin{align*}f_X(x) &= (2\pi)^{-\frac{n}{2}}|\Sigma|^{-\frac{1}{2}}\exp\left[-\frac{1}{2}(x-\mu)'\Sigma^{-1}(x-\mu) \right] \\
&= (2\pi)^{-\frac{n}{2}} (\sigma_1^2\cdots\sigma_n^2)^{-\frac{1}{2}}\exp\left[-\frac{1}{2}\sum_{i=1}^n\frac{(x_i-\mu_i)^2}{\sigma_i^2} \right] \\
&= \prod_{i=1}^n \left[\frac{1}{\sqrt{2\pi\sigma_i^2}} \exp\left(-\frac{(x_i-\mu_i)^2}{2\sigma_i^2} \right)\right] \\
&= \prod_{i=1}^n f_{X_i}(x_i).
\end{align*}
For an example of two random variables $X_1$ and $X_2$ that are uncorrelated but dependent, check out the example here. You can take $n=2$. Define $X_1 \sim \text{Normal}(0,1)$, let $W$ be $1$ or $-1$ with probability $.5$ each, independent of $X_1$, and then define $X_2 = WX_1$. Note that, by symmetry, $X_2 \sim \text{Normal}(0,1)$ as well, so both marginals are Gaussian.
The $X$s are un-correlated because
\begin{align*}
\text{Cov}(X_1,X_2) &= \text{Cov}(X_1,W X_1)\\
&= E[X_1^2W] \\
&= E[X_1^2]E[W] \\
&=0
\end{align*}
But they are very dependent. | Does uncorrelation imply independence for marginally Gaussian random variables?
If $X = (X_1,\ldots, X_n)$ are jointly normal, too, then yes. Otherwise, no.
In this case $\Sigma = \text{diag}(\sigma_1^2,\ldots, \sigma_2^2)$ and $\mu = (\mu_1,\ldots,\mu_n)'$
\begin{align*}f_X(x) & |
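A quick R simulation of the counterexample above (my own sketch, not part of the original answer):
set.seed(1)
n  <- 1e5
x1 <- rnorm(n)                              # X1 ~ N(0,1)
w  <- sample(c(-1, 1), n, replace = TRUE)   # W = +/-1 with equal probability
x2 <- w * x1                                # X2 = W * X1, also marginally N(0,1)
cor(x1, x2)                                 # approximately 0: uncorrelated
cor(abs(x1), abs(x2))                       # exactly 1: clearly not independent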
55,038 | What is the normal distribution when standard deviation is zero? | When standard deviation is zero, your Gaussian (normal) PDF turns into a Dirac delta function. You can't simply plug zero standard deviation into the conventional expression. For instance, if the PDF is plugged into some kind of numerical integration, this won't work. You have to modify the integrals. In the example below we calculate the mean value of a function $g(x)$ using the Gaussian density $f(x|\mu,\sigma^2)$:
$$\int g(x)f(x|\mu,\sigma^2)dx$$
when you plug zero variance, this becomes delta-functional:
$$\int g(x)f(x|\mu,0)dx=\int g(x)\delta(x-\mu)dx=g(\mu)$$
Your code has to be able to recognize this, otherwise it'll fail.
One way to fix this is surprisingly simple: plug a very small value of $\sigma$ into the Gaussian instead of zero. You'll have to pick the right $\sigma$ for your situation. If it's too small then it'll blow up your exponent, and the integrals will not work or the precision will be low. This relies on the well-known Gaussian approximation of the delta function: $$\delta(x)=\lim_{\sigma\to 0}\mathcal N(0,\sigma)$$ | What is the normal distribution when standard deviation is zero? | When standard deviation is zero, your Gaussian (normal) PDF turns into Dirac delta function. You can't simply plug zero standard deviation into the conventional expression. For instance, if the PDF is
When standard deviation is zero, your Gaussian (normal) PDF turns into a Dirac delta function. You can't simply plug zero standard deviation into the conventional expression. For instance, if the PDF is plugged into some kind of numerical integration, this won't work. You have to modify the integrals. In the example below we calculate the mean value of a function $g(x)$ using the Gaussian density $f(x|\mu,\sigma^2)$:
$$\int g(x)f(x|\mu,\sigma^2)dx$$
when you plug zero variance, this becomes delta-functional:
$$\int g(x)f(x|\mu,0)dx=\int g(x)\delta(x-\mu)dx=g(\mu)$$
Your code has to be able to recognize this, otherwise it'll fail.
One way to fix this is surprisingly simple: plug a very small value of $\sigma$ into the Gaussian instead of zero. You'll have to pick the right $\sigma$ for your situation. If it's too small then it'll blow up your exponent, and the integrals will not work or the precision will be low. This relies on the well-known Gaussian approximation of the delta function: $$\delta(x)=\lim_{\sigma\to 0}\mathcal N(0,\sigma)$$ | What is the normal distribution when standard deviation is zero?
When standard deviation is zero, your Gaussian (normal) PDF turns into Dirac delta function. You can't simply plug zero standard deviation into the conventional expression. For instance, if the PDF is |
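A short R illustration of the small-$\sigma$ trick described above (my own sketch; the function $g$ and the values of $\sigma$ are arbitrary):
g  <- function(x) x^2 + 1
mu <- 2
for (sigma in c(1, 0.1, 1e-3)) {
  # integrate g(x) against a narrow Gaussian centred at mu
  val <- integrate(function(x) g(x) * dnorm(x, mu, sigma),
                   lower = mu - 20 * sigma, upper = mu + 20 * sigma)$value
  cat(sprintf("sigma = %g: integral = %.6f  (g(mu) = %g)\n", sigma, val, g(mu)))
}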
55,039 | What is the normal distribution when standard deviation is zero? | This is a question in the Statistics textbook by Hogg and Craig! The authors give a hint: Look at the moment generating function of the normal and plug in sigma = 0.
So before going to the answer, let's remember why this works - moment generating functions are unique.
The moment generating function of the normal, N(a,b^2), M(t|a,b^2) =
exp(at + t^2 * b^2 /2)
Setting b=0, we have
M(t|a,b^2) = exp(at)
This is the moment generating function of
the degenerate distribution that puts all of its probability at a, i.e. the constant random variable X = a. (I wouldn't call this a Dirac Delta function, I would call this a constant. Note the Dirac Delta function is not technically a PDF.)
This result shouldn't be a surprise. As the variance decreases, the probability mass concentrates around the mean, and so the limiting distribution is a point mass at the mean.
Of course this can be proven directly; we can look at a sequence of random variables Xn ~ N(a, b^2/n) and note that their variance approaches 0 as n approaches infinity. Seeing that the sequence converges in probability to the constant is pretty easy; you can show it converges almost surely but this will take a little more work.
But that isn't the exact question - which can be answered as Hogg and Craig suggested! | What is the normal distribution when standard deviation is zero? | This is a question in the Statistics textbook by Hogg and Craig! The authors give a hint: Look at the moment generating function of the normal and plug in sigma = 0.
So before going to the answer, l | What is the normal distribution when standard deviation is zero?
This is a question in the Statistics textbook by Hogg and Craig! The authors give a hint: Look at the moment generating function of the normal and plug in sigma = 0.
So before going to the answer, let's remember why this works - moment generating functions are unique.
The moment generating function of the normal, N(a,b^2), M(t|a,b^2) =
exp(at + t^2 * b^2 /2)
Setting b=0, we have
M(t|a,b^2) = exp(at)
This is the moment generating function of
the degenerate distribution that puts all of its probability at a, i.e. the constant random variable X = a. (I wouldn't call this a Dirac Delta function, I would call this a constant. Note the Dirac Delta function is not technically a PDF.)
This result shouldn't be a surprise. As the variance decreases, the probability mass concentrates around the mean, and so the limiting distribution is a point mass at the mean.
Of course this can be proven directly; we can look at a sequence of random variables Xn ~ N(a, b^2/n) and note that their variance approaches 0 as n approaches infinity. Seeing that the sequence converges in probability to the constant is pretty easy; you can show it converges almost surely but this will take a little more work.
But that isn't the exact question - which can be answered as Hogg and Craig suggested! | What is the normal distribution when standard deviation is zero?
This is a question in the Statistics textbook by Hogg and Craig! The authors give a hint: Look at the moment generating function of the normal and plug in sigma = 0.
So before going to the answer, l |
55,040 | Object of type 'closure' is not subsettable [closed] | You have assigned an object to the name dt. Unfortunately dt is also the name of a built in R function.
Because of R's scoping rules (though, as @ssdecontrol points out in the comments, it's not clear why), the dt on the right hand side of your last line resolves to this function, not the data table you created. A function in R is called a closure internally, and you are trying to use the [ function on it, which is called subsetting. Hence, you are attempting to subset a closure, kaboom.
To resolve, choose a different name for your data table on the first line. | Object of type 'closure' is not subsettable [closed] | You have assigned an object to the name dt. Unfortunately dt is also the name of a built in R function.
Because of R's scoping rules (though, as @ssdecontrol points out in the comments, it's not clea | Object of type 'closure' is not subsettable [closed]
You have assigned an object to the name dt. Unfortunately dt is also the name of a built in R function.
Because of R's scoping rules (though, as @ssdecontrol points out in the comments, it's not clear why), the dt on the right hand side of your last line resolves to this function, not the data table you created. A function in R is called a closure internally, and you are trying to use the [ function on it, which is called subsetting. Hence, you are attempting to subset a closure, kaboom.
To resolve, choose a different name for your data table on the first line. | Object of type 'closure' is not subsettable [closed]
You have assigned an object to the name dt. Unfortunately dt is also the name of a built in R function.
Because of R's scoping rules (though, as @ssdecontrol points out in the comments, it's not clea |
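A minimal illustration (my own, not from the answer) of the clash with the built-in dt() and the simple fix of renaming:
# dt is the Student-t density function in the stats package, so without a data
# object named dt the subset below fails:
# dt[1, ]          # Error: object of type 'closure' is not subsettable
my_dt <- data.frame(x = 1:3, y = 4:6)   # use a name that does not mask a base function
my_dt[1, ]                              # works as expected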
55,041 | Clustering: k-means alternatives when its assumptions do not hold | In the answers to How to understand the drawbacks of K-means we have already discussed drawbacks of k-means in detail.
Some of them may appear easy to counter for toy examples (e.g. by undoing scaling for distorted data sets) but real data will be much more complex, and global normalization may not be enough, unfortunately.
Rather than hot-fixing these problems, you should first verify whether the k-means objective solves your problem.
It's not just about clustering points into partitions. There is an underlying problem that you want to solve. With clustering it usually is about getting "insight" into your data. A simple approach here is to study the results carefully and discard any that seem suspicious. This cannot be automated (you can automate generating diverse results to avoid redundancies though, see "alternative clustering").
k-means is better seen as a vector quantization approach than as a clustering method. It does not attempt to identify the structure that is in the data; instead, it postulates a structure (k centers) and then optimizes the model parameters. During optimization it assumes that all variables are weighted equally and the cost of each instance is measured as sum-of-squares $$\text{cost}(x_i) = \min_{\text{center }c} \sum_{\text{dim }d} (x_{i,d}-\mu_{c,d})^2$$ It will fail whenever the data does not have the postulated structure, or this cost function is inappropriate.
Thus, whenever considering k-means, the first thing you need to check is whether minimizing the above equation solves your problem. If you don't want to answer the question "which centers have the smallest cost according to the above equation" then k-means is the wrong algorithm!
Here is a positive example, where k-means is a good choice:
We have an RGB image with 24 bit depth (8 bits per channel) and we want to compress that image for fast loading on the interwebz. Instead of storing 24 bits per pixel, we want to use just 4 bits that index into a 16-color palette. Run-length encoding will further help compressing the data.
So we have to approximate every pixel with one palette entry. Larger deviations are bad, so using the square helps, and the color approximation error of a pixel can be computed as $\Delta R^2+\Delta G^2+\Delta B^2$. This matches exactly the cost function of k-means - and thus we can use this algorithm to optimize our color palette. | Clustering: k-means alternatives when its assumptions do not hold | In the answers to How to understand the drawbacks of K-means we have already discussed drawbacks of k-means in detail.
Some of them may appear easy to counter for toy examples (e.g. by undoing scaling | Clustering: k-means alternatives when its assumptions do not hold
In the answers to How to understand the drawbacks of K-means we have already discussed drawbacks of k-means in detail.
Some of them may appear easy to counter for toy examples (e.g. by undoing scaling for distorted data sets) but real data will be much more complex, and global normalization may not be enough, unfortunately.
Rather than hot-fixing these problems, you should first verify whether the k-means objective solves your problem.
It's not just about clustering points into partitions. There is an underlying problem that you want to solve. With clustering it usually is about getting "insight" into your data. A simple approach here is to study the results carefully and discard any that seem suspicious. This cannot be automated (you can automate generating diverse results to avoid redundancies though, see "alternative clustering").
k-means is better seen as a vector quantization approach than as a clustering method. It does not attempt to identify the structure that is in the data; instead, it postulates a structure (k centers) and then optimizes the model parameters. During optimization it assumes that all variables are weighted equally and the cost of each instance is measured as sum-of-squares $$\text{cost}(x_i) = \min_{\text{center }c} \sum_{\text{dim }d} (x_{i,d}-\mu_{c,d})^2$$ It will fail whenever the data does not have the postulated structure, or this cost function is inappropriate.
Thus, whenever considering k-means, the first thing you need to check is whether minimizing the above equation solves your problem. If you don't want to answer the question "which centers have the smallest cost according to the above equation" then k-means is the wrong algorithm!
Here is a positive example, where k-means is a good choice:
We have an RGB image with 24 bit depth (8 bits per channel) and we want to compress that image for fast loading on the interwebz. Instead of storing 24 bits per pixel, we want to use just 4 bits that index into a 16-color palette. Run-length encoding will further help compressing the data.
So we have to approximate every pixel with one palette entry. Larger deviations are bad, so using the square helps, and the color approximation error of a pixel can be computed as $\Delta R^2+\Delta G^2+\Delta B^2$. This matches exactly the cost function of k-means - and thus we can use this algorithm to optimize our color palette. | Clustering: k-means alternatives when its assumptions do not hold
In the answers to How to understand the drawbacks of K-means we have already discussed drawbacks of k-means in detail.
Some of them may appear easy to counter for toy examples (e.g. by undoing scaling |
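As a small illustration of the palette example (my own R sketch, not from the answer; the random "pixels" stand in for a real image):
set.seed(1)
pixels <- matrix(runif(3000), ncol = 3,
                 dimnames = list(NULL, c("R", "G", "B")))   # 1000 fake RGB pixels
km  <- kmeans(pixels, centers = 16, nstart = 5)             # fit a 16-colour palette
pal <- km$centers                                           # the palette entries
quantized <- pal[km$cluster, ]                              # nearest palette colour per pixel
mean(rowSums((pixels - quantized)^2))                       # average squared colour error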
55,042 | How does one decide on whether to use a GLMM versus an LME? And how do you select the random/fixed effects? | If you have count data as the response variable then you should be using a glmm. A poisson model is appropriate so long as it is not over-dispersed or zero-inflated, in which case you will need to consider other glmms.
If I understood the description correctly, then you have 3 repeated measures in 2 sites where each site has 5 plots. So plots are nested within sites, but you don't have enough sites, or plots, to treat them as nested with the usual syntax (1|site/plot), so instead you could use the combination of site and plot as the grouping factor (1|site:plot). Treatment is clearly a fixed effect and there is no justification for treating it as random. There are only 3 years, so this can be treated as fixed too.
So I would suggest a model such as:
glmer(Number~Year*Treatment+(1|site:plot),data=data,family=poisson) | How does one decide on whether to use a GLMM versus an LME? And how do you select the random/fixed e | If you have count data as the response variable then you should be using a glmm. A poisson model is appropriate so long as it is not over-dispersed or zero-inflated, in which case you will need to con | How does one decide on whether to use a GLMM versus an LME? And how do you select the random/fixed effects?
If you have count data as the response variable then you should be using a glmm. A poisson model is appropriate so long as it is not over-dispersed or zero-inflated, in which case you will need to consider other glmms.
If I understood the description correctly, then you have 3 repeated measures in 2 sites where each site has 5 plots. So plots are nested within sites, but you don't have enough sites, or plots, to treat them as nested with the usual syntax (1|site/plot), so instead you could use the combination of site and plot as the grouping factor (1|site:plot). Treatment is clearly a fixed effect and there is no justification for treating it as random. There are only 3 years, so this can be treated as fixed too.
So I would suggest a model such as:
glmer(Number~Year*Treatment+(1|site:plot),data=data,family=poisson) | How does one decide on whether to use a GLMM versus an LME? And how do you select the random/fixed e
If you have count data as the response variable then you should be using a glmm. A poisson model is appropriate so long as it is not over-dispersed or zero-inflated, in which case you will need to con |
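One rough way to check the over-dispersion caveat mentioned above (my own sketch, not from the answer; fit is assumed to be the glmer model suggested there, so the calls are left commented out):
library(lme4)
# fit <- glmer(Number ~ Year * Treatment + (1 | site:plot),
#              data = data, family = poisson)
overdispersion_ratio <- function(fit) {
  # Pearson chi-square divided by residual df; values well above 1 suggest overdispersion
  sum(residuals(fit, type = "pearson")^2) / df.residual(fit)
}
# overdispersion_ratio(fit)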
55,043 | Why are the predictions of a quantile regression model changed by an increasing transformation of the DV? | The equivariance to monotone transformations property that implies $Q_q(y \vert x)=\exp \{Q_q(\ln y|x )\}$ is exact only if the conditional quantile function is correctly specified. This is unlikely to be the case in practice, and is not the case in your simulation, since $\exp \{x+\varepsilon\} \ne x + \varepsilon$. The only case where the linear model will be exact is when all regressors are discrete and we specify a fully saturated model with dummy variables as regressors that exhaust all the possible interactions.
This will give you much better results; here I make $y$ positive by adding a constant (instead of exponentiating), so that the log is defined and there are no zeros or negatives:
library(quantreg)
set.seed(1)
x1 = rnorm(100)
y = 10 + x1 + 3*rnorm(100)
m = rq(y ~1 + x1, tau = .5)
p1 = predict(m)
m2 = rq(log(y) ~1 + x1, tau = .5)
p2 = exp(predict(m2))
print(head(p1))
print(head(p2)) | Why are the predictions of a quantile regression model changed by an increasing transformation of th | The equivariance to monotone transformations property that implies $Q_q(y \vert x)=\exp \{Q_q(\ln y|x )\}$ is exact only if the conditional quantile function is correctly specified. This is unlikely t | Why are the predictions of a quantile regression model changed by an increasing transformation of the DV?
The equivariance to monotone transformations property that implies $Q_q(y \vert x)=\exp \{Q_q(\ln y|x )\}$ is exact only if the conditional quantile function is correctly specified. This is unlikely to be the case in practice, and is not the case in your simulation, since $\exp \{x+\varepsilon\} \ne x + \varepsilon$. The only case where the linear model will be exact is when all regressors are discrete and we specify a fully saturated model with dummy variables as regressors that exhaust all the possible interactions.
This will give you much better results; here I make $y$ positive by adding a constant (instead of exponentiating), so that the log is defined and there are no zeros or negatives:
library(quantreg)
set.seed(1)
x1 = rnorm(100)
y = 10 + x1 + 3*rnorm(100)
m = rq(y ~1 + x1, tau = .5)
p1 = predict(m)
m2 = rq(log(y) ~1 + x1, tau = .5)
p2 = exp(predict(m2))
print(head(p1))
print(head(p2)) | Why are the predictions of a quantile regression model changed by an increasing transformation of th
The equivariance to monotone transformations property that implies $Q_q(y \vert x)=\exp \{Q_q(\ln y|x )\}$ is exact only if the conditional quantile function is correctly specified. This is unlikely t |
55,044 | Why are the predictions of a quantile regression model changed by an increasing transformation of the DV? | This isn't a complete answer, but it may help.
The covariate in your model may be a red herring, the issue is reproducible with an intercept only model
set.seed(1)
y = runif(100, 0, 1) # Need to keep it positive for the log.
m = rq(y ~ 1, tau = .5)
p1 = predict(m)
m2 = rq(log(y) ~ 1, tau = .5)
p2 = exp(predict(m2))
print(head(p1))
print(head(p2))
which results in
> print(head(p1))
1 2 3 4 5 6
0.4820801 0.4820801 0.4820801 0.4820801 0.4820801 0.4820801
> print(head(p2))
1 2 3 4 5 6
0.4935413 0.4935413 0.4935413 0.4935413 0.4935413 0.4935413
But if we redo this whole thing with an odd number of data points
set.seed(1)
y = runif(101, 0, 1) # <- Now it's odd!
m = rq(y ~ 1, tau = .5)
p1 = predict(m)
m2 = rq(log(y) ~ 1, tau = .5)
p2 = exp(predict(m2))
print(head(p1))
print(head(p2))
we get
1 2 3 4 5 6
0.4935413 0.4935413 0.4935413 0.4935413 0.4935413 0.4935413
> print(head(p2))
1 2 3 4 5 6
0.4935413 0.4935413 0.4935413 0.4935413 0.4935413 0.4935413
Which is on the nose.
It's easy to reproduce this without the quantile regression call
> x <- c(1, 2, 3, 4, 5, 6)
> median(x)
[1] 3.5
> exp(median(log(x)))
[1] 3.464102
>
> x <- c(1, 2, 3, 4, 5, 6, 7)
> median(x)
[1] 4
> exp(median(log(x)))
[1] 4
I suspect this has something to do with the behaviour you're seeing, but it's not clear how to incorporate the covariate into the argument. | Why are the predictions of a quantile regression model changed by an increasing transformation of th | This isn't a complete answer, but it may help.
The covariate in your model may be a red herring, the issue is reproducible with an intercept only model
set.seed(1)
y = runif(100, 0, 1) # Need to kee | Why are the predictions of a quantile regression model changed by an increasing transformation of the DV?
This isn't a complete answer, but it may help.
The covariate in your model may be a red herring, the issue is reproducible with an intercept only model
set.seed(1)
y = runif(100, 0, 1) # Need to keep it positive for the log.
m = rq(y ~ 1, tau = .5)
p1 = predict(m)
m2 = rq(log(y) ~ 1, tau = .5)
p2 = exp(predict(m2))
print(head(p1))
print(head(p2))
which results in
> print(head(p1))
1 2 3 4 5 6
0.4820801 0.4820801 0.4820801 0.4820801 0.4820801 0.4820801
> print(head(p2))
1 2 3 4 5 6
0.4935413 0.4935413 0.4935413 0.4935413 0.4935413 0.4935413
But if we redo this whole thing with an odd number of data points
set.seed(1)
y = runif(101, 0, 1) # <- Now it's odd!
m = rq(y ~ 1, tau = .5)
p1 = predict(m)
m2 = rq(log(y) ~ 1, tau = .5)
p2 = exp(predict(m2))
print(head(p1))
print(head(p2))
we get
1 2 3 4 5 6
0.4935413 0.4935413 0.4935413 0.4935413 0.4935413 0.4935413
> print(head(p2))
1 2 3 4 5 6
0.4935413 0.4935413 0.4935413 0.4935413 0.4935413 0.4935413
Which is on the nose.
It's easy to reproduce this without the quantile regression call
> x <- c(1, 2, 3, 4, 5, 6)
> median(x)
[1] 3.5
> exp(median(log(x)))
[1] 3.464102
>
> x <- c(1, 2, 3, 4, 5, 6, 7)
> median(x)
[1] 4
> exp(median(log(x)))
[1] 4
I suspect this has something to do with the behaviour you're seeing, but it's not clear how to incorporate the covariate into the argument. | Why are the predictions of a quantile regression model changed by an increasing transformation of th
This isn't a complete answer, but it may help.
The covariate in your model may be a red herring, the issue is reproducible with an intercept only model
set.seed(1)
y = runif(100, 0, 1) # Need to kee |
55,045 | Why are the predictions of a quantile regression model changed by an increasing transformation of the DV? | The fitted values are calculated by substituting in the vector values of the independent variable to the coefficient estimates (in this case of the median).
$y = mx +c$.
There is a direct linear relationship between the original (m) and log (m2) predictions (i.e. the same x with a different slope and intercept).
Here's a code snippet to get to the nuts and bolts of it:
set.seed(1)
x1 = rnorm(100)
y = exp(x1 + 3*rnorm(100))
X <- as.matrix(cbind("(Intercept)"=rep(1,length(x1)),x1))
rq1 <- rq.fit.fnb(X, y, tau = 0.5)
resid <- (y - X %*% rq1$coefficients)
fit <- X %*% rq1$coefficients
rq2 <- rq.fit.fnb(X, log(y), tau = 0.5)
residlog <- (y - X %*% rq2$coefficients)
fitlog <- X %*% rq2$coefficients
model <- coef(lm(fit~fitlog))
plot(fitlog ~ fit)
head(fit)
head((fitlog*model[2])+model[1])
[,1]
[1,] 0.4769196
[2,] 0.8292802
[3,] 0.3859367
[4,] 1.4432873
[5,] 0.8927255
[6,] 0.3925308
[,1]
[1,] 0.4769196
[2,] 0.8292802
[3,] 0.3859367
[4,] 1.4432873
[5,] 0.8927255
[6,] 0.3925308
Refs for the mathematics of quantile regression:
https://projecteuclid.org/download/pdf_1/euclid.ss/1030037960
http://www.econ.uiuc.edu/~roger/research/rq/rq.pdf | Why are the predictions of a quantile regression model changed by an increasing transformation of th | The fitted values are calculated by substituting in the vector values of the independent variable to the coefficient estimates (in this case of the median).
$y = mx +c$.
There is a direct linear rela | Why are the predictions of a quantile regression model changed by an increasing transformation of the DV?
The fitted values are calculated by substituting in the vector values of the independent variable to the coefficient estimates (in this case of the median).
$y = mx +c$.
There is a direct linear relationship between the original (m) and log (m2) predictions (i.e. the same x with a different slope and intercept).
Here's a code snippet to get to the nuts and bolts of it:
set.seed(1)
x1 = rnorm(100)
y = exp(x1 + 3*rnorm(100))
X <- as.matrix(cbind("(Intercept)"=rep(1,length(x1)),x1))
rq1 <- rq.fit.fnb(X, y, tau = 0.5)
resid <- (y - X %*% rq1$coefficients)
fit <- X %*% rq1$coefficients
rq2 <- rq.fit.fnb(X, log(y), tau = 0.5)
residlog <- (y - X %*% rq2$coefficients)
fitlog <- X %*% rq2$coefficients
model <- coef(lm(fit~fitlog))
plot(fitlog ~ fit)
head(fit)
head((fitlog*model[2])+model[1])
[,1]
[1,] 0.4769196
[2,] 0.8292802
[3,] 0.3859367
[4,] 1.4432873
[5,] 0.8927255
[6,] 0.3925308
[,1]
[1,] 0.4769196
[2,] 0.8292802
[3,] 0.3859367
[4,] 1.4432873
[5,] 0.8927255
[6,] 0.3925308
Refs for the mathematics of quantile regression:
https://projecteuclid.org/download/pdf_1/euclid.ss/1030037960
http://www.econ.uiuc.edu/~roger/research/rq/rq.pdf | Why are the predictions of a quantile regression model changed by an increasing transformation of th
The fitted values are calculated by substituting in the vector values of the independent variable to the coefficient estimates (in this case of the median).
$y = mx +c$.
There is a direct linear rela |
55,046 | Financial Slang and NLP for Sentiment Analysis | You could try word2vec, which has a nice implementation in the Gensim library.
It should end up having similar vectors for the abbreviation (or slang) and the full word, and it can certainly be used for sentiment analysis. You could train it on a large number of tweets if there is a financial set somewhere. You would then use a linear model like logistic regression or an SVM to classify.
This is a large tweet model, and I don't know if it would have a financial corpus within it of sufficient size, but it is pretty large: http://www.fredericgodin.com/software/ (400 million tweets, 4 GB file).
It should end up having similar vectors for the abbreviation (or slang) and the full word, and certainly can be used with se | Financial Slang and NLP for Sentiment Analysis
You could try word2vec, which has a nice implementation in the Gensim library.
It should end up having similar vectors for the abbreviation (or slang) and the full word, and it can certainly be used for sentiment analysis. You could train it on a large number of tweets if there is a financial set somewhere. You would then use a linear model like logistic regression or an SVM to classify.
This is a large tweet model, and I don't know if it would have a financial corpus within it of sufficient size, but it is pretty large: http://www.fredericgodin.com/software/ (400 million tweets, 4 GB file).
you could try word2vec, which has a nice implementation in Gensim library.
It should end up having similar vectors for the abbreviation (or slang) and the full word, and certainly can be used with se |
55,047 | Financial Slang and NLP for Sentiment Analysis | You could use Kim's Character-Aware Neural Language Models https://arxiv.org/abs/1508.06615 as an alternative to word2vec. It uses a CNN over character inputs to produce a fixed size vector for each word. It can scale to previously unseen words. This feature of scaling to previously unseen words is why I prefer it over "simple" word2vec.
FastText by Facebook is also interesting, and quicker than Kim's model.
You could use Kim's Character-Aware Neural Language Models https://arxiv.org/abs/1508.06615 as an alternative to word2vec. It uses a CNN over character inputs to produce a fixed size vector for each word. It can scale to previously unseen words. This feature of scaling to previously unseen words is why I prefer it over "simple" word2vec.
FastText by Facebook is also interesting, and quicker than Kim's model.
You could use Kim's Character-Aware Neural Language Models https://arxiv.org/abs/1508.06615 as an alternative to word2vec. It uses a CNN over character inputs to produce a fixed size vector for each w |
55,048 | How to efficiently choose $n$ subset out of a set of $m$ many numbers, in a randomized uniform manner? | The "suggested algorithm" is incorrect. One way to see why that is so is to count the number of equiprobable permutations performed by the algorithm. At each step there are $m$ possible values for $k$, whence after $n$ steps there are $m^n$ possible results. Although many results will be duplicated, the point is that the probability of outputting any particular permutation must be a multiple of $m^{-n}$. However, the correct probability, $1/m!$, is rarely such a multiple. For instance, with $m=3$ and $n=2$ you will produce $3^2=9$ possible permutations, each with chance $1/9$, but there are only $m!=6$ distinct permutations. Since $1/6$ is not a multiple of $1/9$, none of the permutations will be produced with a chance of $1/6$.
There are better ways. One is "Algorithm P" from Knuth's Seminumerical Algorithms, section 3.4.2:
for i := 1 to n do
Swap(x[i], x[RandInt(i,m)])
The proof that this works is by induction on $m$.
Obviously it works in the base case (either $m=0$ or $m=1$, as you prefer).
In the first step, every $x_k$ has an equal chance of being moved into the first position. The algorithm proceeds to work recursively and independently on the elements in positions $2$ through $n$, where by the inductive hypothesis every one of those elements has an equal and independent chance of appearing anywhere among the first $n-1$ positions. Consequently all $m$ of the elements of $x$ have equal and independent chances of appearing among the first $n$ positions when the algorithm terminates, QED.
Knuth's Algorithm S guarantees the output will be in the order in which they originally appeared in $x$:
Select := n
Remaining := m
for i := 1 to m do
    if RandReal(0,1) * Remaining < Select then
        output x[i]
        Select--
    Remaining--
Exercises (2), (3), and (4) in that section ask the reader to prove this algorithm works. Once again the proof is an induction on $m$.
When $m=n=1$, $x$ itself is always returned.
Otherwise, $x_1$ will be output with probability $n/m$ in the first step, which is the correct probability, and the algorithm proceeds recursively to output $n-1$ elements of $x_{-1} = (x_2, x_3, \ldots, x_m)$ in sorted order if $x_1$ was output and otherwise will output $n$ elements of $x_{-1}$ in sorted order, QED.
If you would like to follow along, here is an executable version in R, along with a quick simulation to verify that all elements of $x$ have equal chances of being included.
algorithm.S <- function(x, n=1) {
m <- length(x)
#
# Check input.
#
if (n < 0 || n > m) stop("Subset size out of range.")
#
# Handle special cases that R has trouble with.
#
if (m <= 1)
if (n==0) return (x[c()]) else return(x)
#
# The algorithm.
#
y <- x[1:n]
select <- n
remaining <- m
j <- 0
for (i in 1:m) {
if (runif(1) * remaining < select) {
      j <- j+1; y[j] <- x[i]
select <- select-1
}
remaining <- remaining-1
}
return(y)
}
x <- 1:10
sim <- replicate(1e4, algorithm.S(x, 4))
hist(sim, breaks=seq(min(x)-1/2, max(x)+1/2, by=1)) | How to efficiently choose $n$ subset out of a set of $m$ many numbers, in a randomized uniform manne | The "suggested algorithm" is incorrect. One way to see why that is so is to count the number of equiprobable permutations performed by the algorithm. At each step there are $m$ possible values for $ | How to efficiently choose $n$ subset out of a set of $m$ many numbers, in a randomized uniform manner?
The "suggested algorithm" is incorrect. One way to see why that is so is to count the number of equiprobable permutations performed by the algorithm. At each step there are $m$ possible values for $k$, whence after $n$ steps there are $m^n$ possible results. Although many results will be duplicated, the point is that the probability of outputting any particular permutation must be a multiple of $m^{-n}$. However, the correct probability, $1/m!$, is rarely such a multiple. For instance, with $m=3$ and $n=2$ you will produce $3^2=9$ possible permutations, each with chance $1/9$, but there are only $m!=6$ distinct permutations. Since $1/6$ is not a multiple of $1/9$, none of the permutations will be produced with a chance of $1/6$.
There are better ways. One is "Algorithm P" from Knuth's Seminumerical Algorithms, section 3.4.2:
for i := 1 to n do
Swap(x[i], x[RandInt(i,m)])
The proof that this works is by induction on $m$.
Obviously it works in the base case (either $m=0$ or $m=1$, as you prefer).
In the first step, every $x_k$ has an equal chance of being moved into the first position. The algorithm proceeds to work recursively and independently on the elements in positions $2$ through $n$, where by the inductive hypothesis every one of those elements has an equal and independent chance of appearing anywhere among the first $n-1$ positions. Consequently all $m$ of the elements of $x$ have equal and independent chances of appearing among the first $n$ positions when the algorithm terminates, QED.
Knuth's Algorithm S guarantees the output will be in the order in which they originally appeared in $x$:
Select := n
Remaining := m
for i := 1 to m do
    if RandReal(0,1) * Remaining < Select then
        output x[i]
        Select--
    Remaining--
Exercises (2), (3), and (4) in that section ask the reader to prove this algorithm works. Once again the proof is an induction on $m$.
When $m=n=1$, $x$ itself is always returned.
Otherwise, $x_1$ will be output with probability $n/m$ in the first step, which is the correct probability, and the algorithm proceeds recursively to output $n-1$ elements of $x_{-1} = (x_2, x_3, \ldots, x_m)$ in sorted order if $x_1$ was output and otherwise will output $n$ elements of $x_{-1}$ in sorted order, QED.
If you would like to follow along, here is an executable version in R, along with a quick simulation to verify that all elements of $x$ have equal chances of being included.
algorithm.S <- function(x, n=1) {
m <- length(x)
#
# Check input.
#
if (n < 0 || n > m) stop("Subset size out of range.")
#
# Handle special cases that R has trouble with.
#
if (m <= 1)
if (n==0) return (x[c()]) else return(x)
#
# The algorithm.
#
y <- x[1:n]
select <- n
remaining <- m
j <- 0
for (i in 1:m) {
if (runif(1) * remaining < select) {
      j <- j+1; y[j] <- x[i]
select <- select-1
}
remaining <- remaining-1
}
return(y)
}
x <- 1:10
sim <- replicate(1e4, algorithm.S(x, 4))
hist(sim, breaks=seq(min(x)-1/2, max(x)+1/2, by=1)) | How to efficiently choose $n$ subset out of a set of $m$ many numbers, in a randomized uniform manne
The "suggested algorithm" is incorrect. One way to see why that is so is to count the number of equiprobable permutations performed by the algorithm. At each step there are $m$ possible values for $ |
55,049 | How to efficiently choose $n$ subset out of a set of $m$ many numbers, in a randomized uniform manner? | Start at the beginning of the list.
Pick the element with probability equal to $\frac{n}{m}$. If it is chosen, set $n = n - 1$. Set $m = m - 1$. Now pass on to the next element and repeat until either $n$ or $m$ is zero.
Not tested in detail but should work.
Or use the facilities of your favourite statistical software. | How to efficiently choose $n$ subset out of a set of $m$ many numbers, in a randomized uniform manne | Start at the beginning of the list.
Pick the element with probability equal to $\frac{n}{m}$. If it is chosen set $n = n - 1$. Set $m = m - 1$ Now pass on to the next element and repeat until either $ | How to efficiently choose $n$ subset out of a set of $m$ many numbers, in a randomized uniform manner?
Start at the beginning of the list.
Pick the element with probability equal to $\frac{n}{m}$. If it is chosen, set $n = n - 1$. Set $m = m - 1$. Now pass on to the next element and repeat until either $n$ or $m$ is zero.
Not tested in detail but should work.
Or use the facilities of your favourite statistical software. | How to efficiently choose $n$ subset out of a set of $m$ many numbers, in a randomized uniform manne
Start at the beginning of the list.
Pick the element with probability equal to $\frac{n}{m}$. If it is chosen set $n = n - 1$. Set $m = m - 1$ Now pass on to the next element and repeat until either $ |
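A minimal R implementation of the scheme described in the answer above (my own sketch, not the author's code):
sequential_sample <- function(x, n) {
  m <- length(x)
  keep <- logical(m)
  for (i in seq_along(x)) {
    if (runif(1) < n / m) {   # keep element i with probability (still needed)/(still remaining)
      keep[i] <- TRUE
      n <- n - 1
    }
    m <- m - 1
    if (n == 0) break
  }
  x[keep]
}
set.seed(1)
sequential_sample(letters[1:10], 4)   # a subset of size 4, returned in original order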
55,050 | Propensity Score Matching for more than 2 groups | Check out the WeightIt package. You can simply provide a factor treatment variable and covariates and it will estimate balancing weights for that treatment. It provides an interface to other packages and methods that do this using a unified syntax.
Currently, it provides support for estimating balancing weights for multinomial treatments using multinomial logistic and probit regression propensity scores, generalized boosted modeling propensity scores (through twang), covariate balancing propensity score (through CBPS), entropy balancing (through ebal), optimization-based weights (through optweight), empirical balancing calibration weights (through ATE), and SuperLearner propensity scores (through SuperLearner). The syntax is the following:
w.out <- weightit(treat ~ cov1 + cov2 + cov3, data = data, method = "ps")
You can change the estimation method with the method parameter and others. | Propensity Score Matching for more than 2 groups | Check out the WeightIt package. You can simply provide a factor treatment variable and covariates and it will estimate balancing weights for that treatment. It provides an interface to other packages | Propensity Score Matching for more than 2 groups
Check out the WeightIt package. You can simply provide a factor treatment variable and covariates and it will estimate balancing weights for that treatment. It provides an interface to other packages and methods that do this using a unified syntax.
Currently, it provides support for estimating balancing weights for multinomial treatments using multinomial logistic and probit regression propensity scores, generalized boosted modeling propensity scores (through twang), covariate balancing propensity score (through CBPS), entropy balancing (through ebal), optimization-based weights (through optweight), empirical balancing calibration weights (through ATE), and SuperLearner propensity scores (through SuperLearner). The syntax is the following:
w.out <- weightit(treat ~ cov1 + cov2 + cov3, data = data, method = "ps")
You can change the estimation method with the method parameter and others. | Propensity Score Matching for more than 2 groups
Check out the WeightIt package. You can simply provide a factor treatment variable and covariates and it will estimate balancing weights for that treatment. It provides an interface to other packages |
55,051 | Propensity Score Matching for more than 2 groups | Propensity score calculation and subsequent paired analysis is possible in several ways. There are already some overlapping Q&A in CV that you might wish to look at:
Propensity Score Matching in R with Multiple Treatments
Software that matches 6 groups by propensity score?
Comparing two or more treatments with inverse probability of treatment weighting
My advice would be to use the twang R package. | Propensity Score Matching for more than 2 groups | Propensity score calculation and subsequent paired analysis is possible in several ways. There are already some overlapping Q&A in CV that you might wish to look at:
Propensity Score Matching in R wi | Propensity Score Matching for more than 2 groups
Propensity score calculation and subsequent paired analysis is possible in several ways. There are already some overlapping Q&A in CV that you might wish to look at:
Propensity Score Matching in R with Multiple Treatments
Software that matches 6 groups by propensity score?
Comparing two or more treatments with inverse probability of treatment weighting
My advice would be to use the twang R package. | Propensity Score Matching for more than 2 groups
Propensity score calculation and subsequent paired analysis is possible in several ways. There are already some overlapping Q&A in CV that you might wish to look at:
Propensity Score Matching in R wi |
55,052 | Propensity Score Matching for more than 2 groups | I have mostly used PSM for 2 class problems. We predict the probability of the treatment, and then compare the effect of treatment vs control within the same decile of our probability scores. Customers in the same decile are typically similar and so comparable.
So you can repeat the PSM twice, with the control group each time being the people who have taken the medicine on time. Though I am not sure if doing the test twice will lead to higher type 1 errors, as in multiple t-tests.
For PSM in R with two groups there is the package MatchIt, but I have not used it. | Propensity Score Matching for more than 2 groups | I have mostly used PSM for 2 class problems. We predict the probability of the treatment. And then compare effect of treatment vs control in same decile of our probability scores. Customers in same de
I have mostly used PSM for 2 class problems. We predict the probability of the treatment, and then compare the effect of treatment vs control within the same decile of our probability scores. Customers in the same decile are typically similar and so comparable.
So you can repeat the PSM twice, with the control group each time being the people who have taken the medicine on time. Though I am not sure if doing the test twice will lead to higher type 1 errors, as in multiple t-tests.
For PSM in R with two groups there is the package MatchIt, but I have not used it. | Propensity Score Matching for more than 2 groups
I have mostly used PSM for 2 class problems. We predict the probability of the treatment. And then compare effect of treatment vs control in same decile of our probability scores. Customers in same de |
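A rough sketch of the within-decile comparison described above (my own simulated example, not from the answer):
set.seed(1)
n  <- 2000
x1 <- rnorm(n); x2 <- rnorm(n)
treat <- rbinom(n, 1, plogis(0.5 * x1 - 0.3 * x2))          # treatment depends on covariates
y <- 1 + 0.5 * x1 + 0.2 * x2 + 0.4 * treat + rnorm(n)       # true treatment effect = 0.4
ps <- fitted(glm(treat ~ x1 + x2, family = binomial))       # propensity scores
decile <- cut(ps, quantile(ps, probs = seq(0, 1, 0.1)), include.lowest = TRUE)
diff_by_decile <- tapply(seq_len(n), decile, function(idx)
  mean(y[idx][treat[idx] == 1]) - mean(y[idx][treat[idx] == 0]))
mean(diff_by_decile, na.rm = TRUE)                          # roughly recovers 0.4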
55,053 | Generating random numbers from a multiplication of CDFs | If the $X_i$s are independent, $G(x)$ is the CDF of the maximum of $X_1,\ldots,X_n$, since
\begin{equation}
\mathbb{P}(\max ( X_1,X_2,\ldots,X_n) \leq x) = \mathbb{P}(X_1 \leq x, X_2\leq x, \ldots, X_n \leq x) \\ = \mathbb{P}(X_1 \leq x)\,\mathbb{P}(X_2 \leq x)\,\ldots\,\mathbb{P}(X_n \leq x) = F_1(x)\,F_2(x)\,\ldots\,F_n(x).
\end{equation}
Thus, a realization from the distribution $G$ may be generated by simulating independent realizations from all $F_i$s and taking the maximum. 100 independent random numbers may be generated by repeating this 100 times.
More efficient (using fewer random variables) sampling may be attainable in special cases (such as if $G$ itself has an easy-to-sample-from form), but assuming only the premises of the question, I don't see how there could be any better method than taking the maximum.
\begin{equation}
\mathbb{P}(\max ( X_1,X_2,\ldots,X_n) \leq x) = \mathbb{P}(X_1, \leq x, X_2\leq x \ldots, X_ | Generating random numbers from a multiplication of CDFs
If the $X_i$s are independent, $G(x)$ is the CDF of the maximum of $X_1,\ldots,X_n$, since
\begin{equation}
\mathbb{P}(\max ( X_1,X_2,\ldots,X_n) \leq x) = \mathbb{P}(X_1 \leq x, X_2\leq x, \ldots, X_n \leq x) \\ = \mathbb{P}(X_1 \leq x)\,\mathbb{P}(X_2 \leq x)\,\ldots\,\mathbb{P}(X_n \leq x) = F_1(x)\,F_2(x)\,\ldots\,F_n(x).
\end{equation}
Thus, a realization from the distribution $G$ may be generated by simulating independent realizations from all $F_i$s and taking the maximum. 100 independent random numbers may be generated by repeating this 100 times.
More efficient (using fewer random variables) sampling may be attainable in special cases (such as if $G$ itself has an easy-to-sample-from form), but assuming only the premises of the question, I don't see how there could be any better method than taking the maximum. | Generating random numbers from a multiplication of CDFs
If the $X_i$s are independent, $G(x)$ is the CDF of the maximum of $X_1,\ldots,X_n$, since
\begin{equation}
\mathbb{P}(\max(X_1,X_2,\ldots,X_n) \leq x) = \mathbb{P}(X_1 \leq x, X_2\leq x, \ldots, X_ |
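A small R sketch of the maximum construction, assuming three arbitrary choices for the $F_i$ (two exponentials and a gamma); the empirical CDF of many simulated maxima is then compared against the product of the CDFs as a sanity check.
set.seed(123)
rG <- function() max(rexp(1, rate = 1), rexp(1, rate = 2),
                     rgamma(1, shape = 2, rate = 1))
x <- replicate(100, rG())   # 100 independent realizations from G
G <- function(q) pexp(q, 1) * pexp(q, 2) * pgamma(q, shape = 2, rate = 1)
plot(ecdf(replicate(1e4, rG())), main = "simulated maxima vs product of CDFs")
curve(G(x), add = TRUE, col = "red")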
55,054 | How can calculate number of weights in LSTM | Each cell in the LSTM has four components: the cell weights, the input gate, the forget gate, and the output gate. Each component has weights associated with all of its input from the previous layer, plus input from the previous time step. So if there are $n_i$ cells in an LSTM layer, and $n_{i-1}$ in the earlier layer, there will be $n_{i-1} + n_i$ inputs to each component of the cell. Since there are four components, that means there are $4(n_{i-1} + n_i)$ weights associated with each cell. And since we have $n_i$ cells, that means there are $4 n_i (n_{i-1} + n_i)$ weights associated with that layer.
In your first example, we have $n_0 = 39$, $n_1 = n_2 = n_3 = 1024$, and $n_4 = 34$. So the overall number of weights is about 21M.
In the second example we have $n_0 = 205$, $n_1 = ... = n_5 = 700$, and $n_6 = 205$. So the total number of weights is about 19M. | How can calculate number of weights in LSTM | Each cell in the LSTM has four components: the cell weights, the input gate, the forget gate, and the output gate. Each component has weights associated with all of its input from the previous layer, | How can calculate number of weights in LSTM
Each cell in the LSTM has four components: the cell weights, the input gate, the forget gate, and the output gate. Each component has weights associated with all of its input from the previous layer, plus input from the previous time step. So if there are $n_i$ cells in an LSTM layer, and $n_{i-1}$ in the earlier layer, there will be $n_{i-1} + n_i$ inputs to each component of the cell. Since there are four components, that means there are $4(n_{i-1} + n_i)$ weights associated with each cell. And since we have $n_i$ cells, that means there are $4 n_i (n_{i-1} + n_i)$ weights associated with that layer.
In your first example, we have $n_0 = 39$, $n_1 = n_2 = n_3 = 1024$, and $n_4 = 34$. So the overall number of weights is about 21M.
In the second example we have $n_0 = 205$, $n_1 = ... = n_5 = 700$, and $n_6 = 205$. So the total number of weights is about 19M. | How can calculate number of weights in LSTM
Each cell in the LSTM has four components: the cell weights, the input gate, the forget gate, and the output gate. Each component has weights associated with all of its input from the previous layer, |
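A short R helper encoding the counting rule above, $4 n_i (n_{i-1} + n_i)$ weights per layer with biases ignored as in the text; the two calls reproduce the roughly 21M and 19M totals.
lstm_weights <- function(dims) {
  ## dims = c(input size, layer sizes ...)
  sum(sapply(2:length(dims), function(i) 4 * dims[i] * (dims[i - 1] + dims[i])))
}
lstm_weights(c(39, 1024, 1024, 1024, 34))            # about 21.3M (first example)
lstm_weights(c(205, 700, 700, 700, 700, 700, 205))   # about 19.0M (second example)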
55,055 | How can calculate number of weights in LSTM | I think this may be my answer.
If the LSTM doesn't use a recurrent projection layer or a non-recurrent projection layer, use this equation:
(nc*nc*4*di)+(ni*nc*4*di)+(nc*no)+(nc*3*di)
nc = number of LSTM cells
ni = number of inputs
no = number of outputs
di = number of layers
If the LSTM uses projection layers, the number of weights is obtained from this:
(nc*nr*4*di)+(ni*nc*4*di)+((nr+np)*no)+(nc*nr*di)+(nc*3*di)+(nr*nc*4*(di-1))
nr = number of units in the recurrent projection layer
np = number of units in the non-recurrent projection layer
For my example:
I use a recurrent projection layer too.
Recurrent projection layer = 256
Non-recurrent projection layer = 256
(1024*256*4*3)+(39*1024*4*3)+((256+256)*34)+(1024*256*3)+(1024*3*3)+(256*1024*4*2) = 6535168 = 6.5M
I used this reference:
LONG SHORT-TERM MEMORY BASED RECURRENT NEURAL NETWORK ARCHITECTURES FOR LARGE VOCABULARY SPEECH RECOGNITION | How can calculate number of weights in LSTM | I think this may be my answer.
If the LSTM doesn't use a recurrent projection layer or a non-recurrent projection layer, use this equation:
(nc*nc*4*di)+(ni*nc*4*di)+(nc*no)+(nc*3*di)
nc = number of LSTM ce | How can calculate number of weights in LSTM
I think this may be my answer.
If the LSTM doesn't use a recurrent projection layer or a non-recurrent projection layer, use this equation:
(nc*nc*4*di)+(ni*nc*4*di)+(nc*no)+(nc*3*di)
nc = number of LSTM cells
ni = number of inputs
no = number of outputs
di = number of layers
If the LSTM uses projection layers, the number of weights is obtained from this:
(nc*nr*4*di)+(ni*nc*4*di)+((nr+np)*no)+(nc*nr*di)+(nc*3*di)+(nr*nc*4*(di-1))
nr = number of units in the recurrent projection layer
np = number of units in the non-recurrent projection layer
For my example:
I use a recurrent projection layer too.
Recurrent projection layer = 256
Non-recurrent projection layer = 256
(1024*256*4*3)+(39*1024*4*3)+((256+256)*34)+(1024*256*3)+(1024*3*3)+(256*1024*4*2) = 6535168 = 6.5M
I used this reference:
LONG SHORT-TERM MEMORY BASED RECURRENT NEURAL NETWORK ARCHITECTURES FOR LARGE VOCABULARY SPEECH RECOGNITION | How can calculate number of weights in LSTM
I think this may be my answer.
If the LSTM doesn't use a recurrent projection layer or a non-recurrent projection layer, use this equation:
(nc*nc*4*di)+(ni*nc*4*di)+(nc*no)+(nc*3*di)
nc = number of LSTM ce |
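As a quick arithmetic check of the projection-layer expression above, plugging in nc = 1024, ni = 39, no = 34, di = 3 and nr = np = 256 in R:
nc <- 1024; ni <- 39; no <- 34; di <- 3; nr <- 256; np <- 256
(nc*nr*4*di) + (ni*nc*4*di) + ((nr+np)*no) + (nc*nr*di) + (nc*3*di) + (nr*nc*4*(di-1))
## 6535168, i.e. about 6.5M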
55,056 | How can calculate number of weights in LSTM | Total number of weights in LSTM N/W = 4 x inp_dim x (inp_dim + out_dim + 1)
So, in your first model:
For Stage-1(input --> h1):
inp_dim = 39; out_dim = 1024
Therefore, weights of stage-1 = 4 x 39 x (39 + 1024 + 1) = 0.165M
For Stage-2(h1 --> h2):
inp_dim = 1024; out_dim = 1024
Therefore, weights of stage-2 = 4 x 1024 x (1024 + 1024 + 1) = 8.392M
For Stage-3(h2 --> h3):
inp_dim = 1024; out_dim = 1024
Therefore, weights of stage-3 = 4 x 1024 x (1024 + 1024 + 1) = 8.392M
For Stage-4(h3 --> output):
inp_dim = 1024; out_dim = 34
Therefore, weights of stage-4 = 4 x 1024 x (1024 + 34 + 1) = 4.337M
Thus, total weights = 0.165M + 8.392M + 8.392M + 4.337M = 21.2M (approx)
And, in your second model:
For Stage-1(input --> h1):
inp_dim = 205; out_dim = 700
Therefore, weights of stage-1 = 4 x 205 x (205 + 700 + 1) = 0.742M
For Stage-2(h1 --> h2):
inp_dim = 700; out_dim = 700
Therefore, weights of stage-2 = 4 x 700 x (700 + 700 + 1) = 3.922M
For Stage-3(h2 --> h3):
inp_dim = 700; out_dim = 700
Therefore, weights of stage-3 = 4 x 700 x (700 + 700 + 1) = 3.922M
For Stage-4(h3 --> h4):
inp_dim = 700; out_dim = 700
Therefore, weights of stage-4 = 4 x 700 x (700 + 700 + 1) = 3.922M
For Stage-5(h4 --> h5):
inp_dim = 700; out_dim = 700
Therefore, weights of stage-5 = 4 x 700 x (700 + 700 + 1) = 3.922M
For Stage-6(h5 --> output):
inp_dim = 700; out_dim = 205
Therefore, weights of stage-6 = 4 x 700 x (700 + 205 + 1) = 2.536M
Thus, total weights = 0.742M + (4 x 3.922M) + 2.536M = 19M (approx) | How can calculate number of weights in LSTM | Total number of weights in LSTM N/W = 4 x inp_dim x (inp_dim + out_dim + 1)
So, in your first model:
For Stage-1(input --> h1):
inp_dim = 39; out_dim = 1024
Therefore, weights of stage-1 = 4 x 39 x (3 | How can calculate number of weights in LSTM
Total number of weights in LSTM N/W = 4 x inp_dim x (inp_dim + out_dim + 1)
So, in your first model:
For Stage-1(input --> h1):
inp_dim = 39; out_dim = 1024
Therefore, weights of stage-1 = 4 x 39 x (39 + 1024 + 1) = 0.165M
For Stage-2(h1 --> h2):
inp_dim = 1024; out_dim = 1024
Therefore, weights of stage-2 = 4 x 1024 x (1024 + 1024 + 1) = 8.392M
For Stage-3(h2 --> h3):
inp_dim = 1024; out_dim = 1024
Therefore, weights of stage-3 = 4 x 1024 x (1024 + 1024 + 1) = 8.392M
For Stage-4(h3 --> output):
inp_dim = 1024; out_dim = 34
Therefore, weights of stage-4 = 4 x 1024 x (1024 + 34 + 1) = 4.337M
Thus, total weights = 0.165M + 8.392M + 8.392M + 4.337M = 21.2M (approx)
And, in your second model:
For Stage-1(input --> h1):
inp_dim = 205; out_dim = 700
Therefore, weights of stage-1 = 4 x 205 x (205 + 700 + 1) = 0.742M
For Stage-2(h1 --> h2):
inp_dim = 700; out_dim = 700
Therefore, weights of stage-2 = 4 x 700 x (700 + 700 + 1) = 3.922M
For Stage-3(h2 --> h3):
inp_dim = 700; out_dim = 700
Therefore, weights of stage-3 = 4 x 700 x (700 + 700 + 1) = 3.922M
For Stage-4(h3 --> h4):
inp_dim = 700; out_dim = 700
Therefore, weights of stage-4 = 4 x 700 x (700 + 700 + 1) = 3.922M
For Stage-5(h4 --> h5):
inp_dim = 700; out_dim = 700
Therefore, weights of stage-5 = 4 x 700 x (700 + 700 + 1) = 3.922M
For Stage-6(h5 --> output):
inp_dim = 700; out_dim = 205
Therefore, weights of stage-6 = 4 x 700 x (700 + 205 + 1) = 2.536M
Thus, total weights = 0.742M + (4 x 3.922M) + 2.536M = 19M (approx) | How can calculate number of weights in LSTM
Total number of weights in LSTM N/W = 4 x inp_dim x (inp_dim + out_dim + 1)
So, in your first model:
For Stage-1(input --> h1):
inp_dim = 39; out_dim = 1024
Therefore, weights of stage-1 = 4 x 39 x (3 |
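The stage-by-stage totals above can be reproduced compactly in R using the per-stage rule exactly as this answer states it, 4 x inp_dim x (inp_dim + out_dim + 1); whether that rule is the right convention for a given implementation is a separate question.
stage_w <- function(inp, out) 4 * inp * (inp + out + 1)
dims1 <- c(39, 1024, 1024, 1024, 34)
sum(mapply(stage_w, head(dims1, -1), tail(dims1, -1)))   # about 21.3M, the first model
dims2 <- c(205, rep(700, 5), 205)
sum(mapply(stage_w, head(dims2, -1), tail(dims2, -1)))   # about 19.0M, the second model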
55,057 | How many lags should I include in time series prediction? | Looking at individual autocorrelations may help in simple cases, but this way you could miss lags that are important only jointly but not individually. Alternatively, you may try the following:
Select a large number of lags and estimate a penalized model (e.g. using LASSO, ridge or elastic net regularization). The penalization should diminish the impact of irrelevant lags and this way effectively do the selection. There would be some inconvenience in that cross validation is normally used for selecting penalty intensity, and cross validation is a bit tricky with time series. But this is still doable, no doubt about it.
Try a number of different lag combinations and either
(i) select the best of them according to an information criterion (AIC should do well in terms of forecasting as it is an efficient selector) or out-of-sample performance OR
(ii) combine some or even all of them weighting the models based on their likelihood, information criteria or the like. Refer to model averaging and forecast combination literature for detailed recipes.
(ii) would often do better than (i) in terms of forecasting, especially if you are selecting from a large number of alternatives.
Another alternative is to leave the job to some automated procedure like the auto.arima function in the "forecast" package in R. The algorithm behind auto.arima is described in Hyndman & Khandakar (2008).
References:
Hyndman, Rob J., and Yeasmin Khandakar. "Automatic Time Series Forecasting: The forecast Package for R." Journal of Statistical Software 27.i03 (2008). | How many lags should I include in time series prediction? | Looking at individual autocorrelations may help in simple cases, but this way you could miss lags that are important only jointly but not individually. Alternatively, you may try the following:
Selec | How many lags should I include in time series prediction?
Looking at individual autocorrelations may help in simple cases, but this way you could miss lags that are important only jointly but not individually. Alternatively, you may try the following:
Select a large number of lags and estimate a penalized model (e.g. using LASSO, ridge or elastic net regularization). The penalization should diminish the impact of irrelevant lags and this way effectively do the selection. There would be some inconvenience in that cross validation is normally used for selecting penalty intensity, and cross validation is a bit tricky with time series. But this is still doable, no doubt about it.
Try a number of different lag combinations and either
(i) select the best of them according to an information criterion (AIC should do well in terms of forecasting as it is an efficient selector) or out-of-sample performance OR
(ii) combine some or even all of them weighting the models based on their likelihood, information criteria or the like. Refer to model averaging and forecast combination literature for detailed recipes.
(ii) would often do better than (i) in terms of forecasting, especially if you are selecting from a large number of alternatives.
Another alternative is to leave the job to some automated procedure like the auto.arima function in the "forecast" package in R. The algorithm behind auto.arima is described in Hyndman & Khandakar (2008).
References:
Hyndman, Rob J., and Yeasmin Khandakar. "Automatic Time Series Forecasting: The forecast Package for R." Journal of Statistical Software 27.i03 (2008). | How many lags should I include in time series prediction?
Looking at individual autocorrelations may help in simple cases, but this way you could miss lags that are important only jointly but not individually. Alternatively, you may try the following:
Selec |
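A hedged sketch of the automated route: auto.arima from the forecast package picks the ARIMA orders automatically (by AICc, if I recall the default correctly); AirPassengers is just a convenient built-in series, not data from the question.
library(forecast)
fit <- auto.arima(AirPassengers)   # automatic lag/order selection
summary(fit)
plot(forecast(fit, h = 12))        # 12-step-ahead forecasts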
55,058 | Method for quantifying intervention effect in time series | In general, evaluation of pre-post effects in time-series analysis is called interrupted time series. This is a very general modeling approach that tests the strong hypothesis:
$\mathcal{H}_0: \mu_{ijt} = f_i(t)$ versus $\mathcal{H}_1 : \mu_{ijt} = f_i(t) + \beta(t)X_{ijt}$
where $X_{ijt}$ is the treatment assignment for individual $i$ at time $t$. The easiest example is treating $\beta$ as a constant function and $X_{ijt}$ as a 0/1 indicator: 0 for pre-intervention, 1 for peri- or post-intervention. Even if the actual "effect" of the intervention is different from this, this test is powered to detect differences in many types of scenarios; for instance, if $\beta(t)$ is any non-zero function, then a working constant parameter $\beta$ will estimate a time-averaged response to the intervention and is non-zero.
A challenge in time-series analysis of pre-post interventions is using a parametric modeling approach for the auto-correlation. With many replicates of time and function, one can decompose the trend into lagged effects, seasonal effects etc. This would obviate the need for autocorrelation in the error term. Therefore it is not necessary to use forecast, but the model itself directly predicts what would have been observed in the post-intervention time period.
Consider the famous Air Passengers data in the datasets package in R.
## construct an analytic dataset to predict time trend using auto-regressive and seasonal components
AirPassengers <- data.frame('flights'=as.numeric(AirPassengers))
AirPassengers$month <- factor(month.name, levels=month.name)
AirPassengers$year <- rep(1949:1960, each=12)
AirPassengers$lag <- c(NA, AirPassengers$flights[-nrow(AirPassengers)])
plot(AirPassengers$flights, type='l')
AirPassengers$fitted <- exp(predict(lm(log(flights) ~ month + year, data=AirPassengers)))
lines(AirPassengers$fitted, col='red')
It's obvious this provides an excellent prediction of the time based trends. If, though, you were interested in a test of hypothesis as to whether "flying increased" post, say, 1955, you can update the dataset to include a 0/1 indicator for whether or not the time period is post that point and test its significance in a linear model.
For example:
library(lmtest)
library(sandwich)
AirPassengers$post <- AirPassengers$year >= 1955
fit <- lm(log(flights) ~ month + year + post, data=AirPassengers)
coeftest(fit, vcov. = vcovHC)['postTRUE', ]
Gives me:
> coeftest(fit, vcov. = vcovHC)['postTRUE', ]
Estimate Std. Error t value Pr(>|t|)
0.03720327 0.01783242 2.08627126 0.03890842
Which is a nice example of a spurious finding, and a statistically significant effect that isn't practically significant. A more general test could be had by allowing heterogeneity between the month specific effects.
nullmodel <- lm(log(flights) ~ month + year, data=AirPassengers)
fullmodel <- lm(log(flights) ~ post*month + year, data=AirPassengers)
waldtest(nullmodel, fullmodel, vcov=vcovHC, test='Chisq')
Both of these are examples of the general approach to "interrupted time series" for segmented regression. It is a loosely defined term and I'm a little disappointed with how little detail the authors use in describing their exact approach in most cases. | Method for quantifying intervention effect in time series | In general, evaluation of pre-post effects in time-series analysis is called interrupted time series. This is a very general modeling approach that tests the strong hypothesis:
$\mathcal{H}_0: \mu_{ij | Method for quantifying intervention effect in time series
In general, evaluation of pre-post effects in time-series analysis is called interrupted time series. This is a very general modeling approach that tests the strong hypothesis:
$\mathcal{H}_0: \mu_{ijt} = f_i(t)$ versus $\mathcal{H}_1 : \mu_{ijt} = f_i(t) + \beta(t)X_{ijt}$
where $X_{ijt}$ is the treatment assignment for individual $i$ at time $t$. The easiest example is treating $\beta$ as a constant function and $X_{ijt}$ as a 0/1 indicator: 0 for pre-intervention, 1 for peri- or post-intervention. Even if the actual "effect" of the intervention is different from this, this test is powered to detect differences in many types of scenarios; for instance, if $\beta(t)$ is any non-zero function, then a working constant parameter $\beta$ will estimate a time-averaged response to the intervention and is non-zero.
A challenge in time-series analysis of pre-post interventions is using a parametric modeling approach for the auto-correlation. With many replicates of time and function, one can decompose the trend into lagged effects, seasonal effects etc. This would obviate the need for autocorrelation in the error term. Therefore it is not necessary to use forecast, but the model itself directly predicts what would have been observed in the post-intervention time period.
Consider the famous Air Passengers data in the datasets package in R.
## construct an analytic dataset to predict time trend using auto-regressive and seasonal components
AirPassengers <- data.frame('flights'=as.numeric(AirPassengers))
AirPassengers$month <- factor(month.name, levels=month.name)
AirPassengers$year <- rep(1949:1960, each=12)
AirPassengers$lag <- c(NA, AirPassengers$flights[-nrow(AirPassengers)])
plot(AirPassengers$flights, type='l')
AirPassengers$fitted <- exp(predict(lm(log(flights) ~ month + year, data=AirPassengers)))
lines(AirPassengers$fitted, col='red')
It's obvious this provides an excellent prediction of the time based trends. If, though, you were interested in a test of hypothesis as to whether "flying increased" post, say, 1955, you can update the dataset to include a 0/1 indicator for whether or not the time period is post that point and test its significance in a linear model.
For example:
library(lmtest)
library(sandwich)
AirPassengers$post <- AirPassengers$year >= 1955
fit <- lm(log(flights) ~ month + year + post, data=AirPassengers)
coeftest(fit, vcov. = vcovHC)['postTRUE', ]
Gives me:
> coeftest(fit, vcov. = vcovHC)['postTRUE', ]
Estimate Std. Error t value Pr(>|t|)
0.03720327 0.01783242 2.08627126 0.03890842
Which is a nice example of a spurious finding, and a statistically significant effect that isn't practically significant. A more general test could be had by allowing heterogeneity between the month specific effects.
nullmodel <- lm(log(flights) ~ month + year, data=AirPassengers)
fullmodel <- lm(log(flights) ~ post*month + year, data=AirPassengers)
waldtest(nullmodel, fullmodel, vcov=vcovHC, test='Chisq')
Both of these are examples of the general approach to "interrupted time series" for segmented regression. It is a loosely defined term and I'm a little disappointed with how little detail the authors use in describing their exact approach in most cases. | Method for quantifying intervention effect in time series
In general, evaluation of pre-post effects in time-series analysis is called interrupted time series. This is a very general modeling approach that tests the strong hypothesis:
$\mathcal{H}_0: \mu_{ij |
55,059 | Method for quantifying intervention effect in time series | Repeating your data-generating code for convenience ...
set.seed(101) ## don't forget to set the seed for reproducibility
##generate a comparable time-series ts()
base <- rnbinom(n = 120, size = 1400, prob = 0.5)
season <- rep(c(600, 400, 150, 0, -50, -80, -300, -600, 50, 100, 200, 300), 10)
## dangerous to name your time-series the same as the ts() function
ts0 <- ts(base + season, start = c(2000,1), end = c(2009,12), frequency = 12)
##generate the independent variables
month.f <- factor(rep(1:12, 10))
dummy.months <- model.matrix(~month.f +0)
lag1 <- lag(ts0); lag2 <- lag(lag(ts0))
time.pre <- 72
time.post <- 48
I'm changing your dummy variables a little bit. For maximum interpretability I think you want to have the intercept and the pre-intervention slope applied all the way through the data set, so that the "intervention indicator" and the "post-intervention slope" represent differences from the pre-intervention behaviour.
pre.interv.trend <- 1:(time.pre+time.post)
interv.indicator <- c(rep(0, time.pre), rep(1, time.post))
post.interv.trend <- c(rep(0, time.pre), 1:time.post)
df <- data.frame(ts0, dummy.months,lag1,lag2,interv.indicator,
pre.interv.trend,post.interv.trend)
The fitted model:
fit <- MASS::glm.nb(ts0 ~. + 0, data = df)
The terms we want to isolate (the only ones that represent the intervention effects):
params <- c("interv.indicator","post.interv.trend")
Using predict(.,type="terms") almost but doesn't quite do what you want. It will get the right prediction, but the confidence intervals fail to take the covariance between the terms into account ...
pp <- predict(fit,type="terms",terms=params,se.fit=TRUE)
pp.fit <- rowSums(pp$fit)
pp.se <- sqrt(rowSums(pp$se.fit^2))
This is a little bit more tedious but is actually correct ...
m <- model.matrix(ts0~.+0,data=df)
m2 <- m[,params]
pred <- drop(m2 %*% coef(fit)[params])
predsd <- sqrt(diag(m2 %*% vcov(fit)[params,params] %*% t(m2)))
Plot the results (thin lines ignore covariance, thick lines include it):
par(las=1,bty="l")
matplot(cbind(pp.fit-1.96*pp.se,pp.fit,pp.fit+1.96*pp.se),
type="l",lty=c(2,1,2),col=c(1,2,1),
ylab="intervention effect (link scale)")
matlines(cbind(pred-1.96*predsd,pred,pred+1.96*predsd),
lty=c(2,1,2),type="l",col=c(1,2,1),lwd=2) | Method for quantifying intervention effect in time series | Repeating your data-generating code for convenience ...
set.seed(101) ## don't forget to set the seed for reproducibility
##generate a comparable time-series ts()
base <- rnbinom(n = 120, size = | Method for quantifying intervention effect in time series
Repeating your data-generating code for convenience ...
set.seed(101) ## don't forget to set the seed for reproducibility
##generate a comparable time-series ts()
base <- rnbinom(n = 120, size = 1400, prob = 0.5)
season <- rep(c(600, 400, 150, 0, -50, -80, -300, -600, 50, 100, 200, 300), 10)
## dangerous to name your time-series the same as the ts() function
ts0 <- ts(base + season, start = c(2000,1), end = c(2009,12), frequency = 12)
##generate the independent variables
month.f <- factor(rep(1:12, 10))
dummy.months <- model.matrix(~month.f +0)
lag1 <- lag(ts0); lag2 <- lag(lag(ts0))
time.pre <- 72
time.post <- 48
I'm changing your dummy variables a little bit. For maximum interpretability I think you want to have the intercept and the pre-intervention slope applied all the way through the data set, so that the "intervention indicator" and the "post-intervention slope" represent differences from the pre-intervention behaviour.
pre.interv.trend <- 1:(time.pre+time.post)
interv.indicator <- c(rep(0, time.pre), rep(1, time.post))
post.interv.trend <- c(rep(0, time.pre), 1:time.post)
df <- data.frame(ts0, dummy.months,lag1,lag2,interv.indicator,
pre.interv.trend,post.interv.trend)
The fitted model:
fit <- MASS::glm.nb(ts0 ~. + 0, data = df)
The terms we want to isolate (the only ones that represent the intervention effects):
params <- c("interv.indicator","post.interv.trend")
Using predict(.,type="terms") almost but doesn't quite do what you want. It will get the right prediction, but the confidence intervals fail to take the covariance between the terms into account ...
pp <- predict(fit,type="terms",terms=params,se.fit=TRUE)
pp.fit <- rowSums(pp$fit)
pp.se <- sqrt(rowSums(pp$se.fit^2))
This is a little bit more tedious but is actually correct ...
m <- model.matrix(ts0~.+0,data=df)
m2 <- m[,params]
pred <- drop(m2 %*% coef(fit)[params])
predsd <- sqrt(diag(m2 %*% vcov(fit)[params,params] %*% t(m2)))
Plot the results (thin lines ignore covariance, thick lines include it):
par(las=1,bty="l")
matplot(cbind(pp.fit-1.96*pp.se,pp.fit,pp.fit+1.96*pp.se),
type="l",lty=c(2,1,2),col=c(1,2,1),
ylab="intervention effect (link scale)")
matlines(cbind(pred-1.96*predsd,pred,pred+1.96*predsd),
lty=c(2,1,2),type="l",col=c(1,2,1),lwd=2) | Method for quantifying intervention effect in time series
Repeating your data-generating code for convenience ...
set.seed(101) ## don't forget to set the seed for reproducibility
##generate a comparable time-series ts()
base <- rnbinom(n = 120, size = |
55,060 | Logistic regression with only categorical predictors | Yeah, it's perfectly acceptable for a logistic regression to contain only categorical predictors. Remember that we code categorical predictors numerically (e.g., 0 and 1, -1 and 1, etc.), so the distinction between categorical and continuous doesn't really exist for the regression.
As for how to plot the effect, I would typically use a bar plot with each bar representing the estimated probability of observing a specific outcome in that condition. These estimates can be calculated using the coefficients of the model. Remember that while a logistic regression produces estimates in logit space, with an inverse logit transformation you can turn predicted values into probabilities.
The below graph actually plots the proportions (so what was observed in the data that was subsequently analysed with a logistic regression, rather than what was estimated by the regression), but it should convey the general idea. | Logistic regression with only categorical predictors | Yeah, it's perfectly acceptable for a logistic regression to contain only categorical predictors. Remember that we code categorical predictors numerically (e.g., 0 and 1, -1 and 1, etc.), so the disti | Logistic regression with only categorical predictors
Yeah, it's perfectly acceptable for a logistic regression to contain only categorical predictors. Remember that we code categorical predictors numerically (e.g., 0 and 1, -1 and 1, etc.), so the distinction between categorical and continuous doesn't really exist for the regression.
As for how to plot the effect, I would typically use a bar plot with each bar representing the estimated probability of observing a specific outcome in that condition. These estimates can be calculated using the coefficients of the model. Remember that while a logistic regression produces estimates in logit space, with an inverse logit transformation you can turn predicted values into probabilities.
The below graph actually plots the proportions (so what was observed in the data that was subsequently analysed with a logistic regression, rather than what was estimated by the regression), but it should convey the general idea. | Logistic regression with only categorical predictors
Yeah, it's perfectly acceptable for a logistic regression to contain only categorical predictors. Remember that we code categorical predictors numerically (e.g., 0 and 1, -1 and 1, etc.), so the disti |
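A minimal sketch with made-up data showing how the per-condition probabilities can be obtained from the fitted logistic regression (predict with type = "response" applies the inverse logit for you) and drawn as a bar plot.
set.seed(7)
d <- data.frame(condition = factor(rep(c("A", "B", "C"), each = 50)))
d$y <- rbinom(nrow(d), 1, prob = c(A = 0.3, B = 0.5, C = 0.7)[as.character(d$condition)])
fit  <- glm(y ~ condition, family = binomial, data = d)
newd <- data.frame(condition = factor(levels(d$condition)))
p    <- predict(fit, newdata = newd, type = "response")
barplot(p, names.arg = levels(d$condition),
        ylab = "estimated P(outcome = 1)", ylim = c(0, 1))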
55,061 | Logistic regression with only categorical predictors | It certainly is allowed. I would suggest you choose a new approach to visualizing your data, other than the sigmoid curve.
As you said being categorical means you do not have a range of data to assess your probabilities over. Categorical variables are either there or not. One approach that comes to my mind, is to plot circles or squares whose size is proportional to your category's parameter (the $\beta$ or $\theta$). I.e in case of gender, 1 female, 0 male, then your $\beta_{female}$ might be say 0.5. Then print a circle whose radius is 0.5, and you can compare other features' importance in predicting your class. Alternatively you can use a pie chart, again to show importance. Using squares has the benefit that you can show the marginal rate of substitution too.
Assume you have the following data:
gender_female likes_plum likes_peach label
1 1 1 0 1
2 1 0 0 0
3 0 0 1 1
...
Then after your regression, you might have: $\beta_{female}=0.5,\beta_{plum}=1.4, \beta_{peach}=1$. Then if you draw squares whose sides are equal to the $\beta$s you can show how many of each $\beta$ fits in the other, effectively showing their relative importance. I.e. for our example being a peach lover has double the effect of being a female in being classified as class 1. | Logistic regression with only categorical predictors | It certainly is allowed. I would suggest you choose a new approach to visualizing your data, other than the sigmoid curve.
As you said being categorical means you do not have a range of data to assess | Logistic regression with only categorical predictors
It certainly is allowed. I would suggest you choose a new approach to visualizing your data, other than the sigmoid curve.
As you said being categorical means you do not have a range of data to assess your probabilities over. Categorical variables are either there or not. One approach that comes to my mind, is to plot circles or squares whose size is proportional to your category's parameter (the $\beta$ or $\theta$). I.e in case of gender, 1 female, 0 male, then your $\beta_{female}$ might be say 0.5. Then print a circle whose radius is 0.5, and you can compare other features' importance in predicting your class. Alternatively you can use a pie chart, again to show importance. Using squares has the benefit that you can show the marginal rate of substitution too.
Assume you have the following data:
gender_female likes_plum likes_peach label
1 1 1 0 1
2 1 0 0 0
3 0 0 1 1
...
Then after your regression, you might have: $\beta_{female}=0.5,\beta_{plum}=1.4, \beta_{peach}=1$. Then if you draw squares whose sides are equal to the $\beta$s you can show how many of each $\beta$ fits in the other, effectively showing their relative importance. I.e. for our example being a peach lover has double the effect of being a female in being classified as class 1. | Logistic regression with only categorical predictors
It certainly is allowed. I would suggest you choose a new approach to visualizing your data, other than the sigmoid curve.
As you said being categorical means you do not have a range of data to assess |
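A rough base-graphics sketch of the square idea, using the hypothetical coefficients from the answer; symbols() draws squares whose side lengths equal the estimated betas, so their relative sizes show relative importance.
betas <- c(female = 0.5, plum = 1.4, peach = 1.0)   # hypothetical values from the answer
plot(NULL, xlim = c(0.5, 3.5), ylim = c(0, 2), axes = FALSE, xlab = "", ylab = "")
symbols(x = 1:3, y = rep(1, 3), squares = betas, inches = FALSE, add = TRUE)
text(1:3, rep(0.15, 3), labels = names(betas))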
55,062 | Confusion about when to use least-squares regression analysis | The two sections you quote are not directly comparable, I think. Some background: In OLS regression, the assumptions of the process (which are not always considered by its users, as noted by the author of that article) include the requirement that the independent variable(s) be measured perfectly--that is, with zero measurement error. This condition, like many in formal statistical models, is impossible to meet in many real data situations (not _any_... thanks, glen_b). However--like with many statistical problems--failing to meet this condition is often considered (by the experts) to have little negative consequence for data analysis. In reality, the measurement error of both the IVs and the DV is probably represented in the overall "badness of fit" of the model (the "residuals" and their many associated statistics).
Despite the above, I think the author is saying that failing to account for the reality of measurement error in all variables (i.e., IVs) can be problematic in many situations. I assume this is why there is a family of models that include estimation of measurement error in the IVs. I think structural equation modeling is like this, too, as most variables in SEM have error terms associated with them.
Finally trying to directly answer your question: The first quote is saying that in OLS regression we assume the IV is measured with no measurement error, which is clearly impossible, but (I think it's saying) that is OK when using OLS for prediction. The second quote seems to be something different: part of the description of the tasks necessary for functional analysis (probably, since that's what the article is about). In the second quote, the author is explaining the difficulty of modeling when the residual (i.e., error) variance in a model must be accurately identified as either coming from the IV or from the DV, instead of (as in OLS) just assuming it all comes from the DV. This is not directly relevant to the first quote in the context of your question.
...both methods should be used when both variables are in error.
All measured variables always have measurement error, so that statement might be based on your misunderstanding of some issues. Rather, the author seems to be saying (in the first quote) exactly what is in the article's abstract: that OLS is not appropriate in earth science except in prediction situations. The second quote is not directly related to your question, as it is a description of a part of the alternative method that the author favors. | Confusion about when to use least-squares regression analysis | The two sections you quote are not directly comparable, I think. Some background: In OLS regression, the assumptions of the process (which are not always considered by its users, as noted by the autho | Confusion about when to use least-squares regression analysis
The two sections you quote are not directly comparable, I think. Some background: In OLS regression, the assumptions of the process (which are not always considered by its users, as noted by the author of that article) include the requirement that the independent variable(s) be measured perfectly--that is, with zero measurement error. This condition, like many in formal statistical models, is impossible to meet in many real data situations (not _any_... thanks, glen_b). However--like with many statistical problems--failing to meet this condition is often considered (by the experts) to have little negative consequence for data analysis. In reality, the measurement error of both the IVs and the DV is probably represented in the overall "badness of fit" of the model (the "residuals" and their many associated statistics).
Despite the above, I think the author is saying that failing to account for the reality of measurement error in all variables (i.e., IVs) can be problematic in many situations. I assume this is why there is a family of models that include estimation of measurement error in the IVs. I think structural equation modeling is like this, too, as most variables in SEM have error terms associated with them.
Finally trying to directly answer your question: The first quote is saying that in OLS regression we assume the IV is measured with no measurement error, which is clearly impossible, but (I think it's saying) that is OK when using OLS for prediction. The second quote seems to be something different: part of the description of the tasks necessary for functional analysis (probably, since that's what the article is about). In the second quote, the author is explaining the difficulty of modeling when the residual (i.e., error) variance in a model must be accurately identified as either coming from the IV or from the DV, instead of (as in OLS) just assuming it all comes from the DV. This is not directly relevant to the first quote in the context of your question.
...both methods should be used when both variables are in error.
All measured variables always have measurement error, so that statement might be based on your misunderstanding of some issues. Rather, the author seems to be saying (in the first quote) exactly what is in the article's abstract: that OLS is not appropriate in earth science except in prediction situations. The second quote is not directly related to your question, as it is a description of a part of the alternative method that the author favors. | Confusion about when to use least-squares regression analysis
The two sections you quote are not directly comparable, I think. Some background: In OLS regression, the assumptions of the process (which are not always considered by its users, as noted by the autho |
55,063 | Confusion about when to use least-squares regression analysis | Just to clarify some points from DL Rogers' answer. OLS regression estimates the mean of the conditional distribution $E(y|x)$. Clearly, in a prediction problem, this is what we want: we are given $x$ and want to predict $y$. Since the $x$ is hypothetical, it can be assumed to be without error. In prediction, we're saying, "suppose the value of $X=x$ ... now what's my best guess for $y$?"
There are many situations in which $X$ can reasonably be assumed to have no error. Dummy variables for categories are one example. Another important class is where the $X$ values are design values from an experiment. Also, when the IV is space or time, and we know where we are and when. At a practical level, if variation in the response is several orders of magnitude greater than error in the IV's, we can safely ignore the latter.
Problems arise if the error in the IV's is comparable to the error in the response and we are interested in the correlation (or slope parameter) between the unobserved constructs. The standard regression parameter overestimates the magnitude of the true slope, so if you use OLS values when you should have used a measurement error model, you will overestimate the strength of the correlation between the constructs of interest. This would be bad.
The paper you reference is behind a paywall, so I couldn't take a look, but judging from the abstract, the author seems to be trying a measurement error model of some kind. The phrase "functional data analysis" is used by different authors to reference a variety of different methods, so I'm not sure what this paper is actually doing. For what it's worth, in the functional data analysis of Ramsay and Silverman, the IV's are assumed to be known without error. | Confusion about when to use least-squares regression analysis | Just to clarify some points from DL Rogers' answer. OLS regression estimates the mean of the conditional distribution $E(y|x)$. Clearly, in a prediction problem, this is what we want: we are given $x$ | Confusion about when to use least-squares regression analysis
Just to clarify some points from DL Rogers' answer. OLS regression estimates the mean of the conditional distribution $E(y|x)$. Clearly, in a prediction problem, this is what we want: we are given $x$ and want to predict $y$. Since the $x$ is hypothetical, it can be assumed to be without error. In prediction, we're saying, "suppose the value of $X=x$ ... now what's my best guess for $y$?"
There are many situations in which $X$ can reasonably be assumed to have no error. Dummy variables for categories are one example. Another important class is where the $X$ values are design values from an experiment. Also, when the IV is space or time, and we know where we are and when. At a practical level, if variation in the response is several orders of magnitude greater than error in the IV's, we can safely ignore the latter.
Problems arise if the error in the IV's is comparable to the error in the response and we are interested in the correlation (or slope parameter) between the unobserved constructs. The standard regression parameter overestimates the magnitude of the true slope, so if you use OLS values when you should have used a measurement error model, you will overestimate the strength of the correlation between the constructs of interest. This would be bad.
The paper you reference is behind a paywall, so I couldn't take a look, but judging from the abstract, the author seems to be trying a measurement error model of some kind. The phrase "functional data analysis" is used by different authors to reference a variety of different methods, so I'm not sure what this paper is actually doing. For what it's worth, in the functional data analysis of Ramsay and Silverman, the IV's are assumed to be known without error. | Confusion about when to use least-squares regression analysis
Just to clarify some points from DL Rogers' answer. OLS regression estimates the mean of the conditional distribution $E(y|x)$. Clearly, in a prediction problem, this is what we want: we are given $x$ |
55,064 | Does a confidence interval carry some extra error for non perfectly normal distributions? | When we talk about a 95% confidence interval for that sample mean,
i understand it as saying "given that we assume a normal distribution
for the sampling distribution of the sample means of samples of size
n=100, there's a 95% chance that the mean of our sample falls
at a point under our sampling distribution (normal) curve such that
the true mean of the population is going to be within 2 standard
deviations of our sample mean". (is this correct?)
This is not quite correct, and is one of the most common misconceptions in statistics. You can find more discussion here. The 95% confidence interval means that we are 95% confident that the parameter lies in our interval. Another way to think about this is, if we were to repeat the sampling many more times, and create confidence intervals every time, then approximately 95% of those confidence intervals will contain the true parameter.
Otherwise, you are correct that when we appeal to the CLT in this way (in assuming normality of the sample mean), there is an added error since the distribution is never going to be exact. However, our confidence in the estimate might either go up or down.
Here is a quote from Tsou and Royall(1995)
A popular $95\%$ confidence interval for $E(X)$ based on $n$
observations is the $t$ interval, $\bar{x} \pm t_{n-1} sn^{-1/2}$,
where $s^2 = \sum (x - \bar{x})^2/(n-1)$. This is actually a $95\%$
confidence interval if the $X$'s are iid $N(\theta, \sigma^2)$. But if
this model is incorrect, then it is no longer true that the coverage
probability equals the nominal confidence coefficient, .95.
I am going to simulate this behavior in the following R code. I draw samples of size $N = 50$ first from a Normal(5, 1) population and then from a $t_{1}$ distribution centered at $\mu = 5$ (which, strictly speaking, has no finite mean). In the first case, since the population is truly normal, the distribution of the sample mean is exactly normal. In the second case the distribution is a shifted $t_1$ distribution which has much longer tails than the normal distribution. For each of these I simulate a sample of size $N = 50$, 1000 times, make the confidence interval each time, and check whether $\mu = 5$ is in the interval or not. I return the proportion of times $\mu$ was in the interval, and this number is expected to be .95 if all assumptions hold.
set.seed(100)
## True value of mu
mu <- 5
reps <- 1000 # to demonstrate definition of CI
N <- 50
counting <- vector(length = reps)
## making 95% CI
for(i in 1:reps)
{
## When data is really normal and true mean is mu = 5
## So the CLT holds exactly
data <- mu + rnorm(N, mean = 0, sd = 1)
mu.hat <- mean(data)
se <- sd(data)/sqrt(N)
quantile <- qt(.975, df = N-1)
upper <- mu.hat + quantile*se
lower <- mu.hat - quantile*se
## Demonstrating how many of the CIs have mu in them
counting[i] <- ifelse(upper > mu && lower < mu, 1, 0)
}
mean(counting)
# [1] 0.946
counting <- vector(length = reps)
## making 95% CI
for(i in 1:reps)
{
## When data is from a t_1 distribution centered at mu = 5
## With N = 50 the CLT is only approximate
data <- mu + rt(N, df = 1)
mu.hat <- mean(data)
se <- sd(data)/sqrt(N)
quantile <- qt(.975, df = N-1)
upper <- mu.hat + quantile*se
lower <- mu.hat - quantile*se
## Demonstrating how many of the CIs have mu in them
counting[i] <- ifelse(upper > mu && lower < mu, 1, 0)
}
mean(counting)
# [1] 0.986
The first time, I get very close to .95 but the second time I am much higher. So yes, our confidence will be different from .95 if our assumptions don't hold. However if $N$ is large and the data distribution is close to normal, it won't be too far off. | Does a confidence interval carry some extra error for non perfectly normal distributions? | When we talk about a 95% confidence interval for that sample mean,
i understand it as saying "given that we assume a normal distribution
for the sampling distribution of the sample means of sample | Does a confidence interval carry some extra error for non perfectly normal distributions?
When we talk about a 95% confidence interval for that sample mean,
i understand it as saying "given that we assume a normal distribution
for the sampling distribution of the sample means of samples of size
n=100, there's a 95% chance that the mean of our sample falls
at a point under our sampling distribution (normal) curve such that
the true mean of the population is going to be within 2 standard
deviations of our sample mean". (is this correct?)
This is not quite correct, and is one of the most common misconceptions in statistics. You can find more discussion here. The 95% confidence interval means that we are 95% confident that the parameter lies in our interval. Another way to think about this is, if we were to repeat the sampling many more times, and create confidence intervals every time, then approximately 95% of those confidence intervals will contain the true parameter.
Otherwise, you are correct that when we appeal to the CLT in this way (in assuming normality of the sample mean), there is an added error since the distribution is never going to be exact. However, our confidence in the estimate might either go up or down.
Here is a quote from Tsou and Royall(1995)
A popular $95\%$ confidence interval for $E(X)$ based on $n$
observations is the $t$ interval, $\bar{x} \pm t_{n-1} sn^{-1/2}$,
where $s^2 = \sum (x - \bar{x})^2/(n-1)$. This is actually a $95\%$
confidence interval if the $X$'s are iid $N(\theta, \sigma^2)$. But if
this model is incorrect, then it is no longer true that the coverage
probability equals the nominal confidence coefficient, .95.
I am going to simulate this behavior in the following R code. I draw samples of size $N = 50$ first from a Normal(5, 1) population and then from a $t_{1}$ distribution centered at $\mu = 5$ (which, strictly speaking, has no finite mean). In the first case, since the population is truly normal, the distribution of the sample mean is exactly normal. In the second case the distribution is a shifted $t_1$ distribution which has much longer tails than the normal distribution. For each of these I simulate a sample of size $N = 50$, 1000 times, make the confidence interval each time, and check whether $\mu = 5$ is in the interval or not. I return the proportion of times $\mu$ was in the interval, and this number is expected to be .95 if all assumptions hold.
set.seed(100)
## True value of mu
mu <- 5
reps <- 1000 # to demonstrate definition of CI
N <- 50
counting <- vector(length = reps)
## making 95% CI
for(i in 1:reps)
{
## When data is really normal and true mean is mu = 5
## So the CLT holds exactly
data <- mu + rnorm(N, mean = 0, sd = 1)
mu.hat <- mean(data)
se <- sd(data)/sqrt(N)
quantile <- qt(.975, df = N-1)
upper <- mu.hat + quantile*se
lower <- mu.hat - quantile*se
## Demonstrating how many of the CIs have mu in them
counting[i] <- ifelse(upper > mu && lower < mu, 1, 0)
}
mean(counting)
# [1] 0.946
counting <- vector(length = reps)
## making 95% CI
for(i in 1:reps)
{
## When data is from a t_1 distribution centered at mu = 5
## With N = 50 the CLT is only approximate
data <- mu + rt(N, df = 1)
mu.hat <- mean(data)
se <- sd(data)/sqrt(N)
quantile <- qt(.975, df = N-1)
upper <- mu.hat + quantile*se
lower <- mu.hat - quantile*se
## Demonstrating how many of the CIs have mu in them
counting[i] <- ifelse(upper > mu && lower < mu, 1, 0)
}
mean(counting)
# [1] 0.986
The first time, I get very close to .95 but the second time I am much higher. So yes, our confidence will be different from .95 if our assumptions don't hold. However if $N$ is large and the data distribution is close to normal, it won't be too far off. | Does a confidence interval carry some extra error for non perfectly normal distributions?
When we talk about a 95% confidence interval for that sample mean,
i understand it as saying "given that we assume a normal distribution
for the sampling distribution of the sample means of sample |
55,065 | Does a confidence interval carry some extra error for non perfectly normal distributions? | The answer by @Greenparker is an excellent answer that highlights a common misconception, but I want to try to address your question more directly. You are correct to say that if your model is wrong (student heights are not normal) then your confidence intervals might not be as accurate as they claim. The issue then lies with identifying an appropriate model and then constructing confidence intervals from that model for the parameters involved (confidence intervals are not something tied solely to models with the normality assumption).
In some cases, even if your model is wrong, your statistics may be robust to deviations from your model assumptions (e.g. The normal assumption for your heights was wrong, but you end up with 94% confidence intervals rather than 95% confidence intervals...which is just fine).
And of course as @Greenparker illustrates, the Central Limit theorem is very powerful and holds in many situations. So if your sample size is large enough you can often get away with Normality assumptions.
All this to say that statistics is quite nuanced, problem specific. | Does a confidence interval carry some extra error for non perfectly normal distributions? | The answer by @Greenparker is an excellent answer that highlights a common misconception, but I want to try to address your question more directly. You are correct to say that if your model is wrong ( | Does a confidence interval carry some extra error for non perfectly normal distributions?
The answer by @Greenparker is an excellent answer that highlights a common misconception, but I want to try to address your question more directly. You are correct to say that if your model is wrong (student heights are not normal) then your confidence intervals might not be as accurate as they claim. The issue then lies with identifying an appropriate model and then constructing confidence intervals from that model for the parameters involved (confidence intervals are not something tied solely to models with the normality assumption).
In some cases, even if your model is wrong, your statistics may be robust to deviations from your model assumptions (e.g. The normal assumption for your heights was wrong, but you end up with 94% confidence intervals rather than 95% confidence intervals...which is just fine).
And of course as @Greenparker illustrates, the Central Limit theorem is very powerful and holds in many situations. So if your sample size is large enough you can often get away with Normality assumptions.
All this to say that statistics is quite nuanced, problem specific. | Does a confidence interval carry some extra error for non perfectly normal distributions?
The answer by @Greenparker is an excellent answer that highlights a common misconception, but I want to try to address your question more directly. You are correct to say that if your model is wrong ( |
55,066 | Dynamic treatment timing in a panel-DiD framework | You construct the policy dummy the way you first describe it, i.e. create a column of zeroes. Then for each firm you replace this with ones if a firm is in the treatment group AND it is in the post-treatment period. Something like this
$$
\begin{array}{ccccc}
\text{firm} & \text{time} & \text{treated} & \text{post} & \text{policy} \\
\hline
1 & 1 & 0 & 0 & 0 \\
1 & 2 & 0 & 0 & 0 \\
1 & 3 & 0 & 1 & 0 \\
1 & 4 & 0 & 1 & 0 \\
\hline
2 & 1 & 1 & 0 & 0 \\
2 & 2 & 1 & 0 & 0 \\
2 & 3 & 1 & 1 & 1 \\
2 & 4 & 1 & 1 & 1 \\
\hline
3 & 1 & 1 & 0 & 0 \\
3 & 2 & 1 & 0 & 0 \\
3 & 3 & 1 & 0 & 0 \\
3 & 4 & 1 & 1 & 1 \\
\end{array}
$$
where $\text{post}$ is an indicator for the post treatment period. In your equation above, the $\alpha_0$ and $\text{Treat}_i$ are going to be absorbed in the firm fixed effects.
Regarding the interpretation, this setting makes an assumption which I probably did not state in the previous answer. The assumption is that the treatment effect is the same across all periods. This means that if a firm is treated yesterday and has a gain of 2, then a firm which is treated today also has a gain of 2 (relative to firms which are never treated). I made a graph to show what this assumption means
In case you would like a reference for this, you can check out Jeff Wooldridge's notes on difference in differences and the section on extensions for multiple groups and time periods: http://www.nber.org/WNE/Slides7-31-07/slides_10_diffindiffs.pdf (What’s New in Econometrics? Lecture 10 Difference-in-Differences Estimation, Wooldridge 2007). | Dynamic treatment timing in a panel-DiD framework | You construct the policy dummy the way you first describe it, i.e. create a column of zeroes. Then for each firm you replace this with ones if a firm is in the treatment group AND it is in the post-tr | Dynamic treatment timing in a panel-DiD framework
You construct the policy dummy the way you first describe it, i.e. create a column of zeroes. Then for each firm you replace this with ones if a firm is in the treatment group AND it is in the post-treatment period. Something like this
$$
\begin{array}{ccccc}
\text{firm} & \text{time} & \text{treated} & \text{post} & \text{policy} \\
\hline
1 & 1 & 0 & 0 & 0 \\
1 & 2 & 0 & 0 & 0 \\
1 & 3 & 0 & 1 & 0 \\
1 & 4 & 0 & 1 & 0 \\
\hline
2 & 1 & 1 & 0 & 0 \\
2 & 2 & 1 & 0 & 0 \\
2 & 3 & 1 & 1 & 1 \\
2 & 4 & 1 & 1 & 1 \\
\hline
3 & 1 & 1 & 0 & 0 \\
3 & 2 & 1 & 0 & 0 \\
3 & 3 & 1 & 0 & 0 \\
3 & 4 & 1 & 1 & 1 \\
\end{array}
$$
where $\text{post}$ is an indicator for the post treatment period. In your equation above, the $\alpha_0$ and $\text{Treat}_i$ are going to be absorbed in the firm fixed effects.
Regarding the interpretation, this setting makes an assumption which I probably did not state in the previous answer. The assumption is that the treatment effect is the same across all periods. This means that if a firm is treated yesterday and has a gain of 2, then a firm which is treated today also has a gain of 2 (relative to firms which are never treated). I made a graph to show what this assumption means
In case you would like a reference for this, you can check out Jeff Wooldridge's notes on difference in differences and the section on extensions for multiple groups and time periods: http://www.nber.org/WNE/Slides7-31-07/slides_10_diffindiffs.pdf (What’s New in Econometrics? Lecture 10 Difference-in-Differences Estimation, Wooldridge 2007). | Dynamic treatment timing in a panel-DiD framework
You construct the policy dummy the way you first describe it, i.e. create a column of zeroes. Then for each firm you replace this with ones if a firm is in the treatment group AND it is in the post-tr |
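A small sketch in R constructing the policy dummy as treated x post for the toy panel above and running the two-way fixed-effects regression; the outcome is simulated only so that the call runs.
panel <- expand.grid(time = 1:4, firm = 1:3)[, c("firm", "time")]
panel$treated <- as.integer(panel$firm %in% c(2, 3))
## firm-specific post indicator, as in the table (firm 3 is treated one period later)
panel$post   <- with(panel, as.integer((firm %in% c(1, 2) & time >= 3) |
                                       (firm == 3 & time >= 4)))
panel$policy <- panel$treated * panel$post   # 1 only for treated firms after their own start
set.seed(1)
panel$y <- 1 + 0.5 * panel$policy + rnorm(nrow(panel), sd = 0.1)  # made-up outcome
## firm and time fixed effects absorb the intercept, Treat_i and common time shocks
summary(lm(y ~ policy + factor(firm) + factor(time), data = panel))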
55,067 | Correlation multinomial distribution | The probability generating function is
$$\eqalign{
f(x_1,\ldots, x_c) &= \sum_{k_1, \ldots, k_c} \Pr((X_1,\ldots,X_c)=(k_1,\ldots, k_c)) x_1^{k_1}\cdots x_c^{k_c}\\
&= \sum_{k_1,\ldots,k_c} \binom{n}{k_1\cdots k_c} (\pi_1 x_1)^{k_1}\cdots (\pi_c x_c)^{k_c} \\
&= (\pi_1 x_1 + \cdots + \pi_c x_c)^n.\tag{1}
}$$
The first equality is the definition of the pgf, the second is the formula for the Multinomial distribution, and the third generalizes the Binomial Theorem (and often is taken to define the multinomial coefficients $\binom{n}{k_1\cdots k_c}$, whose values we do not need to know!).
Consequently (for $n \ge 2$ and $i\ne j$) the expectation of $X_iX_j$ is
$$\eqalign{\mathbb{E}(X_iX_j) &= \sum_{k_1, \ldots, k_c} \Pr((X_1,\ldots,X_c)=(k_1,\ldots, k_c)) k_i k_j\\
&=\left(x_i x_j\frac{\partial^2}{\partial x_i \partial x_j}f\right)(1,1,\ldots,1) \\
&= (1)(1)n(n-1)\pi_i \pi_j (\pi_1 1 + \cdots + \pi_c 1)^{n-2} \\
&= n(n-1)\pi_i \pi_j.
}$$
The first equality is the definition of expectation, the second is the result of differentiating the preceding sum term-by-term, the third is the result of differentiating formula $(1)$ instead, and the fourth follows from the law of total probability, $\pi_1 + \cdots + \pi_c = 1$.
(Obviously this formula for the expectation continues to hold when $n=0$ or $n=1$.)
Therefore (using a well-known formula for the covariance in terms of the first two moments and recognizing that $\mathbb{E}(X_k) = n\pi_k$ for any $k$),
$$\operatorname{Cov}(X_i, X_j) = \mathbb{E}(X_iX_j) -\mathbb{E}(X_i)\mathbb{E}(X_j) = n(n-1)\pi_i\pi_j - (n\pi_i)(n\pi_j) = -n\pi_i\pi_j.$$
The rest is easy algebra. | Correlation multinomial distribution | The probability generating function is
$$\eqalign{
f(x_1,\ldots, x_c) &= \sum_{k_1, \ldots, k_c} \Pr((X_1,\ldots,X_c)=(k_1,\ldots, k_c)) x_1^{k_1}\cdots x_c^{k_c}\\
&= \sum_{k_1,\ldots,k_c} \binom{n}{ | Correlation multinomial distribution
The probability generating function is
$$\eqalign{
f(x_1,\ldots, x_c) &= \sum_{k_1, \ldots, k_c} \Pr((X_1,\ldots,X_c)=(k_1,\ldots, k_c)) x_1^{k_1}\cdots x_c^{k_c}\\
&= \sum_{k_1,\ldots,k_c} \binom{n}{k_1\cdots k_c} (\pi_1 x_1)^{k_1}\cdots (\pi_c x_c)^{k_c} \\
&= (\pi_1 x_1 + \cdots + \pi_c x_c)^n.\tag{1}
}$$
The first equality is the definition of the pgf, the second is the formula for the Multinomial distribution, and the third one generalizes the Binomial Theorem (and often is taken to define the multinomial coefficients $\binom{n}{k_1\cdots k_c}$, whose values we do not need to know!).
Consequently (for $n \ge 2$ and $i\ne j$) the expectation of $X_iX_j$ is
$$\eqalign{\mathbb{E}(X_iX_j) &= \sum_{k_1, \ldots, k_c} \Pr((X_1,\ldots,X_c)=(k_1,\ldots, k_c)) k_i k_j\\
&=\left(x_i x_j\frac{\partial^2}{\partial x_i \partial x_j}f\right)(1,1,\ldots,1) \\
&= (1)(1)n(n-1)\pi_i \pi_j (\pi_1 1 + \cdots + \pi_c 1)^{n-2} \\
&= n(n-1)\pi_i \pi_j.
}$$
The first equality is the definition of expectation, the second is the result of differentiating the preceding sum term-by-term, the third is the result of differentiating formula $(1)$ instead, and the fourth follows from the law of total probability, $\pi_1 + \cdots + \pi_c = 1$.
(Obviously this formula for the expectation continues to hold when $n=0$ or $n=1$.)
Therefore (using a well-known formula for the covariance in terms of the first two moments and recognizing that $\mathbb{E}(X_k) = n\pi_k$ for any $k$),
$$\operatorname{Cov}(X_i, X_j) = \mathbb{E}(X_iX_j) -\mathbb{E}(X_i)\mathbb{E}(X_j) = n(n-1)\pi_i\pi_j - (n\pi_i)(n\pi_j) = -n\pi_i\pi_j.$$
The rest is easy algebra. | Correlation multinomial distribution
The probability generating function is
$$\eqalign{
f(x_1,\ldots, x_c) &= \sum_{k_1, \ldots, k_c} \Pr((X_1,\ldots,X_c)=(k_1,\ldots, k_c)) x_1^{k_1}\cdots x_c^{k_c}\\
&= \sum_{k_1,\ldots,k_c} \binom{n}{ |
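As a quick numerical check of the covariance result $\operatorname{Cov}(X_i, X_j) = -n\pi_i\pi_j$ derived in the entry above, here is a small Monte Carlo sketch in Python/numpy; the particular $n$, $\pi$ and simulation size are arbitrary choices:

import numpy as np

rng = np.random.default_rng(0)
n, pi = 10, np.array([0.2, 0.3, 0.5])
draws = rng.multinomial(n, pi, size=200_000)          # 200,000 multinomial count vectors
empirical = np.cov(draws[:, 0], draws[:, 1])[0, 1]    # sample covariance of X_1 and X_2
theoretical = -n * pi[0] * pi[1]                      # -n * pi_i * pi_j = -0.6
print(empirical, theoretical)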
55,068 | Sampling without Replacement and Non-uniform Distribution | Sounds like you are looking for the multivariate Wallenius' noncentral hypergeometric distribution, i.e. weighted balls, multiple colors, multiple draws; see the Wikipedia article on that distribution.
I did try to find something analytic, but even in simple cases this problem becomes increasingly complicated. I would advise looking at some of the packages in the link. | Sampling without Replacement and Non-uniform Distribution | Sounds like you are looking for the multivariate Wallenius' noncentral hypergeometric distribution. ie weighted balls, multiple colors, multiple draws.wikipedia link
I did try to find something analyt | Sampling without Replacement and Non-uniform Distribution
Sounds like you are looking for the multivariate Wallenius' noncentral hypergeometric distribution, i.e. weighted balls, multiple colors, multiple draws; see the Wikipedia article on that distribution.
I did try to find something analytic, but even in simple cases this problem becomes increasingly complicated. I would advise looking at some of the packages in the link. | Sampling without Replacement and Non-uniform Distribution
Sounds like you are looking for the multivariate Wallenius' noncentral hypergeometric distribution. ie weighted balls, multiple colors, multiple draws.wikipedia link
I did try to find something analyt |
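I cannot vouch for any specific package here, but the weighted draw-without-replacement process behind Wallenius' distribution is easy to simulate directly, which at least lets you estimate the probabilities numerically. A minimal Python sketch with made-up weights:

import numpy as np

def weighted_sample_without_replacement(weights, k, rng):
    # draw k distinct indices one at a time, with probability proportional
    # to weight among the items still remaining
    w = np.asarray(weights, dtype=float).copy()
    picked = []
    for _ in range(k):
        i = rng.choice(len(w), p=w / w.sum())
        picked.append(int(i))
        w[i] = 0.0                       # removed: cannot be drawn again
    return picked

rng = np.random.default_rng(1)
print(weighted_sample_without_replacement([5, 1, 1, 1, 2], k=3, rng=rng))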
55,069 | Sampling without Replacement and Non-uniform Distribution | A long comment which I have had to post as an answer. I have
no objection if some kindly Moderator converts it into a series of
comments on the main question.
Your question is essentially unanswerable in one sense.
You have told us that the probability of drawing the $i$-th object on the first draw is $p_i$. In the general scheme of drawing with replacement, the probability of drawing the $i$-th object remains the same on all further draws.
However, in the general scheme of drawing without replacement
(which you want to consider), there is no immediate answer to the question: what is the
probability of drawing the $i$-th object on the second draw? the third draw? etc. There is also the question whether we want the unconditional
probability of drawing the $i$-th object on the second draw, or the
conditional probability of drawing the $i$-th object on the second draw given that the first draw resulted in the $j$-th object being drawn?
One possibility to consider is that the (conditional) probabilities are obtained by re-normalizing the probabilities
of the remaining balls at each step. That is, we write
\begin{align}
P(i_1) &= p_{i_1},\\
P(i_1, i_2) &= p_{i_1}\cdot \frac{p_{i_2}}{1-p_{i_1}}\\
P(i_1, i_2,i_3) &= p_{i_1}\cdot \frac{p_{i_2}}{1-p_{i_1}}
\cdot \frac{p_{i_3}}{1-p_{i_1}-p_{i_2}}\\
P(i_1, i_2,i_3,i_4) &= p_{i_1}\cdot \frac{p_{i_2}}{1-p_{i_1}}
\cdot \frac{p_{i_3}}{1-p_{i_1}-p_{i_2}}
\cdot \frac{p_{i_4}}{1-p_{i_1}-p_{i_2}-p_{i_3}}
\end{align}
and so on. But there could be other kinds of assumptions that might be
made about what happens on successive draws, and so unless you
specify what the probabilities are on successive draws, your
question is unanswerable. | Sampling without Replacement and Non-uniform Distribution | A long comment which I have have had to post as an answer. I have
no objection if some kindly Moderator converts it into a series of
comments on the main question.
Your question is essentially unansw | Sampling without Replacement and Non-uniform Distribution
A long comment which I have had to post as an answer. I have
no objection if some kindly Moderator converts it into a series of
comments on the main question.
Your question is essentially unanswerable in one sense.
You have told us that the probability of drawing the $i$-th object on the first draw is $p_i$. In the general scheme of drawing with replacement, the probability of drawing the $i$-th object remains the same on all further draws.
However, in the general scheme of drawing without replacement
(which you want to consider), there is no immediate answer to the question: what is the
probability of drawing the $i$-th object on the second draw? the third draw? etc. There is also the question whether we want the unconditional
probability of drawing the $i$-th object on the second draw, or the
conditional probability of drawing the $i$-th object on the second draw given that the first draw resulted in the $j$-th object being drawn?
One possibility to consider is that the (conditional) probabilities are obtained by re-normalizing the probabilities
of the remaining balls at each step. That is, we write
\begin{align}
P(i_1) &= p_{i_1},\\
P(i_1, i_2) &= p_{i_1}\cdot \frac{p_{i_2}}{1-p_{i_1}}\\
P(i_1, i_2,i_3) &= p_{i_1}\cdot \frac{p_{i_2}}{1-p_{i_1}}
\cdot \frac{p_{i_3}}{1-p_{i_1}-p_{i_2}}\\
P(i_1, i_2,i_3,i_4) &= p_{i_1}\cdot \frac{p_{i_2}}{1-p_{i_1}}
\cdot \frac{p_{i_3}}{1-p_{i_1}-p_{i_2}}
\cdot \frac{p_{i_4}}{1-p_{i_1}-p_{i_2}-p_{i_3}}
\end{align}
and so on. But there could be other kinds of assumptions that might be
made about what happens on successive draws, and so unless you
specify what the probabilities are on successive draws, your
question is unanswerable. | Sampling without Replacement and Non-uniform Distribution
A long comment which I have have had to post as an answer. I have
no objection if some kindly Moderator converts it into a series of
comments on the main question.
Your question is essentially unansw |
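Under the renormalisation scheme written out in the answer above, the probability of any particular ordered sequence of draws is straightforward to compute. A small Python sketch (the probability vector is invented purely for illustration):

def ordered_draw_probability(p, sequence):
    # P(i_1, ..., i_k) when the remaining probabilities are renormalised after each draw
    prob, remaining = 1.0, 1.0
    for i in sequence:
        prob *= p[i] / remaining
        remaining -= p[i]
    return prob

p = [0.5, 0.2, 0.2, 0.1]
print(ordered_draw_probability(p, [0, 2]))   # 0.5 * (0.2 / 0.5) = 0.2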
55,070 | Sampling without Replacement and Non-uniform Distribution | Since you asked for an approximation, I will propose one. It's not perfect: it requires additional reasonable assumptions; I won't prove it's accurate; I'm not sure it's a valid pmf.
If $N$ is large, $K \ll N$, and the $p_j$'s are small, then drawing once gives a probability of $p_j$. The next draw will be only slightly better than $p_j$, etc. The draws are not independent, but are not too dependent. So I would approximate drawing the desired object in $K$ trials as drawing one sample in a Poisson with $\lambda=K \times p_j$.
This gives you a probability of $K \times p_j \times e^{-K p_j}$. | Sampling without Replacement and Non-uniform Distribution | Since you asked for an approximation, I will propose one. It's not perfect: it requires additional reasonable assumptions; I won't prove it's accurate; I'm not sure it's a valid pmf.
If $N$ is large, | Sampling without Replacement and Non-uniform Distribution
Since you asked for an approximation, I will propose one. It's not perfect: it requires additional reasonable assumptions; I won't prove it's accurate; I'm not sure it's a valid pmf.
If $N$ is large, $K \ll N$, and the $p_j$'s are small, then drawing once gives a probability of $p_j$. The next draw will be only slightly better than $p_j$, etc. The draws are not independent, but are not too dependent. So I would approximate drawing the desired object in $K$ trials as drawing one sample in a Poisson with $\lambda=K \times p_j$.
This gives you a probability of $K \times p_j \times e^{-K p_j}$. | Sampling without Replacement and Non-uniform Distribution
Since you asked for an approximation, I will propose one. It's not perfect: it requires additional reasonable assumptions; I won't prove it's accurate; I'm not sure it's a valid pmf.
If $N$ is large, |
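To get a feel for the approximation proposed above, one can compare $K p_j e^{-K p_j}$ against a brute-force simulation of weighted draws without replacement (renormalising at each step). Everything below — the weights, $K$, the chosen object and the number of trials — is made up, and the two numbers should only agree roughly, and only when $K p_j$ is small:

import numpy as np

rng = np.random.default_rng(2)
N, K = 200, 20
weights = np.arange(1, N + 1, dtype=float)
p = weights / weights.sum()
j = N // 2                                   # an object with p_j around 0.005
approx = K * p[j] * np.exp(-K * p[j])        # the Poisson-based approximation

hits, trials = 0, 5_000
for _ in range(trials):
    w = p.copy()
    drawn = False
    for _ in range(K):
        i = rng.choice(N, p=w / w.sum())     # renormalised draw without replacement
        drawn = drawn or (i == j)
        w[i] = 0.0
    hits += drawn
print(approx, hits / trials)                 # should land in the same ballpark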
55,071 | Intuition about f1 score | So this is a confusion matrix in case any readers haven't seen one:
$$
\begin{array}{l c c}
& Predict + & Predict-\\
Actual + & a & b \\
Actual - & c & d \\
\end{array}
$$
And this is the formula for calculating $F_1$ from a confusion matrix:
$$F_1 = \frac{2a}{2a+b+c}$$
So if half the items are positive in reality and all are predicted positive:
$$\begin{array}{l c c}
& Predict + & Predict-\\
Actual + & 50 & 0 \\
Actual - & 50 & 0 \\
\end{array}$$
Then $F_1$ in this case is equal to $2/3$ as you stated in your question:
$$F_1 = \frac{2(50)}{2(50)+50+0} = 0.67$$
But if half the items are positive in reality and all the items are randomly predicted, then there are many different ways for this to occur and the resulting $F_1$ score will range between $0.00$ and $1.00$. Let's imagine we have just two items, then there are four possible results for the random classifier:
$$\begin{array}{l c c c c c}
Item & Reality & Predict_1 & Predict_2 & Predict_3 & Predict_4 \\ \hline
1 & + & + & + & - & -\\
2 & - & + & - & + & -\\ \hline
F_1 & & 0.67 & 1.00 & 0.00 & 0.00\\
\end{array}$$
So all that was just to say that the value of $F_1$ for the random classifier is actually quite variable, unlike the always-positive classifier. The alternative I'd suggest is the $S$ score first proposed by Bennett, Alpert, & Goldstein (1954). It assumes a 50% probability of classifying any given item into its correct class "by chance" alone and discounts this from the final score. It also uses all cells of the confusion matrix (unlike $F_1$ which ignores $d$ or the number of "true negatives").
$$p_o = \frac{a+d}{a+b+c+d}$$
$$S = \frac{p_o - 0.5}{1-0.5}=2p_o-1$$
In the first example above, where $F_1=0.67$, the $S$ score would be equal to $0.00$ and would capture the idea that the predictions are doing no better than would be expected by chance.
$$p_o = \frac{50+0}{50+0+50+0}=0.50$$
$$S = 2(0.50)-1 = 0.00$$
In the second example above, where $0.00 \le F_1 \le 1.00$ with a mean of $0.42$, the $S$ score would actually range between $-1.00$ and $1.00$ with a mean of $0.00$. Thus, both classifiers would be deemed equal by the $S$ score:
$$\begin{array}{l c c c c c}
Item & Reality & Predict_1 & Predict_2 & Predict_3 & Predict_4 \\ \hline
1 & + & + & + & - & -\\
2 & - & + & - & + & -\\ \hline
F_1 & & 0.67 & 1.00 & 0.00 & 0.00\\
S & & 0.00 & 1.00 & -1.00 & 0.00 \\
\end{array}$$
You can find more information about classification reliability at my website, including a history of the S score and functions for calculating it in MATLAB and even generalizing it to multiple classifiers, multiple categories, and non-nominal categories. | Intuition about f1 score | So this is a confusion matrix in case any readers haven't seen one:
$$
\begin{array}{l c c}
& Predict + & Predict-\\
Actual + & a & b \\
Actual - & c & d \\
\end{array}
$$
And this is the formula for | Intuition about f1 score
So this is a confusion matrix in case any readers haven't seen one:
$$
\begin{array}{l c c}
& Predict + & Predict-\\
Actual + & a & b \\
Actual - & c & d \\
\end{array}
$$
And this is the formula for calculating $F_1$ from a confusion matrix:
$$F_1 = \frac{2a}{2a+b+c}$$
So if half the items are positive in reality and all are predicted positive:
$$\begin{array}{l c c}
& Predict + & Predict-\\
Actual + & 50 & 0 \\
Actual - & 50 & 0 \\
\end{array}$$
Then $F_1$ in this case is equal to $2/3$ as you stated in your question:
$$F_1 = \frac{2(50)}{2(50)+50+0} = 0.67$$
But if half the items are positive in reality and all the items are randomly predicted, then there are many different ways for this to occur and the resulting $F_1$ score will range between $0.00$ and $1.00$. Let's imagine we have just two items, then there are four possible results for the random classifier:
$$\begin{array}{l c c c c c}
Item & Reality & Predict_1 & Predict_2 & Predict_3 & Predict_4 \\ \hline
1 & + & + & + & - & -\\
2 & - & + & - & + & -\\ \hline
F_1 & & 0.67 & 1.00 & 0.00 & 0.00\\
\end{array}$$
So all that was just to say that the value of $F_1$ for the random classifier is actually quite variable, unlike the always-positive classifier. The alternative I'd suggest is the $S$ score first proposed by Bennett, Alpert, & Goldstein (1954). It assumes a 50% probability of classifying any given item into its correct class "by chance" alone and discounts this from the final score. It also uses all cells of the confusion matrix (unlike $F_1$ which ignores $d$ or the number of "true negatives").
$$p_o = \frac{a+d}{a+b+c+d}$$
$$S = \frac{p_o - 0.5}{1-0.5}=2p_o-1$$
In the first example above, where $F_1=0.67$, the $S$ score would be equal to $0.00$ and would capture the idea that the predictions are doing no better than would be expected by chance.
$$p_o = \frac{50+0}{50+0+50+0}=0.50$$
$$S = 2(0.50)-1 = 0.00$$
In the second example above, where $0.00 \le F_1 \le 1.00$ with a mean of $0.42$, the $S$ score would actually range between $-1.00$ and $1.00$ with a mean of $0.00$. Thus, both classifiers would be deemed equal by the $S$ score:
$$\begin{array}{l c c c c c}
Item & Reality & Predict_1 & Predict_2 & Predict_3 & Predict_4 \\ \hline
1 & + & + & + & - & -\\
2 & - & + & - & + & -\\ \hline
F_1 & & 0.67 & 1.00 & 0.00 & 0.00\\
S & & 0.00 & 1.00 & -1.00 & 0.00 \\
\end{array}$$
You can find more information about classification reliability at my website, including a history of the S score and functions for calculating it in MATLAB and even generalizing it to multiple classifiers, multiple categories, and non-nominal categories. | Intuition about f1 score
So this is a confusion matrix in case any readers haven't seen one:
$$
\begin{array}{l c c}
& Predict + & Predict-\\
Actual + & a & b \\
Actual - & c & d \\
\end{array}
$$
And this is the formula for |
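Both scores discussed in the entry above are one-liners given the four confusion-matrix cells. A small Python sketch using the same $a, b, c, d$ labelling; the example counts reproduce the always-positive classifier from the answer:

def f1_from_cells(a, b, c, d):
    # a = true positives, b = false negatives, c = false positives, d = true negatives
    return 2 * a / (2 * a + b + c)

def s_from_cells(a, b, c, d):
    p_o = (a + d) / (a + b + c + d)          # observed proportion of correct classifications
    return 2 * p_o - 1                       # chance-corrected against a 50% baseline

print(f1_from_cells(50, 0, 50, 0))           # 0.67
print(s_from_cells(50, 0, 50, 0))            # 0.0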
55,072 | How does alpha relate to C in Scikit-Learn's SGDClassifier? | The correct scaling is C_svc * n_samples = 1 / alpha_sgd instead of C_svc = n_samples / alpha_sgd; the documentation seems to be incorrect.
In your example, it gives alpha = 0.002, and the results are similar to the SVC estimator's.
After a quick overview, it seems that the equivalence is:
1. / C_svr ~
1. / C_svc ~
1. / C_logistic ~
alpha_elastic_net * n_samples ~
alpha_lasso * n_samples ~
alpha_sgd * n_samples ~
alpha_ridge
Note that this is not an API-design failure, since scaling is more appropriate in some cases (e.g. with L1 regularization), whereas no-scaling is preferred in other cases (e.g. with L2 regularization).
See also this example. | How does alpha relate to C in Scikit-Learn's SGDClassifier? | The correct scaling is C_svc * n_samples = 1 / alpha_sgd instead of C_svc = n_samples / alpha_sgd, the documentation seems to be incorrect.
In your example, it gives alpha = 0.002, and the results are | How does alpha relate to C in Scikit-Learn's SGDClassifier?
The correct scaling is C_svc * n_samples = 1 / alpha_sgd instead of C_svc = n_samples / alpha_sgd; the documentation seems to be incorrect.
In your example, it gives alpha = 0.002, and the results are similar to the SVC estimator's.
After a quick overview, it seems that the equivalence is:
1. / C_svr ~
1. / C_svc ~
1. / C_logistic ~
alpha_elastic_net * n_samples ~
alpha_lasso * n_samples ~
alpha_sgd * n_samples ~
alpha_ridge
Note that this is not an API-design failure, since scaling is more appropriate in some cases (e.g. with L1 regularization), whereas no-scaling is preferred in other cases (e.g. with L2 regularization).
See also this example. | How does alpha relate to C in Scikit-Learn's SGDClassifier?
The correct scaling is C_svc * n_samples = 1 / alpha_sgd instead of C_svc = n_samples / alpha_sgd, the documentation seems to be incorrect.
In your example, it gives alpha = 0.002, and the results are |
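One way to sanity-check the stated scaling yourself is to fit a hinge-loss SGDClassifier and a linear SVC on the same toy data and compare coefficients. A hedged Python sketch — the dataset, the C value and the iteration settings are my own choices, and the match is only approximate because SGD is stochastic:

import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import LinearSVC
from sklearn.linear_model import SGDClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
C = 1.0
alpha = 1.0 / (C * X.shape[0])        # C_svc * n_samples = 1 / alpha_sgd

svc = LinearSVC(C=C, loss="hinge", max_iter=10_000).fit(X, y)
sgd = SGDClassifier(loss="hinge", alpha=alpha, max_iter=10_000, random_state=0).fit(X, y)
print(np.round(svc.coef_, 2))
print(np.round(sgd.coef_, 2))         # should be in the same ballpark as the SVC coefficients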
55,073 | Advantages of convolutional neural networks over "simple" feed-forward networks? | Any time that you can legitimately make stronger assumptions, you can obtain stronger results. Convolutional networks make the assumption of locality, and hence are more powerful. This depends on data that in fact exhibits locality (autocorrelation) like images or time series.
Intuitively, if you are looking at an image, pixels in a region of the image are more likely to be related than pixels far away. So you can save a lot of neuron wiring if you don't (directly) wire distant pixels to the same neuron. With less wiring, you have more data per coefficient, which speeds things up and makes for better results. | Advantages of convolutional neural networks over "simple" feed-forward networks? | Any time that you can legitimately make stronger assumptions, you can obtain stronger results. Convolutional networks make the assumption of locality, and hence are more powerful. This depends on data | Advantages of convolutional neural networks over "simple" feed-forward networks?
Any time that you can legitimately make stronger assumptions, you can obtain stronger results. Convolutional networks make the assumption of locality, and hence are more powerful. This depends on data that in fact exhibits locality (autocorrelation) like images or time series.
Intuitively, if you are looking at an image, pixels in a region of the image are more likely to be related than pixels far away. So you can save a lot of neuron wiring if you don't (directly) wire distant pixels to the same neuron. With less wiring, you have more data per coefficient, which speeds things up and makes for better results. | Advantages of convolutional neural networks over "simple" feed-forward networks?
Any time that you can legitimately make stronger assumptions, you can obtain stronger results. Convolutional networks make the assumption of locality, and hence are more powerful. This depends on data |
55,074 | Advantages of convolutional neural networks over "simple" feed-forward networks? | Deep nets are a general class whose sole mandatory characteristic is an unusual number of neural layers, whereas convolutional networks are a specific technique that can be included in a deep net, alongside other techniques such as LSTMs, perceptrons, Kohonen nets, etc. Keep in mind that "deep learning" is something of a buzz word with a fuzzy definition; in practice, these other neural techniques aren't included anywhere near as often in the equally fuzzy definition of deep nets. I believe that most "deep" networks consist mainly of convolutional layers, based on the academic papers I've read so far. The term doesn't necessarily imply or demand the inclusion of convolutionals in every deep network - you could, for example, have a deep net consisting of two dozen perceptron layers, although that would hardly be practical. There are practical advantages, however, to piling convolutionals on top of each other in stacks, particularly the ability to leverage convolution filters to create hierarchical representations. There may be other subtle advantages to convolutionals, but that is by far the dominant one. I'll leave it to more experienced users to provide the mathematical and logical details as to why convolutionals can be stacked to create hierarchical representations, but suffice it to say that this is the "short form" answer to your questions: 1) they're better mainly because of the convolutional filtering operation, and 2) it is something of an apples-and-oranges comparison, since one is a
general class and the other is a specific type often included within it. | Advantages of convolutional neural networks over "simple" feed-forward networks? | Deep nets are a general class whose solely mandatory characteristic is an unusual number of neural layers, whereas convolutional networks are a specific technique that can be included in a deep net, a | Advantages of convolutional neural networks over "simple" feed-forward networks?
Deep nets are a general class whose solely mandatory characteristic is an unusual number of neural layers, whereas convolutional networks are a specific technique that can be included in a deep net, alongside other techniques such as LSTMs, perceptrons, Kohonen nets, etc. Keep in mind that "deep learning" is something of a buzz word with a fuzzy definition; in practice, these other neural techniques aren't included anywhere near as often in the equally fuzzy definition of deep nets. I believe that most "deep" networks consist mainly of convolutional layers, based on the academic papers I've read so far. The term doesn't necessarily imply or demand the inclusion of convolutionals in every deep network - you could, for example, have a deep net consisting of two dozen perceptron layers, although that would hardly be practical. There are practical advantages, however, to piling convolutionals on top of each other in stacks, particularly the ability to leverage convolution filters to create hierarchical representations. There may be other subtle advantages to convolutionals, but that is by far the dominant one. I'll leave it to more experienced users to provide the mathematical and logical details as to why convolutionals can be stacked to create hierarhical representations, but suffice it to say that this is the "short form" answer to your questions: 1) they're better mainly because of the convolutional filtering operation and 2) is something of an apples and oranges comparison, since one is a
general class and the other is a specific type often included within it. | Advantages of convolutional neural networks over "simple" feed-forward networks?
Deep nets are a general class whose solely mandatory characteristic is an unusual number of neural layers, whereas convolutional networks are a specific technique that can be included in a deep net, a |
55,075 | Advantages of convolutional neural networks over "simple" feed-forward networks? | The sentence said by @Wayne summarizes it pretty well - "Any time that you can legitimately make stronger assumptions, you can obtain stronger results". Though, a bit more detail can be added to why CNNs are superior models for image data compared to the multi-layer perceptron.
Fully connected layers are the fundamental components of a multilayer perceptron, the simplest form of an artificial neural network. Often, the final component in a convolutional neural network (CNN) is a fully connected layer because it performs the regression or classification task.
The components at the start of a neural network typically serve the purpose of reducing the inputs to a more digestible form for classification, regression, etc. In the context of a CNN, one of the most important components is the convolution operation and its job is to reduce the image to a more suitable form.
For the convolution operation, the input image is convolved with a filter to produce a new image, called a feature map, which reveals the parts of the input that are most similar to the filter. For example, say we want to automate the detection of hemocyanin particles in microscopy images via a CNN model. One thing a CNN might learn to do is to convolve the input image with a filter that looks like a hemocyanin particle. The pixels in the feature map will be brighter in the regions of the image that contain hemocyanin particles.
It's pretty straight forward to see why the reduced image is more useful for detection than the original. This is the advantage of a CNN. | Advantages of convolutional neural networks over "simple" feed-forward networks? | The sentence said by @Wayne summarizes it pretty well - "Any time that you can legitimately make stronger assumptions, you can obtain stronger results". Though, a bit more detail can be added to why C | Advantages of convolutional neural networks over "simple" feed-forward networks?
The sentence said by @Wayne summarizes it pretty well - "Any time that you can legitimately make stronger assumptions, you can obtain stronger results". Though, a bit more detail can be added to why CNNs are superior models for image data compared to the multi-layer perceptron.
Fully connected layers are the fundamental components of a multilayer perceptron, the simplest form of an artificial neural network. Often, the final component in a convolutional neural network (CNN) is a fully connected layer because it performs the regression or classification task.
The components at the start of a neural network typically serve the purpose of reducing the inputs to a more digestible form for classification, regression, etc. In the context of a CNN, one of the most important components is the convolution operation and its job is to reduce the image to a more suitable form.
For the convolution operation, the input image is convolved with a filter to produce a new image, called a feature map, which reveals the parts of the input that are most similar to the filter. For example, say we want to automate the detection of hemocyanin particles in microscopy images via a CNN model. One thing a CNN might learn to do is to convolve the input image with a filter that looks like a hemocyanin particle. The pixels in the feature map will be brighter in the regions of the image that contain hemocyanin particles.
It's pretty straight forward to see why the reduced image is more useful for detection than the original. This is the advantage of a CNN. | Advantages of convolutional neural networks over "simple" feed-forward networks?
The sentence said by @Wayne summarizes it pretty well - "Any time that you can legitimately make stronger assumptions, you can obtain stronger results". Though, a bit more detail can be added to why C |
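To make the "convolve the image with a filter to get a feature map" idea from the entry above concrete, here is a bare-bones 2-D sliding-window correlation in Python/numpy. It is purely illustrative — real CNN libraries are far more efficient, and they learn the filter values rather than fixing them:

import numpy as np

def feature_map(image, filt):
    # slide the filter over the image and record the response at every position
    H, W = image.shape
    h, w = filt.shape
    out = np.zeros((H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + h, j:j + w] * filt)
    return out

img = np.zeros((6, 6)); img[2:4, 2:4] = 1.0      # a bright 2x2 "particle" in a dark image
filt = np.ones((2, 2))                           # a filter shaped like the particle
print(feature_map(img, filt))                    # largest responses where the particle sits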
55,076 | Maximum of a set of values from given mean and median | This is quite straightforward.
Start with any five different natural numbers* with the required properties (median and mean both 7) and then see how you can change them without changing the mean or the median.
* As Adrian notes and as discussed at the link above, whether the natural numbers includes 0 depends on who you're talking to; you will need to check which definition you're expected to use (or to do it for both cases -- once you've done it for one definition, the other is simple)
Manipulating the set of numbers while keeping the mean and median unchanged is easiest if you have the five starting numbers in sorted order.
It's trivial to keep both the mean and the median the same as you change some of the numbers; in each case any change you make must leave something unchanged (you note a connection between the mean and the sum - keep that connection in mind to figure out what has to stay unchanged as you change numbers).
Once you see what that thing is in each case (one for the median and one for the mean), it's a simple matter to change everything else as far as possible in whichever direction increases the largest number, and finally to confirm that no further changes can be made without breaking at least one of the conditions.
You may like to get things rolling by thinking about how small the smallest number can be (why as small as can be? ... think about that, too)
Since it's been up more than a day I can perhaps add a little more information. If the OP posts some additional thoughts I may go further.
Clearly the two invariants to play with are that $\sum_i x_i = 35$ (which makes the mean correct) and that $x_{(3)}=7$ (the median is then correct). So whatever you do must satisfy those points.
Imagine we start with the numbers $(4,5,7,9,10)$ for $(x_{(1)}, ...,x_{(5)})$.
This satisfies both conditions.
Now we would like to make $x_{(5)}$ larger but must keep the sum at $35$. Which other numbers can be moved? Which direction should they go? How far can they go? | Maximum of a set of values from given mean and median | This is quite straightforward.
Start with any five different natural numbers* with the required properties (median and mean both 7) and then see how you can change them without changing the mean or th | Maximum of a set of values from given mean and median
This is quite straightforward.
Start with any five different natural numbers* with the required properties (median and mean both 7) and then see how you can change them without changing the mean or the median.
* As Adrian notes and as discussed at the link above, whether the natural numbers includes 0 depends on who you're talking to; you will need to check which definition you're expected to use (or to do it for both cases -- once you've done it for one definition, the other is simple)
Manipulating the set of numbers while keeping the mean and median unchanged is easiest if you have the five starting numbers in sorted order.
It's trivial to keep both the mean and the median the same as you change some of the numbers; in each case any change you make must leave something unchanged (you note a connection between the mean and the sum - keep that connection in mind to figure out what has to stay unchanged as you change numbers).
Once you see what that thing is in each case (one for the median and one for the mean), it's a simple matter to change everything else as far as possible in whichever direction increases the largest number, and finally to confirm that no further changes can be made without breaking at least one of the conditions.
You may like to get things rolling by thinking about how small the smallest number can be (why as small as can be? ... think about that, too)
Since it's been up more than a day I can perhaps add a little more information. If the OP posts some additional thoughts I may go further.
Clearly the two invariants to play with are that $\sum_i x_i = 35$ (which makes the mean correct) and that $x_{(3)}=7$ (the median is then correct). So whatever you do must satisfy those points.
Imagine we start with the numbers $(4,5,7,9,10)$ for $(x_{(1)}, ...,x_{(5)})$.
This satisfies both conditions.
Now we would like to make $x_{(5)}$ larger but must keep the sum at $35$. Which other numbers can be moved? Which direction should they go? How far can they go? | Maximum of a set of values from given mean and median
This is quite straightforward.
Start with any five different natural numbers* with the required properties (median and mean both 7) and then see how you can change them without changing the mean or th |
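Because the search space in this puzzle is tiny, readers who have worked through the hints above can check their own answer by brute force. A short Python sketch — it takes the natural numbers to start at 1; change the range to start at 0 for the other convention:

from itertools import combinations

best = None
for combo in combinations(range(1, 31), 5):        # five distinct naturals, in sorted order
    if sum(combo) == 35 and combo[2] == 7:         # mean 7 (i.e. sum 35) and median 7
        if best is None or combo[4] > best[4]:
            best = combo
print(best)                                        # the set whose largest element is maximal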
55,077 | Simple question about Ornstein-Uhlenbeck process | The method discussed in the article you mention is directly inspired from the paper 'Statistical Arbitrage in the U.S. Equities Market' by Avellaneda & Lee (2008). Most of your questions are answered in the Appendix p.44.
Suppose we are at the end of trading day $D$. In the Avellaneda & Lee paper the s-score is defined as
$$ s = \frac{X - m}{\sigma_{eq}} $$
It is a measure of the distance to equilibrium of the current cointegration residual $X$ i.e. how far away is $X$ (in standard deviation units) from the theoretical equilibrium value $m$ predicted by a mean-reversion model yet to be estimated.
In practice, using the notation of your paper and considering a 60-day trailing estimation window, $X$ evaluates as
$$ X := \sum_{j=t_1}^{t_{60}} \epsilon_j $$
with $\{\epsilon_j\}$ the regression residuals computed as
$$ \epsilon_j = R_j^1 - \left(\hat{\beta}_0 + \hat{\beta}R_j^2\right),\ \ \ \forall j=t_1,...,t_{60} $$
where $\{R_j^i\}$ figures the 60 most recent close-to-close returns of the 2 securities of interest in case of pairs trading.
If $\hat{\beta}_0$ and $\hat{\beta}$ are OLS estimators then it is well-known that the regression residuals $\{\epsilon_j\}$ will be uncorrelated and have zero mean such that $X = 0$ and the s-score becomes
$$ s = -\frac{m}{\sigma_{eq}} $$
where the parameters $m$ (stationary mean) and $\sigma_{eq}$ (square root of stationary variance) of the Ornstein-Uhlenbeck (OU) process $(X_t)_{t\geq0}$ are estimated as follows:
Calculate the auxiliary values $\{X_k\}$ as follows
$$X_k = \sum_{j=t_1}^{k} \epsilon_j,\ \ \ \forall k = t_1,\dots,t_{60}$$
where the $\epsilon_j$ are the factor model regression residuals discussed above (note that $X = X_{60}$ by construction).
Calibrate an AR(1) model to the previous values
$$X_{k+1} = a + b X_k + \zeta_{k+1},\ \ \ \forall k = t_1,\dots,t_{59}$$
i.e. perform yet another linear (lagged) regression to determine $\hat{a}$, $\hat{b}$ (intercept and slope) along with the regression residuals $\{\zeta_{k}\}$.
Comparing the Euler discretisation of the Ornstein-Uhlenbeck SDE (see equation 2.3 in your paper) to the AR(1) model above, infer the parameters of the OU process from $\hat{a}$, $\hat{b}$ and the auxiliary regression residuals $\{\zeta_{k}\}$ (see the original paper). At the end you should end up with
$$ m = \frac{\hat{a}}{1-\hat{b}} $$
$$ \sigma_{eq} = \sqrt{\frac{\text{var}(\{\zeta_{k}\})}{1-\hat{b}^2}}$$
The idea is then that, depending on the strength of the mean-reversion signal (= the value of the $s$-score), we decide on day $D$ to buy/sell at tomorrow's open (or close an existing position), see original paper.
Obviously, on day $D+1$, based on the new closing prices that you will observe end of day, you will be able to compute a new $s$-score by repeating the steps above (sticking to a 60-days trailing estimation window, which will now contain the most recent return corresponding to day $D+1$, while the oldest one corresponding to $D-61$ will have disappeared). Repeating this on different days allows you to plot a graph where the $s$-score evolves through time as in Figure 10 of the original article. | Simple question about Ornstein-Uhlenbeck process | The method discussed in the article you mention is directly inspired from the paper 'Statistical Arbitrage in the U.S. Equities Market' by Avellaneda & Lee (2008). Most of your questions are answered | Simple question about Ornstein-Uhlenbeck process
The method discussed in the article you mention is directly inspired from the paper 'Statistical Arbitrage in the U.S. Equities Market' by Avellaneda & Lee (2008). Most of your questions are answered in the Appendix p.44.
Suppose we are at the end of trading day $D$. In the Avellaneda & Lee paper the s-score is defined as
$$ s = \frac{X - m}{\sigma_{eq}} $$
It is a measure of the distance to equilibrium of the current cointegration residual $X$ i.e. how far away is $X$ (in standard deviation units) from the theoretical equilibrium value $m$ predicted by a mean-reversion model yet to be estimated.
In practice, using the notation of your paper and considering a 60-day trailing estimation window, $X$ evaluates as
$$ X := \sum_{j=t_1}^{t_{60}} \epsilon_j $$
with $\{\epsilon_j\}$ the regression residuals computed as
$$ \epsilon_j = R_j^1 - \left(\hat{\beta}_0 + \hat{\beta}R_j^2\right),\ \ \ \forall j=t_1,...,t_{60} $$
where $\{R_j^i\}$ figures the 60 most recent close-to-close returns of the 2 securities of interest in case of pairs trading.
If $\hat{\beta}_0$ and $\hat{\beta}$ are OLS estimators then it is well-known that the regression residuals $\{\epsilon_j\}$ will be uncorrelated and have zero mean such that $X = 0$ and the s-score becomes
$$ s = -\frac{m}{\sigma_{eq}} $$
where the parameters $m$ (stationary mean) and $\sigma_{eq}$ (square root of stationary variance) of the Ornstein-Uhlenbeck (OU) process $(X_t)_{t\geq0}$ are estimated as follows:
Calculate the auxiliary values $\{X_k\}$ as follows
$$X_k = \sum_{j=t_1}^{k} \epsilon_j,\ \ \ \forall k = t_1,\dots,t_{60}$$
where the $\epsilon_j$ are the factor model regression residuals discussed above (note that $X = X_{60}$ by construction).
Calibrate an AR(1) model to the previous values
$$X_{k+1} = a + b X_k + \zeta_{k+1},\ \ \ \forall k = t_1,\dots,t_{59}$$
i.e. perform yet another linear (lagged) regression to determine $\hat{a}$, $\hat{b}$ (intercept and slope) along with the regression residuals $\{\zeta_{k}\}$.
Comparing the Euler discretisation of the Ornstein-Uhlenbeck SDE (see equation 2.3 in your paper) to the AR(1) model above, infer the parameters of the OU process from $\hat{a}$, $\hat{b}$ and the auxiliary regression residuals $\{\zeta_{k}\}$ (see the original paper). At the end you should end up with
$$ m = \frac{\hat{a}}{1-\hat{b}} $$
$$ \sigma_{eq} = \sqrt{\frac{\text{var}(\{\zeta_{k}\})}{1-\hat{b}^2}}$$
The idea is then that, depending on the strength of the mean-reversion signal (= the value of the $s$-score), we decide on day $D$ to buy/sell at tomorrow's open (or close an existing position), see original paper.
Obviously, on day $D+1$, based on the new closing prices that you will observe end of day, you will be able to compute a new $s$-score by repeating the steps above (sticking to a 60-days trailing estimation window, which will now contain the most recent return corresponding to day $D+1$, while the oldest one corresponding to $D-61$ will have disappeared). Repeating this on different days allows you to plot a graph where the $s$-score evolves through time as in Figure 10 of the original article. | Simple question about Ornstein-Uhlenbeck process
The method discussed in the article you mention is directly inspired from the paper 'Statistical Arbitrage in the U.S. Equities Market' by Avellaneda & Lee (2008). Most of your questions are answered |
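Here is a condensed Python/numpy sketch of the recipe in the answer above — hedge regression, cumulative residuals, AR(1) fit, implied $m$ and $\sigma_{eq}$, and the resulting s-score. The synthetic "returns" are placeholders built so that the cumulative residual really is mean-reverting; as the answer notes, with OLS residuals the $X$ term is zero, so the score reduces to $-m/\sigma_{eq}$:

import numpy as np

rng = np.random.default_rng(3)
n = 60
# fabricate well-behaved data: the cumulative residual follows an AR(1)
X_true = np.zeros(n + 1)
for t in range(n):
    X_true[t + 1] = 0.01 + 0.7 * X_true[t] + 0.05 * rng.normal()
R2 = rng.normal(scale=0.01, size=n)
R1 = 0.001 + 1.2 * R2 + np.diff(X_true)          # stand-ins for the two securities' returns

# hedge regression R1 ~ R2, keep residuals
A = np.column_stack([np.ones(n), R2])
eps = R1 - A @ np.linalg.lstsq(A, R1, rcond=None)[0]

# auxiliary cumulative sums and AR(1) fit X_{k+1} = a + b X_k + zeta
Xk = np.cumsum(eps)
B = np.column_stack([np.ones(n - 1), Xk[:-1]])
a, b = np.linalg.lstsq(B, Xk[1:], rcond=None)[0]
zeta = Xk[1:] - B @ np.array([a, b])

# implied OU parameters and the s-score (X = 0 with OLS residuals)
m = a / (1.0 - b)
sigma_eq = np.sqrt(np.var(zeta) / (1.0 - b ** 2))
print("s-score:", -m / sigma_eq)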
55,078 | Does Bonferroni apply to inference from parametric and non-parametric models? | As umm... Mr. Bonferroni notes, his correction works on any p-values, regardless of their source. However, there are other procedures, like Holm's, which are uniformly more powerful; subject to certain other restrictions, like positive dependence between the tests, other methods are even more powerful still. Sorry, Carlo.
These corrections are intended to preserve the familywise error rate, which essentially means that you want to keep the probability of making one or more errors within a "family" of tests at the same level you would accept for a single test. To do this intelligently, you need to define the families appropriately. Based on your description, it sounds like you have at least two families of tests: the first consisting of mood data and the 2nd consisting of the self-report scales. For example, you might be testing whether some manipulation causes subjects in the treatment group (vs. appropriate controls) to 1) experience a change in mood and 2) be aware of said change.
Accordingly, I'd consider adjusting the first set with $n=6$ and the latter with $n=4$. Since one of the self-report values is the overall scale, you could potentially argue that it "protects" the three sub-scores too, so perhaps I'd consider reporting the overall score (uncorrected) and three sub-scores corrected with $n=3$, especially if the overall test is significant. If the Likert (or self-report) scales all attempt to measure the same thing, I'd be tempted to apply an omnibus test (e.g., an ANOVA) first, which might give you more power.
However, I think applying the multiple comparisons to both sets of data (with $n=10$) is over-conservative unless these tests are intimately linked. | Does Bonferroni apply to inference from parametric and non-parametric models? | As umm... Mr. Bonferroni notes, his correction works on any p-values, regardless of their source. However, there are other procedures, like the Holm's, which are uniformly more powerful; subject to ce | Does Bonferroni apply to inference from parametric and non-parametric models?
As umm... Mr. Bonferroni notes, his correction works on any p-values, regardless of their source. However, there are other procedures, like Holm's, which are uniformly more powerful; subject to certain other restrictions, like positive dependence between the tests, other methods are even more powerful still. Sorry, Carlo.
These corrections are intended to preserve the familywise error rate, which essentially means that you want to keep the probability of making one or more errors within a "family" of tests at the same level you would accept for a single test. To do this intelligently, you need to define the families appropriately. Based on your description, it sounds like you have at least two families of tests: the first consisting of mood data and the 2nd consisting of the self-report scales. For example, you might be testing whether some manipulation causes subjects in the treatment group (vs. appropriate controls) to 1) experience a change in mood and 2) be aware of said change.
Accordingly, I'd consider adjusting the first set with $n=6$ and the latter with $n=4$. Since one of the self-report values is the overall scale, you could potentially argue that it "protects" the three sub-scores too, so perhaps I'd consider reporting the overall score (uncorrected) and three sub-scores corrected with $n=3$, especially if the overall test is significant. If the Likert (or self-report) scales all attempt to measure the same thing, I'd be tempted to apply an omnibus test (e.g., an ANOVA) first, which might give you more power.
However, I think applying the multiple comparisons to both sets of data (with $n=10$) is over-conservative unless these tests are intimately linked. | Does Bonferroni apply to inference from parametric and non-parametric models?
As umm... Mr. Bonferroni notes, his correction works on any p-values, regardless of their source. However, there are other procedures, like the Holm's, which are uniformly more powerful; subject to ce |
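Since Holm's step-down procedure keeps coming up in the answers above as the uniformly-more-powerful drop-in for Bonferroni, here is a small self-contained Python sketch of it (I believe statsmodels offers the same adjustment, but the hand-rolled version makes the logic explicit; the p-values are invented):

import numpy as np

def holm_adjust(pvals):
    # Holm step-down: sort p-values, multiply by (m - rank), enforce monotonicity, cap at 1
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    adjusted = np.empty(m)
    running_max = 0.0
    for rank, idx in enumerate(np.argsort(p)):
        running_max = max(running_max, (m - rank) * p[idx])
        adjusted[idx] = min(1.0, running_max)
    return adjusted

print(holm_adjust([0.01, 0.04, 0.03, 0.005]))   # compare each adjusted value to your alpha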
55,079 | Does Bonferroni apply to inference from parametric and non-parametric models? | Bonferroni works for any p-values. It doesn't care where they came from. | Does Bonferroni apply to inference from parametric and non-parametric models? | Bonferroni works for any p-values. It doesn't care where they came from. | Does Bonferroni apply to inference from parametric and non-parametric models?
Bonferroni works for any p-values. It doesn't care where they came from. | Does Bonferroni apply to inference from parametric and non-parametric models?
Bonferroni works for any p-values. It doesn't care where they came from. |
55,080 | Does Bonferroni apply to inference from parametric and non-parametric models? | If you want to correct then there are stronger methods than that named after Bonferroni like Holm's method which is always applicable when Bonferroni is and is never less powerful. However is that really what you want to do if you have a precise hypothesis about which will achieve some arbitrary level of statistical significance? | Does Bonferroni apply to inference from parametric and non-parametric models? | If you want to correct then there are stronger methods than that named after Bonferroni like Holm's method which is always applicable when Bonferroni is and is never less powerful. However is that rea | Does Bonferroni apply to inference from parametric and non-parametric models?
If you want to correct then there are stronger methods than that named after Bonferroni like Holm's method which is always applicable when Bonferroni is and is never less powerful. However is that really what you want to do if you have a precise hypothesis about which will achieve some arbitrary level of statistical significance? | Does Bonferroni apply to inference from parametric and non-parametric models?
If you want to correct then there are stronger methods than that named after Bonferroni like Holm's method which is always applicable when Bonferroni is and is never less powerful. However is that rea |
55,081 | Meaning of cross validation | You are right (for K-fold CV)
K-fold CV is a metric that would provide a better measure of accuracy for your model than a regular single training-testing split. It would not do more than that. It would have no effect on the learning itself.
However, cross validation (or just validation) is used in conjunction with learning in different learning algorithms in different ways that will indeed aid in better learning by preventing overfitting. A separate set of samples, called the validation set, disjoint from the training and testing sets, is used for this purpose.
For instance, in ANNs, the accuracy on the validation set is measured at every epoch to halt the training at the point of overfitting. This procedure is called early stopping.
Similarly validation sets are used to prune decision trees after allowing them to overfit.
(In the plot being described here, squares show the accuracy on the training set while triangles show the accuracy on the validation set.)
Don't get confused between the two terms, as they are used in different contexts. | Meaning of cross validation | You are right (for K-fold CV)
K-fold CV is a metric that would provide a better measure of accuracy for your model than a regular single training-testing split. It would not do more than that. It wou | Meaning of cross validation
You are right (for K-fold CV)
K-fold CV is a metric that would provide a better measure of accuracy for your model than a regular single training-testing split. It would not do more than that. It would have no effect on the learning itself.
However, cross validation (or just validation) is used in conjunction with learning in different learning algorithms in different ways that will indeed aid in better learning by preventing overfitting. A separate set of samples, called the validation set, disjoint from the training and testing sets, is used for this purpose.
For instance, in ANNs, the accuracy on the validation set is measured at every epoch to halt the training at the point of overfitting. This procedure is called early stopping.
Similarly validation sets are used to prune decision trees after allowing them to overfit.
(In the plot being described here, squares show the accuracy on the training set while triangles show the accuracy on the validation set.)
Don't get confused between the two terms, as they are used in different contexts. | Meaning of cross validation
You are right (for K-fold CV)
K-fold CV is a metric that would provide a better measure of accuracy for your model than a regular single training-testing split. It would not do more than that. It wou |
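To illustrate the "validation set guides learning" point (early stopping) from the answer above without tying it to any particular deep-learning framework, here is a toy Python/numpy sketch with plain gradient descent on logistic regression; the data, learning rate and patience are all made up:

import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(300, 5))
y = (X @ rng.normal(size=5) + rng.normal(size=300) > 0).astype(float)
Xtr, ytr, Xva, yva = X[:200], y[:200], X[200:], y[200:]     # train / validation split

def log_loss(w, X, y):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    return -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))

w = np.zeros(5)
best_val, best_w, patience = np.inf, w.copy(), 0
for epoch in range(2000):
    p = 1.0 / (1.0 + np.exp(-(Xtr @ w)))
    w -= 0.1 * Xtr.T @ (p - ytr) / len(ytr)                 # gradient step on training data only
    val = log_loss(w, Xva, yva)                             # monitor the held-out validation set
    if val < best_val - 1e-6:
        best_val, best_w, patience = val, w.copy(), 0       # best_w holds the weights to keep
    else:
        patience += 1
        if patience >= 25:                                  # stop once validation stops improving
            break
print(epoch, best_val)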
55,082 | Meaning of cross validation | Yes, you are correct. Cross validation will give you an idea of your model's out-of-sample performance. It does not modify the model as such.
Typically, you will look at multiple models when you want to predict. (The situation is somewhat different when doing inferential statistics.) You will cross validate each model and pick the one that yields the lower cross validated error. In this way, you can examine the effects of adding, removing or transforming predictors.
Cross validation is a tool to help you understand your model better. You will need to do the modifying of your model yourself - guided by what cross validation tells you. | Meaning of cross validation | Yes, you are correct. Cross validation will give you an idea of your model's out-of-sample performance. It does not modify the model as such.
Typically, you will look at multiple models when you want | Meaning of cross validation
Yes, you are correct. Cross validation will give you an idea of your model's out-of-sample performance. It does not modify the model as such.
Typically, you will look at multiple models when you want to predict. (The situation is somewhat different when doing inferential statistics.) You will cross validate each model and pick the one that yields the lower cross validated error. In this way, you can examine the effects of adding, removing or transforming predictors.
Cross validation is a tool to help you understand your model better. You will need to do the modifying of your model yourself - guided by what cross validation tells you. | Meaning of cross validation
Yes, you are correct. Cross validation will give you an idea of your model's out-of-sample performance. It does not modify the model as such.
Typically, you will look at multiple models when you want |
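A minimal scikit-learn sketch of the workflow described above — estimating out-of-sample error for two candidate models with K-fold CV and keeping the better one; the dataset and the two models are placeholders:

from sklearn.datasets import load_diabetes
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.model_selection import cross_val_score

X, y = load_diabetes(return_X_y=True)
for model in (LinearRegression(), Ridge(alpha=1.0)):
    scores = cross_val_score(model, X, y, cv=5, scoring="neg_mean_squared_error")
    # lower mean squared error = better expected out-of-sample performance
    print(type(model).__name__, -scores.mean())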
55,083 | Maximum Likelihood Estimator for equicorrelation model | The useful formulas have been given in the answer by @Alecos
Papadopoulos, which I will refer to. Note that $\mathbf{\Sigma}$ is
positive definite iff $\rho_{\text{min}} < \rho < 1$ with
$\rho_{\text{min}} := - 1/(p-1)$.
For the ML estimation you can concentrate the vector $\boldsymbol{\mu}$
and the scalar $\sigma^2$ out of the log-likelihood function by replacing them by
their ML estimates, namely $\widehat{\boldsymbol{\mu}} = \bar{\mathbf{X}}$ and
$$
\widehat{\sigma}^2 = \frac{1}{np} \, \sum_{i = 1}^n
\left[\mathbf{X}_i - \bar{\mathbf{X}} \right]^\top \boldsymbol{\Sigma}^{\star -1}
\left[\mathbf{X}_i - \bar{\mathbf{X}}\right]
$$
where $\boldsymbol{\Sigma}^\star:= \sigma^{-2} \boldsymbol{\Sigma}$. Thus the estimation
boils down to the one-dimensional optimisation of a function of $\rho$.
An interesting point is that for a vector $\mathbf{z}$ of length $p$ we have
$$
\mathbf{z}^\top \boldsymbol{\Sigma}^{\star -1} \mathbf{z} = \frac{1}{1-\rho}
\left[\mathbf{z}^\top \mathbf{z} - \nu (\mathbf{1}_p^\top \mathbf{z})^2 \right]
$$
where $\mathbf{1}_p$ is a vector of $p$ ones and $\nu:= \rho\, / \, [1 + (p-1) \rho]$.
So taking $\mathbf{z}_i:= \mathbf{X}_i - \bar{\mathbf{X}}$ we can use the sums
$$
A := \sum_i \mathbf{z}_i^\top \mathbf{z}_i, \qquad
B := \sum_i (\mathbf{1}_p^\top \mathbf{z}_i)^2
$$
which do not depend on the parameter.
The concentrated log-likelihood obtained after replacing $\boldsymbol{\mu}$ and $\sigma^2$
by their estimate is
$$
\ell_{\text{c}}(\rho) = -\frac{np}{2} \, \log(2 \pi) -
\frac{np}{2}\,\log \widehat{\sigma}^2 -
\frac{n}{2}\,\log|\boldsymbol{\Sigma}^\star| -
\frac{np}{2}.
$$
which is easily maximised.
##' MLE of a normal vector with compound symmetric covariance.
##' @title Compound symmetry
##' @param X A matrix with n row and p columns.
##' @return A list of ML Estimates.
MLE <- function(X) {
n <- nrow(X)
p <- ncol(X)
p1 <- p - 1
rho.min <- - 1 / p1
## subtract column means. We could use 'scale' as well
xbar <- apply(X, 2, mean)
Xcent <- sweep(X, MARGIN = 2, STATS = xbar, FUN = "-")
## sum of 'n' crossprods and sum of 'n' squared sum of 'p'
## components
A <- sum(apply(Xcent, MARGIN = 1, FUN = crossprod))
B <- sum(apply(Xcent, MARGIN = 1, sum)^2)
## concentrated (or profiled) deviance = - 2 log L, to be
## minimised
deviance <- function(rho1) {
nu <- rho1 / (1.0 + p1 * rho1)
sigma2.hat <- (A - nu * B) / n / p / (1.0 - rho1)
dev <- n * (p * log(sigma2.hat) + p1 * log(1.0 - rho1)
+ log(1.0 + p1 * rho1) + p)
attr(dev, "sigma2.hat") <- sigma2.hat
dev
}
opt <- optimise(deviance, interval = c(rho.min, 1.0))
list(mu = xbar,
sigma2 = attr(opt$objective, "sigma2.hat"),
rho = opt$minimum,
loglik = -opt$objective[1] / 2)
}
## Now try the function change/remove set.seed for other examples
set.seed(123)
p <- 10L; n <- 1000
rho.min <- -1 / (p - 1)
## draw 'rho', 'mu' and 'sigma2'
rho <- runif(1, min = rho.min, max = 1)
mu <- rnorm(p)
sigma2 <- rexp(1)
## build a p * p covariance matrix 'Sigma'
Sigma <- matrix(rho, nrow = p, ncol = p)
diag(Sigma) <- 1
Sigma <- sigma2 * Sigma
## build the matrix 'X'
G <- chol(Sigma)
X <- matrix(rnorm(n * p), nrow = n, ncol = p) %*% G
X <- sweep(X, MARGIN = 2, STATS = mu, FUN = "+")
##
MLE(X) | Maximum Likelihood Estimator for equicorrelation model | The useful formulas have been given in the answer by @Alecos
Papadopoulos, which I will refer to. Note that $\mathbf{\Sigma}$ is
positive definite iff $\rho_{\text{min}} < \rho < 1$ with
$\rho_{\text{ | Maximum Likelihood Estimator for equicorrelation model
The useful formulas have been given in the answer by @Alecos
Papadopoulos, which I will refer to. Note that $\mathbf{\Sigma}$ is
positive definite iff $\rho_{\text{min}} < \rho < 1$ with
$\rho_{\text{min}} := - 1/(p-1)$.
For the ML estimation you can concentrate the vector $\boldsymbol{\mu}$
and the scalar $\sigma^2$ out of the log-likelihood function by replacing them by
their ML estimates, namely $\widehat{\boldsymbol{\mu}} = \bar{\mathbf{X}}$ and
$$
\widehat{\sigma}^2 = \frac{1}{np} \, \sum_{i = 1}^n
\left[\mathbf{X}_i - \bar{\mathbf{X}} \right]^\top \boldsymbol{\Sigma}^{\star -1}
\left[\mathbf{X}_i - \bar{\mathbf{X}}\right]
$$
where $\boldsymbol{\Sigma}^\star:= \sigma^{-2} \boldsymbol{\Sigma}$. Thus the estimation
boils down to the one-dimensional optimisation of a function of $\rho$.
An interesting point is that for a vector $\mathbf{z}$ of length $p$ we have
$$
\mathbf{z}^\top \boldsymbol{\Sigma}^{\star -1} \mathbf{z} = \frac{1}{1-\rho}
\left[\mathbf{z}^\top \mathbf{z} - \nu (\mathbf{1}_p^\top \mathbf{z})^2 \right]
$$
where $\mathbf{1}_p$ is a vector of $p$ ones and $\nu:= \rho\, / \, [1 + (p-1) \rho]$.
So taking $\mathbf{z}_i:= \mathbf{X}_i - \bar{\mathbf{X}}$ we can use the sums
$$
A := \sum_i \mathbf{z}_i^\top \mathbf{z}_i, \qquad
B := \sum_i (\mathbf{1}_p^\top \mathbf{z}_i)^2
$$
which do not depend on the parameter.
The concentrated log-likelihood obtained after replacing $\boldsymbol{\mu}$ and $\sigma^2$
by their estimate is
$$
\ell_{\text{c}}(\rho) = -\frac{np}{2} \, \log(2 \pi) -
\frac{np}{2}\,\log \widehat{\sigma}^2 -
\frac{n}{2}\,\log|\boldsymbol{\Sigma}^\star| -
\frac{np}{2}.
$$
which is easily maximised.
##' MLE of a normal vector with compound symmetric covariance.
##' @title Compound symmetry
##' @param X A matrix with n row and p columns.
##' @return A list of ML Estimates.
MLE <- function(X) {
n <- nrow(X)
p <- ncol(X)
p1 <- p - 1
rho.min <- - 1 / p1
## subtract column means. We could use 'scale' as well
xbar <- apply(X, 2, mean)
Xcent <- sweep(X, MARGIN = 2, STATS = xbar, FUN = "-")
## sum of 'n' crossprods and sum of 'n' squared sum of 'p'
## components
A <- sum(apply(Xcent, MARGIN = 1, FUN = crossprod))
B <- sum(apply(Xcent, MARGIN = 1, sum)^2)
## concentrated (or profiled) deviance = - 2 log L, to be
## minimised
deviance <- function(rho1) {
nu <- rho1 / (1.0 + p1 * rho1)
sigma2.hat <- (A - nu * B) / n / p / (1.0 - rho1)
dev <- n * (p * log(sigma2.hat) + p1 * log(1.0 - rho1)
+ log(1.0 + p1 * rho1) + p)
attr(dev, "sigma2.hat") <- sigma2.hat
dev
}
opt <- optimise(deviance, interval = c(rho.min, 1.0))
list(mu = xbar,
sigma2 = attr(opt$objective, "sigma2.hat"),
rho = opt$minimum,
loglik = -opt$objective[1] / 2)
}
## Now try the function change/remove set.seed for other examples
set.seed(123)
p <- 10L; n <- 1000
rho.min <- -1 / (p - 1)
## draw 'rho', 'mu' and 'sigma2'
rho <- runif(1, min = rho.min, max = 1)
mu <- rnorm(p)
sigma2 <- rexp(1)
## build a p * p covariance matrix 'Sigma'
Sigma <- matrix(rho, nrow = p, ncol = p)
diag(Sigma) <- 1
Sigma <- sigma2 * Sigma
## build the matrix 'X'
G <- chol(Sigma)
X <- matrix(rnorm(n * p), nrow = n, ncol = p) %*% G
X <- sweep(X, MARGIN = 2, STATS = mu, FUN = "+")
##
MLE(X) | Maximum Likelihood Estimator for equicorrelation model
The useful formulas have been given in the answer by @Alecos
Papadopoulos, which I will refer to. Note that $\mathbf{\Sigma}$ is
positive definite iff $\rho_{\text{min}} < \rho < 1$ with
$\rho_{\text{ |
55,084 | Maximum Likelihood Estimator for equicorrelation model | One can verify that the inverse of this matrix equals
$$\mathbf{\Sigma}^{-1} =\frac {1}{\sigma^2(1-\rho)}\left(\mathbf{I_p}-\frac{\rho}{1+(p-1)\rho}\mathbf{J_p}\right)$$
Its determinant is
$$|\mathbf{\Sigma}| = \sigma^{2p}\cdot [1+(p-1)\rho]\cdot (1-\rho)^{p-1}$$
(see Tong 1990, p.104)
I don't think it gets simpler than that (and it is not very messy either).
Beware of asymptotics (which are always prominent in maximum likelihood estimation), since the matrix may become singular at the limit. And MLE with a singular covariance matrix is a whole different ball-game. If I remember correctly, the existing theoretical results are arduous to follow in such a case. | Maximum Likelihood Estimator for equicorrelation model | One can verify that the inverse of this matrix equals
55,085 | Maximum Likelihood Estimator for equicorrelation model | There exist explicit (closed-form) solutions for the MLEs you are struggling to find. In fact, a little intuition, after the re-parametrization in step 1 shown below, reveals that the average of the diagonal elements of xx' is a reasonable estimator for the variance and the average of the off-diagonal elements of xx' is a reasonable estimator for the covariance (a small numerical sketch follows the steps below). But to be specific, follow these steps. For simplicity, let the mean vector be equal to the zero vector.
1) Parameterize the problem so that the diagonal elements of the variance-covariance matrix are each "a" and the off-diagonal elements are each "b".
2) Use well known formulas for the inverse and determinant of the matrix constructed in step 1 to set-up the likelihood equation.
3) Re-parameterize the problem again by letting c = a - b and d = a + b*(p-1).
4) Find the MLE of c and d (very easy): you should obtain c = (x'x)/(n-1) - (x'j)^2/(n(n-1)) and d = (x'j)^2/n
5) Use the invariance property of the MLE to show that a = ((n-1)*c + d)/n and b = (d-c)/p
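Here is the small numerical sketch promised above (an added illustration, assuming a known zero mean vector as in the steps): with S = X'X/n, averaging the diagonal gives an estimate of a, averaging the off-diagonal gives an estimate of b, and c and d then follow from the reparametrization.
p <- 5; n <- 200; a <- 2; b <- 0.6                 # arbitrary illustrative values
Sigma <- matrix(b, p, p); diag(Sigma) <- a
X <- matrix(rnorm(n * p), n, p) %*% chol(Sigma)    # zero-mean equicorrelated sample
S <- crossprod(X) / n                              # S = X'X / n
a.hat <- mean(diag(S))                             # average diagonal element
b.hat <- (sum(S) - sum(diag(S))) / (p * (p - 1))   # average off-diagonal element
c(a = a.hat, b = b.hat, c = a.hat - b.hat, d = a.hat + (p - 1) * b.hat)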
55,086 | Maximum Likelihood Estimator for equicorrelation model | I found this question in an exercise of Seber's Multivariate Observations (1984).
I don't have a proof but apparently the maximum likelihood estimators of $\sigma^2$ and $\rho$ are given by
$$\hat\sigma^2=\frac1p\sum_{j=1}^p s_{jj}$$
and $$\hat\rho=\frac1{\hat\sigma^2}\sum_{j\ne k}\frac{s_{jk}}{p(p-1)}\,,$$
where $s_{jk}$ is the $(j,k)$th element of $\boldsymbol S=\frac1{n-1}\sum\limits_{i=1}^n (\boldsymbol X_i-\boldsymbol{\overline X})(\boldsymbol X_i-\boldsymbol{\overline X})^T$.
The relevant extract appears on page 95 of the book.
The hint given in the exercise is to find $\boldsymbol \Sigma^{-1}$ (which exists for $\rho\in \left(-\frac1{p-1},1\right)$ as pointed out by @Yves) and $\det\boldsymbol \Sigma$, as given in the answer by @Alecos Papadopoulos.
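The two formulas above are straightforward to compute from the sample covariance matrix; a short R sketch (an added check with arbitrary illustrative parameter values) is:
p <- 4; n <- 500; sigma2 <- 1.5; rho <- 0.4
Sigma <- sigma2 * ((1 - rho) * diag(p) + rho * matrix(1, p, p))
X <- matrix(rnorm(n * p), n, p) %*% chol(Sigma)
S <- cov(X)                                        # uses the n - 1 denominator, as above
sigma2.hat <- mean(diag(S))                        # average of the s_jj
rho.hat <- (sum(S) - sum(diag(S))) / (p * (p - 1)) / sigma2.hat  # average s_jk over sigma2.hat
c(sigma2 = sigma2.hat, rho = rho.hat)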
55,087 | What is the domain of this random variable? | Often orthography hints at meaning. With this as a point of departure, along with an understanding of what a random variable is, we can figure out the rest. Rather than just giving an interpretation, I will first take you through the reasoning used to parse these expressions: that might help you make sense of related material elsewhere in the book.
Random variables
We might take the capitalization of $X_1, \ldots, X_p$ and $Y$ to be hints that they are intended to denote random variables. Recall that abstractly this means there is a probability space on some (abstract, typically unnamed) underlying set $\Omega$ and that each of these quantities is a measurable real-valued function with domain $\Omega$. We can write this with arrows in the form
$$X_1, \ldots, X_p, Y: \Omega \to \mathbb{R}.$$
$\Omega$ is a mathematical representation of the real-world objects in which we are interested. These objects typically are not themselves mathematical and have no numbers intrinsically associated with them. They could be people, bugs, stars, economies, production processes, or whatever. The $X_i$ and $Y$ represent numerical properties of these objects.
Vectors
The random variable $X$ is vector-valued. Formally,
$$X:\Omega\to \mathbb{R}^p$$
is defined in terms of the $p$ canonical projection functions
$$\eqalign{
\pi_i:\mathbb{R}^p \to \mathbb{R} \\
\pi_i(x_1, \ldots, x_{i-1}, x_i, x_{i+1}, \ldots, x_p) = x_i
}$$
as being the unique function for which $\pi_i \circ X = X_i$. With arrows we may depict it thus:
$$X_i: \Omega \xrightarrow{X} \mathbb{R}^p \xrightarrow{\pi_i} \mathbb{R}.$$
$X$ conveniently gathers the $p$ predictors $X_i$ into a single mathematical object.
Responses, model functions, and errors
In the expression
$$Y = f(X) + \epsilon,$$
we know that the value of $Y$ is a real number, whence both $f(X)$ and $\epsilon$ are real numbers. The only reasonable meaning is that $f:\mathbb{R}^p \to \mathbb{R}$, thus:
$$f:\mathbb{R}^p \to \mathbb{R.}$$
The model function $f$ simply is a function of $p$ real numbers. If you give me the values of $p$ predictors $X_i(\omega)$, $i=1, \ldots, p$, then $f$ will give me some guess about the corresponding value of $Y(\omega)$.
The quantity $\epsilon$ is unlikely just to be a fixed number: it represents the difference between $Y$ and what $f$ predicts from $X$. Therefore we should understand that
$$\epsilon:\Omega \to \mathbb{R}$$
is a random variable (the "error") and the "$+$" represents pointwise addition of functions in the usual way:
$$Y(\omega) = (f(X) + \epsilon)(\omega) = f(X(\omega)) + \epsilon(\omega)$$
for all $\omega\in\Omega$. This is the usual way to add the random variables $f\circ X$ and $\epsilon$.
If you want to be formal about it you will collect $f$ and $\epsilon$ into an ordered pair of functions and identify that with a function into the Cartesian product of their ranges, giving this picture of $Y$:
$$Y: \Omega \xrightarrow{(X, \epsilon)} \mathbb{R}^p \times \mathbb{R} \xrightarrow{(f, \operatorname{id})} \mathbb{R}\times\mathbb{R}\xrightarrow{+}\mathbb{R}.\tag{1}$$
"$\operatorname{id}$" denotes the identity function on $\mathbb{R}$, $x\to x$ for all numbers $x$.
Predictors
Since $\hat f$ is an "approximation function," we should try to understand it as being some version of $f$: it's a guess concerning what $f$ is. Consequently $\hat f$ also must be a function of $p$ variables. Let's draw the arrows for $\hat{Y}=\hat f(X)$:
$$\hat{Y}: \Omega \xrightarrow{X} \mathbb{R}^p \xrightarrow{\hat f} \mathbb{R}.\tag{2}$$
This clearly shows that $\hat Y = \hat{f}\circ X$ is the same kind of object as $Y$: namely, a random variable.
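A toy sketch in R (an added illustration, not part of the formal argument) may make the compositions concrete: $\Omega$ is a small finite set of objects, $X$ maps each object to a $p$-vector, and $\hat Y = \hat f \circ X$ is again a function on $\Omega$.
Omega <- c("ann", "bob", "cho")                    # abstract objects, not numbers
X <- function(omega) switch(omega,                 # X: Omega -> R^2
    ann = c(1.2, 0.5), bob = c(0.3, 2.0), cho = c(1.0, 1.0))
fhat <- function(x) 2 * x[1] - x[2]                # fhat: R^2 -> R
Yhat <- function(omega) fhat(X(omega))             # Yhat = fhat o X: Omega -> R
sapply(Omega, Yhat)                                # one prediction per object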
Answers
Answer 1
The arrows in $(1)$ and $(2)$ show that both $Y$ and $\hat Y$ are random variables: their domains are $\Omega$.
Answer 2
When an "observation" of $X$ is drawn, it gives us a "realization" $x_1$. Let's do this $n \ge 0$ times (often independently, but not necessarily), producing a sample $x_1, x_2, \ldots, x_n$. Each of the $x_i$ is a $p$-vector. The expressions $f(x_i)$ and $\hat f(x_i)$ require no additional parsing: by definition, $f$ and $\hat f$ are functions whose domains are $\mathbb{R}^p$. Here, they have been applied to a specific element $x_i \in \mathbb{R}^p$.
Answer 3
The expression "$f(X_1, \ldots, X_p)$" does make sense: it represents $f\circ X: \Omega\to\mathbb{R}$. As the arrow indicates, it is applied to elements $\omega\in\Omega$, not to individual vectors $x_i$. Although we could understand "$(f(X_1, \ldots, X_p))(\omega)$" in this way, "$(f(X_1, \ldots, X_p))(x_i)$" would be nonsense unless we stipulated that $\Omega=\mathbb{R}^p$. Although sometimes this is done, and many textbooks (especially of the most elementary sort) can be read in this way, conceptually it is clearer not to confuse objects in $\Omega$ with the $p$-tuples of numbers used to characterize some of their properties. Far from being "loose," this is fairly rigorous notation.
55,088 | Can I use the Bhattacharyya distance as an acceptance criterion for Approximate Bayesian Computation? | Since ABC is an approximate method, the simplest answer to your question is that you can use any distance you find to your taste! Provided it is a true distance, the basic justification holds, namely that
$$\pi_\epsilon(\theta|y^{\text{obs}})=\pi(\theta|d(y(\theta), y^{\text{obs}})<\epsilon),$$ where $y(\theta)\sim f(y|\theta)$ is the pseudo-data, converges to $$\pi(\theta|y^{\text{obs}})$$ as $\epsilon$ goes to zero.
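For concreteness, here is a minimal ABC rejection sketch in R (an added illustration, using the sample mean as summary statistic and an absolute difference as the distance $d$; any other true distance could be plugged in at the same place):
set.seed(1)
y.obs <- rexp(50, rate = 2)                        # "observed" data
s.obs <- mean(y.obs)                               # summary statistic
eps <- 0.05; N <- 2e4
theta <- rexp(N, rate = 1)                         # draws from the prior on the rate
s.sim <- sapply(theta, function(th) mean(rexp(50, rate = th)))   # pseudo-data summaries
accepted <- theta[abs(s.sim - s.obs) < eps]        # keep draws with d(y(theta), y.obs) < eps
quantile(accepted, c(0.025, 0.5, 0.975))           # approximate posterior quantiles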
55,089 | Can I use the Bhattacharyya distance as an acceptance criterion for Approximate Bayesian Computation? | The Bhattacharyya distance is a distance between distributions, while what you need to conduct ABC is a distance between summary statistics (that is, vectors of numbers that summarise the information in the sample), unless your summary statistic is a functional approximation to the distribution, which I doubt. For this reason, people typically employ the Euclidean distance (more sophisticated and better distances have also been studied).
Have a look at the following references for more details on the ingredients of ABC:
https://darrenjw.wordpress.com/2013/03/31/introduction-to-approximate-bayesian-computation-abc/
http://www0.cs.ucl.ac.uk/staff/C.Archambeau/AIS/Talks/rwilkinson_ais08.pdf
https://www.youtube.com/watch?v=8TGkrkK6pq4
55,090 | Self-studying the normal linear model: what can I do with it? | This is by no means authoritative, but hopefully it gives you somewhere to start.
You've studied the model $Y = X\beta + \varepsilon$ with $\varepsilon \sim \mathcal N(0, \sigma^2 I)$.
There are many extensions of this model. Some are:
Explore different assumptions on $\varepsilon$. Maybe you only assume $E(\varepsilon) = 0$, $Var(\varepsilon) = D$. What happens if $D \neq \sigma^2 I_n$? If $D$ is diagonal you can get weighted least squares; if $D$ is not even diagonal then you can use generalized least squares (a small weighted least squares sketch follows at the end of this list). Also, what happens if $\varepsilon$ has specified but non-normal distributions (Laplace, for example)? This turns out to be closely related to considering loss functions other than squared loss.
Consider other distributions of $Y_i$. This can lead to GLMs
Explore penalized regressions. This is a big area. I'd start with the lasso and ridge regression.
Ridge regression and the lasso involve tuning parameters. This leads you to learning about cross validation. Or maybe you'd prefer to use information criteria like the AIC and BIC. There's a lot to study here, such as where exactly they come from and asymptotic relationships with different kinds of cross validations. Significance testing also gets tricky. Maybe you'll need to use the bootstrap more often.
Ridge regression and the lasso also have Bayesian interpretations. This can lead you to going full Bayesian and investigating the effects of various priors and multilevel models. This might require the use of MCMC which is a big thing in and of itself.
Very closely related to the previous point is the delightful world of mixed effects models, which is also very closely related to generalized least squares
Maybe you've got continuous predictors and want to use polynomials in them, but you don't like global polynomials. You can consider basis expansion methods like splines. I'd suggest you take a look at the Elements of Statistical Learning (available free online).
From here you could study kernel methods in general; this includes methods like the support vector machine.
Now you've got both simple interpretable tools like iid normal-errors linear regression, nonparametric flexible methods like SVM, and many things in between.
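As promised for the first item above, here is a tiny R starting point (an added sketch with simulated data): lm() accepts per-observation weights, which gives weighted least squares when the weights are proportional to $1/\operatorname{Var}(\varepsilon_i)$.
set.seed(1)
x <- runif(100); v <- 0.2 + x                      # error variance increasing in x
y <- 1 + 2 * x + rnorm(100, sd = sqrt(v))
ols <- lm(y ~ x)                                   # ordinary least squares
wls <- lm(y ~ x, weights = 1 / v)                  # weights proportional to 1 / Var(eps_i)
rbind(ols = coef(ols), wls = coef(wls))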
55,091 | Posteriors and Sample Sizes | The posterior is (generally) going to be a function of the sample size. One way of determining propriety of the posterior is to check the sample sizes for which the parameters in the posterior are not defined. The example below is one such construction.
$X_1, \dots, X_n \sim N(\theta, \sigma^2)$. For simplicity, let $\theta = 0$ be fixed (you could put a normal prior on $\theta$) and let $\sigma^2$ have prior $\pi(\sigma^2) \propto \left(\sigma^2 \right)^s$ for some $s$. Then the prior on $\sigma^2$ is improper.
The posterior can be found by
\begin{align*}
\pi(\sigma^2|\mathbf{x}) & \propto f(\mathbf{x}|\sigma^2)\pi(\sigma^2)\\
& \propto \left(\sigma^2\right)^s \prod_{i=1}^{n} (\sigma^2)^{-1/2} \exp\left \{-\dfrac{x_i^2}{2\sigma^2} \right \}\\
& = (\sigma^2)^{-n/2 + s+1 - 1} \exp \left\{-\dfrac{\sum x_i^2}{2 \sigma^2} \right \}
\end{align*}
Thus the posterior distribution is Inverse Gamma$(n/2 -s - 1, \sum x_i^2/2)$, which is proper only if
$$\dfrac{n}{2} - s - 1 > 0.$$
Thus for $s = 0$, it is improper when $n \leq 2$, but proper when $n \geq 3$.
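A quick numerical illustration in R (an added sketch): for $s=0$ the shape parameter is $n/2 - 1$, and in a proper case the posterior kernel integrates to $\Gamma(a)/b^{a}$.
s <- 0
for (n in 1:4) cat("n =", n, " shape =", n / 2 - s - 1,
                   " proper:", n / 2 - s - 1 > 0, "\n")
set.seed(7); n <- 4; x <- rnorm(n)                 # a proper case
a <- n / 2 - s - 1; b <- sum(x^2) / 2
post.kernel <- function(v) ifelse(v > 0, v^(s - n / 2) * exp(-b / v), 0)
integrate(post.kernel, 0, Inf)$value               # numerical integral of the kernel
gamma(a) / b^a                                     # analytic value (finite since a > 0)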
55,092 | Posteriors and Sample Sizes | A simple example of a prior remaining improper no matter how large the sample size is goes as follows: take the prior $\pi(\alpha)=\exp\{\alpha^2\}$ and an iid $\mathcal{E}(\alpha)$ sample [which is equivalent to the normal case when the mean is set to zero, since exponential and chi-square distributions are almost the same]. Then
$$\pi(\alpha|x_1,\ldots,x_n) \propto \alpha^n \exp\left\{-\alpha\sum_i x_i+\alpha^2\right\}$$
which does not integrate, no matter what $n$ is.
A more realistic example is provided by mixtures of distributions: for instance, take a sample from $$\frac{4}{5}\mathcal{N}(0,1)+\frac{1}{5}\mathcal{N}(\mu,\sigma^2)$$ and take the improper prior $\pi(\mu,\sigma)=1/\sigma$. For any sample size $n$ the posterior on $(\mu,\sigma)$ remains improper.
55,093 | Power of Meta Regression | The N for the main effect in a meta-analysis is something close to the number of people in all of the studies. That's a lot of people, hence a lot of power.
For a meta-regression, the N is much closer to the number of studies, which is a much smaller number. The actual hit in power that you take is a function of the degree of heterogeneity. It's equivalent to looking at a multilevel model (or clustered trial) with a very high ICC.
Here's a paper where we discuss some of the issues: Risk of bias: a simulation study of power to detect study-level moderator effects in meta-analysis. The graphs show the amount of power, for different moderator effects, numbers of studies, sample sizes, and heterogeneity. With a high value of tau^2, you need a huge number of large studies, and a large moderator effect, to have decent power.
55,094 | Power of Meta Regression | This is kind of a dogma for meta-analysts, yet there is not much out there.
I have researched Google, Google Scholar and PubMed, looking for power, metaregression, meta-regression.
These are the items I have found which could be of use:
Thompson and Higgins, Statistics in Medicine 2002
Higgins and Thompson, Statistics in Medicine 2004
Borenstein et al, Introduction to Meta-Analysis 2009
Whittington, Statalist 2009
Laurin, University of Notre Dame 2014
Lopez-Lopez et al, British Journal of Mathematical and Statistical Psychology 2014
This passage from Borenstein et al, Introduction to Meta-Analysis 2009, is pertinent to your question, despite being somewhat generic:
Power depends on the size of the effect and the precision with which
we measure the effect... For meta-regression this means that power
will increase as the magnitude of the relationship between the
covariate and effect size increases, and/or the precision of the
estimate increases... a key factor driving the precision of the
estimate will be the total number of individual subjects across all
studies and (for random effects) the total number of studies. While
there is a general perception that power for testing the main effect
is consistently high in meta-analysis, this perception is not correct
... and certainly does not extend to tests of subgroup differences or
to meta-regression. The failure to find a statistically significant
p-value when comparing subgroups or in meta-regression could mean that
the effect (if any) is quite small, but could also mean that the
analysis had poor power to detect even a large effect. One should
never use a nonsignificant finding to conclude that the true means in
subgroups are the same, or that a covariate is not related to effect
size.
My personal take is that many factors play a role, and so computing power/minimal sample size is not that straightforward. However, we may simply consider meta-regression as a type of weighted regression or mixed model, and so power considerations pertinent to them should be valid.
55,095 | Power of Meta Regression | People usually do a power calculation before starting work to establish the sample size, but in meta-analysis the sample size is seldom something you can choose. After the event, all the information about precision is contained in the confidence intervals for your coefficients, so power is now irrelevant.
55,096 | XGBoost (Extreme Gradient Boosting) or Elastic Net More Robust to Outliers | 1) Yes, boosted trees would more easily fit unknown non-linear effects or interactions than regularized linear regression. However, as soon as you are aware of some specific non-linearity you could simply transform the data to linearity and continue to use a linear learner.
2) That depends on how you train the models. If you're new to boosted trees, check out some tutorials on how to avoid overfitting. I cannot see from your plots what kind of cross-validation was used. Use a thorough outer cross-validation and perhaps compare the results to a random forest model. RF models are much easier to handle, and default settings are often near optimal. A crude rule of thumb: if your RF performs better than boosted trees (measured by outer cross-validation), you have either chosen suboptimal training parameters for your boosted trees model or your data is quite noisy.
55,097 | Speed up optimization in K-fold cross validation | Warning: this is essentially a non-answer.
If your $k$-fold cross validation is to be used for validation purposes, the independence of the optimization is actually crucial. If you use model parameters obtained on the full model for initializing any of the surrogate model trainings, that surrogate model is no longer independent of the full data set. I.e., there is dependence also between the surrogate model and its test data! The same is true for any two surrogate models of the cross validation: the test data of one surrogate model is always training data for the other.
It is quite possible that such an exploit of the assumed (!) similarity will lead to more similar models than you'd have obtained otherwise.
I'd therefore say that the set of sped-up optimized surrogate models is actually produced by a (slightly) different training algorithm, and therefore are different models.
One of the reasons why I actually do cross validation is to demonstrate that the obtained (surrogate) models are very similar, even if optimized/trained independently, i.e. I use (iterated/repeated) $k$-fold cross validation to measure model stability.
That is, the similarity you want to exploit for speed-up is actually something I consider part of the things to be measured/validated by the cross validation.
Update to answer @lacerbi's comments
Assuming vs. constructing similarity: We are talking of similarity here at at least 3 different levels - here's how I see this:
We construct very similar training data.
We assume (or hope) that very similar training data will lead to very similar models as well.
However, this is not guaranteed: we call models that have this characteristic stable (wrt. small differences/perturbations in the training data).
Likewise we assume that the predictions of the models are very similar (for the same test cases). This is an assumption that underlies the whole idea of resampling validation, it allows us to use the predictions of the cross validation models as surrogate for independent test case predictions of the model in question.
(I distinguish between 2 and 3 as I work with vibrational spectra where the physical data generation process leads to correlations which can cause model parameters to be less stable than the corresponding predictions)
it seems unsmart to me to throw away this information
This is one very valid point of view. However, double use of data will cause dependence. Which use you put the data to is up to you to decide. All I'm saying is that if you use the data to generate good starting values for your training, it is used up as far as validation with independent data is concerned.
As far as validating the model you get by using all data for training is concerned: you state that we can assume $k$ to be large. You explain that surrogate models trained on $\frac{k-1}{k}$ of the data set are actually unstable (see also below). Now I'd argue that the larger $k$, the less plausible is the assumption that the model trained on all data is stable (found the global minimum) given that (some) surrogate models had problems finding the global minimum. If possible, the validation procedure should therefore show the instability of the surrogate models: such instability is no "accident" of the surrogate models - it is a problem that applies to the model trained on all data as well, and there you do not have an initialization similar to the one you propose for the surrogate models.
assume independence which is not there.
The way I see cross validation, it traditionally does not assume independence but rather almost perfect similarity between the models: IMHO it assumes
"surrogate models" trained on $\frac{k - 1}{k}$ of the data are almost the same as the model trained on all the data.
As 1. regularly shows signs of violation (such as the well-known pessimistic bias), the weaker assumption is that at least all $k$ surrogate models are similar enough to each other, so that we are allowed to pool their testing results.
Independence is only assumed between each of the surrogate models and its corresponding test set.
not exchanging information between training folds might prevent one or more folds from finding the global optimum (this happened to me with real data)
This is a concern about the training algorithm/its resulting models being unstable wrt. initialization and also wrt. to the actual training samples.
This is an important characteristic of the model. One of the advantages of cross validation is that it can (to some extent) measure such an instability. This possibility is of course the more important, the more unstable your models are.
In other words, I'd see saving computational time by such an initialization as a legitimate and rather harmless shortcut as long as there is evidence that stability of the solutions is not a concern. But in your case the evidence wrt. model stability is rather that instability is a concern, and therefore I do not recommend the proposed shortcut.
Note that model aggregation as one way of dealing with unstable models does pretty much the opposite of your idea: instead of forcing the solution towards one of the unstable solutions, aggregating tries to cover as much as possible the actually occuring variation, and aggregates (averages) the solutions.
bias-variance-tradeoff
The variance reduction you propose in the extreme corresponds to picking one of the solutions, say, that of the 1st fold. I don't see how that can help very much. However, I do see that e.g. picking the most frequent solution (or some kind of median or average model) can help.
The point here is that we are then actually talking about some kind of aggregated model, which is something different from the individual models that enter this aggregation. And you should also be very clear about your proposed aggregation scheme invalidating the out-of-bag shortcut for calculating performance estimates because you traded the independence of the test data for better training.
55,098 | Speed up optimization in K-fold cross validation | There is a generic framework for improving cross-validation [2]. It does not reuse optimal values obtained on other folds (@cebeleites already mentioned the dangers of such an approach), but instead uses relations that may exist between models trained on different datasets and defines the notion of "merging" two models.
If merging can be performed independently of the size of the set the model was trained on, the $k$-fold cross validation can be performed in $O(n+k)$ operations.
Think for example about two naive Bayes models (i.e. naive Bayes models trained on two different training sets). Multiplying the conditional probabilities of each model (and reweighting each probability depending on the size of the training set) is equivalent to training the naive Bayes on the whole data. Nearest centroid methods enjoy the same properties.
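A tiny illustration of the merging idea for nearest centroid models (an added sketch, not taken from [2]): per-class centroids fitted on two disjoint subsets can be combined by a count-weighted average and reproduce the fit on the pooled data exactly.
set.seed(3)
x <- matrix(rnorm(200 * 2), 200, 2)
y <- sample(c("a", "b"), 200, replace = TRUE)
fit <- function(x, y) list(m = rowsum(x, y) / as.vector(table(y)), n = table(y))
merge.fits <- function(f1, f2) {
    n <- f1$n + f2$n
    list(m = (f1$m * as.vector(f1$n) + f2$m * as.vector(f2$n)) / as.vector(n), n = n)
}
i <- 1:100
f.full <- fit(x, y)
f.merged <- merge.fits(fit(x[i, ], y[i]), fit(x[-i, ], y[-i]))
max(abs(f.full$m - f.merged$m))                    # ~ 0: merging reproduces the full fit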
Unfortunately, there does not seem to be much research in this field...
Edit
As there are actually a lot of specialized methods, I ended up writing a short blog post on various ways to speed up cross validations for different models.
[2] M. Izbicki, “Algebraic classifiers: a generic approach to fast cross-validation, online training, and parallel training,”
55,099 | Initializing EM algorithm [duplicate] | Here is a self-contained exercise from our book Introducing Monte Carlo methods with R:
Consider a sample of size $n$ from a mixture distribution with unknown
weights, $$ X_i \sim \theta g(x) + (1-\theta) h(x),\quad i=1, \ldots,
n, $$ where $g(\cdot)$ and $h(\cdot)$ are known.
Introduce $Z_1, \ldots, Z_n$, where $Z_i$ indicates the distribution
from which $X_i$ has been drawn, so $$ X_i | Z_i=1 \sim g(x),\quad
X_i | Z_i=0 \sim h(x)\,. $$ Show that the complete-data likelihood
can be written as $$ L^c (\theta|{\mathbf x}, {\mathbf z})=\prod_{i=1}^n\left[z_ig(x_i)+(1-z_i)h(x_i)\right]
\theta^{z_i}(1-\theta)^{1-z_i}. $$
Show that $$\mathbb{E}[Z_i| \theta, x_i] = \theta g(x_i)/[\theta
g(x_i)+(1-\theta)h(x_i)]$$ and deduce that the EM sequence is given
by $$ \hat{\theta}_{(j+1)}=\frac{1}{n}\sum_{i=1}^n \frac{\hat{\theta}_{(j)} g(x_i)} {\hat{\theta}_{(j)}
g(x_i)+(1-\hat{\theta}_{(j)})h(x_i)}. $$
Examine the convergence properties of this EM algorithm on a
simulated dataset with $n=25$, $\theta=0.3$, $h(x)=\varphi(x)$, and
$g(x)=\varphi((x-2)/2)/2$, where $\varphi$ denotes the
$\mathcal{N}(0,1)$ density.
The EM algorithm has the property that the observed likelihood is increasing at each iteration. This means that "an EM sequence $\{\hat{\theta}_{(j)}\}_j$ converges to a stationary point of $L(\theta| {\mathbf x})$, but not necessarily to the maximum likelihood estimator or even to a local maximum. In practice, running the EM algorithm several times with different, randomly chosen starting points is thus recommended if one wants to avoid using a poor approximation to the true maximum."
For mixtures in particular, the number of local modes and saddle-points on the likelihood (surface) depends on the number of components (2 in your case) and on the number of clusters in your data. By clusters, I mean that if you generate a large enough sample from the mixture $$\sum_{i=1}^{10} 10^{-1}\mathcal{N}(i,10^{-1})$$ and apply EM to that sample, there will most likely be a multiple of 10 (local) modes on the likelihood surface.
This leads to the proposal of using subgroups of the sample to initialise the parameters to realistic values. Clustering algorithms can for instance be used, but this is not necessary, as random grouping can be considered as well. | Initializing EM algorithm [duplicate] | Here is a self-contained exercise from our book Introducing Monte Carlo methods with R:
Consider a sample of size $n$ from a mixture distribution with unknown
weights, $$ X_i \sim \theta g(x) + (1- | Initializing EM algorithm [duplicate]
Here is a self-contained exercise from our book Introducing Monte Carlo methods with R:
Consider a sample of size $n$ from a mixture distribution with unknown
weights, $$ X_i \sim \theta g(x) + (1-\theta) h(x),\quad i=1, \ldots,
n, $$ where $g(\cdot)$ and $h(\cdot)$ are known.
Introduce $Z_1, \ldots, Z_n$, where $Z_i$ indicates the distribution
from which $X_i$ has been drawn, so $$ X_i | Z_i=1 \sim g(x),\quad
X_i | Z_i=0 \sim h(x)\,. $$ Show that the complete-data likelihood
can be written as $$ L^c (\theta|{\mathbf x}, {\mathbf z})=\prod_{i=1}^n\left[z_ig(x_i)+(1-z_i)h(x_i)\right]
\theta^{z_i}(1-\theta)^{1-z_i}. $$
Show that $$\mathbb{E}[Z_i| \theta, x_i] = \theta g(x_i)/[\theta
g(x_i)+(1-\theta)h(x_i)]$$ and deduce that the EM sequence is given
by $$ \hat{\theta}_{(j+1)}=\frac{1}{n}\sum_{i=1}^n \frac{\hat{\theta}_{(j)} g(x_i)} {\hat{\theta}_{(j)}
g(x_i)+(1-\hat{\theta}_{(j)})h(x_i)}. $$
Examine the convergence properties of this EM algorithm on a
simulated dataset with $n=25$, $\theta=0.3$, $h(x)=\varphi(x)$, and
$g(x)=\varphi((x-2)/2)/2$, where $\varphi$ denotes the
$\mathcal{N}(0,1)$ density.
The EM algorithm has the property that the observed likelihood is increasing at each iteration. This means that "an EM sequence $\{\hat{\theta}_{(j)}\}_j$ converges to a stationary point of $L(\theta| {\mathbf x})$, but not necessarily to the maximum likelihood estimator or even to a local maximum. In practice, running the EM algorithm several times with different, randomly chosen starting points is thus recommended if one wants to avoid using a poor approximation to the true maximum."
For mixtures in particular, the number of local modes and saddle-points on the likelihood (surface) depends on the number of components (2 in your case) and on the number of clusters in your data. By clusters, I mean that if you generate a large enough sample from the mixture $$\sum_{i=1}^{10} 10^{-1}\mathcal{N}(i,10^{-1})$$ and apply EM to that sample, there will most likely be a multiple of 10 (local) modes on the likelihood surface.
This leads to the proposal of using subgroups of the sample to initialise the parameters to realistic values. Clustering algorithms can for instance be used, but this is not necessary, as random grouping can be considered as well. | Initializing EM algorithm [duplicate]
Here is a self-contained exercise from our book Introducing Monte Carlo methods with R:
Consider a sample of size $n$ from a mixture distribution with unknown
weights, $$ X_i \sim \theta g(x) + (1- |
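The following is a minimal R sketch of the exercise above, not code from the book: it simulates the stated setting ($n=25$, $\theta=0.3$, $h=\varphi$, $g(x)=\varphi((x-2)/2)/2$, i.e. a $\mathcal{N}(2,4)$ density) and iterates the EM sequence from several randomly chosen starting points, as the answer recommends; the function name `em_theta` and the stopping rule are illustrative choices. Recall that the E-step weight is $\mathbb{E}[Z_i\mid\theta,x_i]=P(Z_i=1\mid\theta,x_i)=\theta g(x_i)/\{\theta g(x_i)+(1-\theta)h(x_i)\}$, by Bayes' theorem.

```r
set.seed(123)
n     <- 25
theta <- 0.3
z     <- rbinom(n, 1, theta)                        # latent component indicators
x     <- ifelse(z == 1, rnorm(n, mean = 2, sd = 2), # draws from g when z = 1
                        rnorm(n))                   # draws from h when z = 0

g <- dnorm(x, mean = 2, sd = 2)   # g(x_i), known
h <- dnorm(x)                     # h(x_i), known

# EM sequence: theta_(j+1) = (1/n) sum_i theta_(j) g_i / (theta_(j) g_i + (1 - theta_(j)) h_i)
em_theta <- function(theta0, maxit = 500, tol = 1e-8) {
  th <- theta0
  for (j in seq_len(maxit)) {
    th_new <- mean(th * g / (th * g + (1 - th) * h))  # E-step and M-step in one line
    if (abs(th_new - th) < tol) break
    th <- th_new
  }
  th
}

# Run the algorithm from several randomly chosen starting points
sapply(runif(5), em_theta)
```

In this particular exercise the observed log-likelihood $\sum_i \log\{\theta g(x_i)+(1-\theta)h(x_i)\}$ is concave in the single unknown $\theta$, so every starting point leads to the same limit; the multimodality discussed in the answer arises when the component parameters are unknown as well, which is where careful initialisation (clustering or random grouping) matters.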
55,100 | Initializing EM algorithm [duplicate] | You're running a special case of EM for Gaussian mixture models, in which case the most frequently used initialization is k-means. That can be viewed as a kind of "hard-assignment" version of GMM clustering, and so starting from there is typically reasonable.
Since you're doing this in one dimension, there are efficient and guaranteed-to-find-the-optimum algorithms for k-means available, e.g. Ckmeans.1d.dp if you happen to use R. But if speed isn't a huge concern, standard k-means implementations will probably be fine. | Initializing EM algorithm [duplicate] | You're running a special case of EM for Gaussian mixture models, in which case the most frequently used initialization is k-means. That can be viewed as a kind of "hard-assignment" version of GMM clus | Initializing EM algorithm [duplicate]
You're running a special case of EM for Gaussian mixture models, in which case the most frequently used initialization is k-means. That can be viewed as a kind of "hard-assignment" version of GMM clustering, and so starting from there is typically reasonable.
Since you're doing this in one dimension, there are efficient and guaranteed-to-find-the-optimum algorithms for k-means available, e.g. Ckmeans.1d.dp if you happen to use R. But if speed isn't a huge concern, standard k-means implementations will probably be fine. | Initializing EM algorithm [duplicate]
You're running a special case of EM for Gaussian mixture models, in which case the most frequently used initialization is k-means. That can be viewed as a kind of "hard-assignment" version of GMM clus |
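Below is a minimal R sketch of the initialization strategy described in the answer above, using base R's `kmeans` rather than `Ckmeans.1d.dp` and a made-up simulated dataset: the hard k-means assignment supplies starting weights, means, and variances, which a plain two-component Gaussian-mixture EM then refines with soft responsibilities.

```r
set.seed(42)
# Illustrative 1-D data from a two-component Gaussian mixture
x <- c(rnorm(150, mean = 0, sd = 1), rnorm(100, mean = 4, sd = 1.5))

# Hard-assignment initialization via k-means
km  <- kmeans(x, centers = 2, nstart = 10)
lab <- km$cluster
w   <- as.numeric(table(lab)) / length(x)   # initial mixing weights
mu  <- as.numeric(tapply(x, lab, mean))     # initial component means
s2  <- as.numeric(tapply(x, lab, var))      # initial component variances

# EM for the two-component Gaussian mixture, started at the k-means values
for (iter in 1:100) {
  # E-step: responsibility of component 1 for each observation
  d1 <- w[1] * dnorm(x, mu[1], sqrt(s2[1]))
  d2 <- w[2] * dnorm(x, mu[2], sqrt(s2[2]))
  r1 <- d1 / (d1 + d2)
  # M-step: update weights, means, and variances
  w  <- c(mean(r1), 1 - mean(r1))
  mu <- c(sum(r1 * x) / sum(r1), sum((1 - r1) * x) / sum(1 - r1))
  s2 <- c(sum(r1 * (x - mu[1])^2) / sum(r1),
          sum((1 - r1) * (x - mu[2])^2) / sum(1 - r1))
}
round(c(weights = w, means = mu, variances = s2), 3)
```

Whether the k-means labels correspond to the first or second true component is irrelevant: EM only needs the starting values to be in the right basin of attraction.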