Dataset columns: idx (int64, 1–56k); question (string, 15–155 chars); answer (string, 2–29.2k chars); question_cut (string, 15–100 chars); answer_cut (string, 2–200 chars); conversation (string, 47–29.3k chars); conversation_cut (string, 47–301 chars).
54,901
What are drawbacks of isotonic regression?
[Figure: Isotonic regression. By Alexeicolin - Own work, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=23732999] As seen in the image, and suggested (partly) by the name, an isotonic regression fit is monotonically increasing or monotonically decreasing. Thus, it would not be appropriate for fitting distributions that have both left and right tails. Also, unlike B-splines, it does not fit derivatives, so it will not approximate smooth curves the way most distribution functions require. It may be useful for heuristically approximating the predicted values, but it would not be especially useful for extrapolation beyond the extreme values of the x-axis data.
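A minimal R sketch of this drawback (my own illustrative example, not from the original answer): fit stats::isoreg to noisy samples of a bell-shaped curve and note that the monotone fit cannot come back down on the right tail.
set.seed(1)
x <- seq(-3, 3, length.out = 200)
y <- dnorm(x) + rnorm(200, sd = 0.02)   # noisy unimodal (bell-shaped) target
fit <- isoreg(x, y)                     # non-decreasing step-function fit
plot(x, y, col = "grey", main = "Isotonic fit to a unimodal curve")
lines(fit$x, fit$yf, col = "red", lwd = 2)
# The fit tracks the rising left side, then flattens near the peak and never
# decreases, so the right tail is approximated poorly.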
54,902
Confidence interval on the percentage difference of two binomial distributions
You can use profile likelihood methods (other answers are surely possible, but I will show that.) You have binomial counts from two groups, men and women. We write $$ M \sim \mathcal{Bin}(m, p_m) \\ W \sim \mathcal{Bin}(w, p_w) $$ and the focus (or interest) parameter is $Q=\frac{p_w-p_m}{p_m}$. Assuming (reasonable) that the counts in the two groups are independent, we find the loglikelihood function is $$ \ell_0(p_m, p_w)\propto M\log(p_m) + (m-M)\log(1-p_m) + W\log(p_w) + \\ (w-W)\log(1-p_w) $$ A little algebra shows that $p_w=p_m(1+Q)$, substituting that above we find $$\ell(Q,p_m)=\ell_0(p_m, p_m(1+Q)) $$ and the profile likelihood for the focus parameter $Q$ is defined by $$ \ell_Q(Q)= \max_{0\le p_m\le 1}\ell(Q,p_m) $$ and we can find a confidence interval by the asymptotic theory for profile likelihood, see Profile likelihood confidence interval proof. In this case it might even be possible to do that symbolically, but I will use numerical methods in R. First, confidence intervals can be read off this plot of the profile likelihood: What is plotted here is the square root of the profile likelihood-based deviance, so a perfect V shape would indicate a quadratic profile likelihood. The code used is set.seed(7*11*13) # My public seed m <- 50; w <- 40 p_m <- 0.4; p_w <- 0.6; Q <- (p_w-p_m)/p_m M <- rbinom(1, m, p_m); W <- rbinom(1, w, p_w) # Loglikelihood function: mloglik0 <- function(p_m, p_w) -dbinom(M, m, p_m, log=TRUE) - dbinom(W, w, p_w, log=TRUE) mloglik1 <- function(Q, p_m) mloglik0(p_m, p_m*(1+Q)) mod <- bbmle::mle2(mloglik1, start=list(p_m=0.4, Q=0)) mod.prof <- bbmle::profile(mod, which=1) confint(mod.prof) 2.5 % 97.5 % -0.2176006 0.5851981 Addendum @cdalitz in another answer refers to the R package DescTools, which implements 11 different methods for the binomial difference (see that answer for references). Among those 3 different profile likelihood methods, one of them called true profile likelihood, the other two modifications using exact tail probability calculations in place of the asymptotic methods used for the true profile likelihood. What I have used here corresponds to the true profile likelihood, as I am doing no exact calculations. But there is another important difference: This answer is about the interest parameter $Q=\frac{p_w-p_m}{p_m}$, a relative difference, not the difference itself! To facilitate comparisons I redo the calculations for $Q=p_w-p_m$, and compare with results from that R package: mloglik1_diff <- function(Q, p_m) mloglik0(p_m, p_m + Q) mod_diff <- bbmle::mle2(mloglik1_diff, start=list(p_m=0.4, Q=0)) mod_diff_prof <- bbmle::profile(mod_diff, which=1) confint(mod_diff_prof) 2.5 % 97.5 % -0.1394750 0.2638033 METHODS <- c("ac", "wald", "waldcc", "score", "scorecc", "mn", "mee", "blj", "ha", "hal", "jp") CIs <- matrix(NA, length(METHODS), 3) colnames(CIs) <- c( " est", "lwr.ci", "upr.ci" ) rownames(CIs) <- METHODS for (method in seq_along(METHODS)) CIs[method, ] <- DescTools::BinomDiffCI(W, w, M, m, method=METHODS[method]) CIs est lwr.ci upr.ci ac 0.065 -0.1381246 0.2608353 wald 0.065 -0.1385663 0.2685663 waldcc 0.065 -0.1610663 0.2910663 score 0.065 -0.1360111 0.2557365 scorecc 0.065 -0.1511457 0.2699042 mn 0.065 -0.1400075 0.2617998 mee 0.065 -0.1388655 0.2607800 blj 0.065 -0.1406425 0.2668979 ha 0.065 -0.1534193 0.2834193 hal 0.065 -0.1378810 0.2607925 jp 0.065 -0.1380281 0.2609785
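As a complement to the bbmle-based code above, here is a hedged sketch that computes the same interval "by hand" from the asymptotic chi-squared calibration of the profile deviance mentioned in the answer. It assumes the objects M, m, W, w and mloglik1 defined in the code above, and the bracketing intervals are my own guesses that happen to work for this simulated data.
prof_nll <- function(Q)                       # profile out p_m for a fixed Q
  optimize(function(p_m) mloglik1(Q, p_m),
           interval = c(1e-6, min(1, 1/(1 + Q)) - 1e-6))$objective
Qhat <- optimize(prof_nll, interval = c(-0.9, 5))$minimum   # profile MLE of Q
dev  <- function(Q) 2 * (prof_nll(Q) - prof_nll(Qhat))      # profile deviance
cut  <- qchisq(0.95, df = 1)                                # asymptotic 95% cutoff
lower <- uniroot(function(Q) dev(Q) - cut, c(-0.9, Qhat))$root
upper <- uniroot(function(Q) dev(Q) - cut, c(Qhat, 5))$root
c(lower, upper)    # should be close to the confint(mod.prof) result above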
54,903
Confidence interval on the percentage difference of two binomial distributions
Unfortunately, there is no universally accepted way of computing a confidence interval for a difference in binomial proportions. The R function BinomDiffCI in the package DescTools offers eleven different options, and its help page gives references to the relevant publications. Newcombe recommends the methods by Miettinen and Nurminen (1985) or Mee (1984) (and his own methods, of course) in R. G. Newcombe, "Interval Estimation for the Difference Between Independent Proportions: Comparison of Eleven Methods," Statistics in Medicine, 17, 873–890, 1998.
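A short, hedged usage sketch of that function, with made-up counts (40/50 vs 25/50) purely for illustration; the method labels in the comments reflect my reading of the DescTools documentation.
library(DescTools)
BinomDiffCI(x1 = 40, n1 = 50, x2 = 25, n2 = 50, method = "mn")   # Miettinen-Nurminen
BinomDiffCI(x1 = 40, n1 = 50, x2 = 25, n2 = 50, method = "mee")  # Mee
BinomDiffCI(x1 = 40, n1 = 50, x2 = 25, n2 = 50, method = "ac")   # Agresti-Caffo, for comparison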
54,904
When is a stochastic process not differentiable?
(1) Forget about random variables for a second. You have a function $Y_t=Y(t,X_t):=\ln(X_t)$. Then, as a function, its partial derivative is defined in the usual sense: $\frac{\partial Y_t}{\partial X_t}=\frac{1}{X_t}$. The fact that $Y_t$ is a random variable makes no difference: your $X_t=X(t,\omega)$, but you are still asking what the change in $Y_t$ is if you keep $t$ constant and vary $X_t$. Whenever you see such partials involved, they are always meant in the classical multivariable-calculus sense. Don't confuse this with terms like $dY_t$, which are formally defined as stochastic differentials: (2) This can be taken as the definition of $dY_t$. In fact, $$Y_t-Y_0=\int_0^t dY_s,$$ where the integral is an Ito integral, not a Lebesgue or Riemann–Stieltjes integral. If now $Y_t=Y(t,X_t)$, then Ito's lemma in its full form says that $$dY_t=\frac{\partial Y}{\partial t}dt+\frac{\partial Y}{\partial X_t}dX_t+\frac{1}{2}\frac{\partial^2 Y}{\partial X_t^2}(dX_t)^2+\dots$$ This should look really familiar: it is the Taylor series of $Y_t$ expanded around $(t, X_t)$, with higher-order terms in $dt$ suppressed. If you now substitute $dX_t=\mu\, dt+\sigma\, dB_t$, you will recover Ito's lemma after treating $dt^2$ and $dt\,dB_t$ as higher order and using $dB_t^2=dt$. You can formally prove those last facts using the definition of quadratic variation.
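As a worked instance of that expansion (my own example), take the answer's case $Y_t=\ln(X_t)$ with $dX_t=\mu\,dt+\sigma\,dB_t$:
% Apply the expansion with \partial Y/\partial t = 0, \partial Y/\partial X = 1/X_t,
% \partial^2 Y/\partial X^2 = -1/X_t^2, and the rules dt^2 = dt\,dB_t = 0, dB_t^2 = dt:
\begin{align*}
d\ln X_t &= \frac{1}{X_t}\,dX_t - \frac{1}{2X_t^2}\,(dX_t)^2 \\
         &= \frac{1}{X_t}\bigl(\mu\,dt + \sigma\,dB_t\bigr) - \frac{\sigma^2}{2X_t^2}\,dt \\
         &= \left(\frac{\mu}{X_t} - \frac{\sigma^2}{2X_t^2}\right)dt + \frac{\sigma}{X_t}\,dB_t .
\end{align*}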
54,905
Motivation for Ward's definition of error sum of squares (ESS)
\begin{align} \operatorname{Var}(\vec x) \propto \sum_{i=1}^n(x_i - \bar x)^2 &= \sum_i x_i^2 - 2\bar x \sum_ix_i + n \bar x^2 \\ &= \sum_i x_i^2 - n \bar x^2 = \sum_i x_i^2 - \frac 1n \left(\sum_i x_i\right)^2 = \text{ESS}. \end{align} I think $ESS$ is more sensible when talking about compression because $ESS = ||\vec x - \bar x \mathbf 1||^2_2$ so this coincides with the usual norm on $\mathbb R^n$.
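A quick numerical check of this identity, using arbitrary made-up numbers:
x <- c(2, 5, 7, 11, 3)
sum((x - mean(x))^2)                      # sum of squared deviations
sum(x^2) - sum(x)^2 / length(x)           # Ward's ESS form
sum((x - mean(x) * rep(1, length(x)))^2)  # ||x - xbar * 1||_2^2
# all three print the same value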
54,906
Motivation for Ward's definition of error sum of squares (ESS)
Ward's ESS is the same as the SS you mention. If you distribute the terms in your formula you get: $ \sum(x_i - \bar x)^2 = \sum x_i^2 + \sum \bar x^2 - 2 \bar x \sum x_i = \sum x_i^2 - n \bar x ^2 = \sum x_i^2 - (\sum x_i)^2 / n$
54,907
Interpreting PCA figures in layman terms
The link ( http://stats.stackexchange.com/questions/141085 ) provided in the comments gives great insight. In this post I add some specific comments on the particular biplot function used. I especially recommend looking at the documentation of the biplot function (in an R console, type "?biplot.princomp"). There you will read that the biplot is based on: Gabriel, K. R. (1971). The biplot graphical display of matrices with applications to principal component analysis. Biometrika, 58, 453–467. The documentation also explains that there are several options to alter the appearance of the plot. For instance, your question 1, "Is the upper axis bound to PC1, and the right axis to PC2?", relates to the 'choices' parameter of the function: you can choose which PCs appear on the x and y axes. It is also worthwhile to look at the source code of the biplot: > getS3method("biplot","prcomp") function (x, choices = 1L:2L, scale = 1, pc.biplot = FALSE, ...) { if (length(choices) != 2L) stop("length of choices must be 2") if (!length(scores <- x$x)) stop(gettextf("object '%s' has no scores", deparse(substitute(x))), domain = NA) if (is.complex(scores)) stop("biplots are not defined for complex PCA") lam <- x$sdev[choices] n <- NROW(scores) lam <- lam * sqrt(n) if (scale < 0 || scale > 1) warning("'scale' is outside [0, 1]") if (scale != 0) lam <- lam^scale else lam <- 1 if (pc.biplot) lam <- lam/sqrt(n) biplot.default(t(t(scores[, choices])/lam), t(t(x$rotation[, choices]) * lam), ...) invisible() } <bytecode: 0x62870ca8> <environment: namespace:stats> With this source code you won't have any confusion about what type of scaling is used (e.g., note that a factor $\sqrt{n}$ is used instead of $\sqrt{n-1}$, the latter being mentioned in an answer to the question linked above). This scaling also answers your question 3: princomp scales everything by the standard deviation, and the biplot applies another scaling by $\sqrt{50}$ as well as by the component standard deviations (x$sdev, called lam in the code). The 2d (bi-)plots are essentially projections of multidimensional data. I believe a 3d view of PCA is a great help for understanding that, and I hope the following two images provide some additional insight. Although your data are 4d, I took the liberty of eliminating one of your variables in order to make the plotting possible. You will have to extend this idea to your own mental picture of a higher-dimensional space. 
The biplot: pok<-matrix(c("Quilava",58,64,58,80, "Goodra",90,100,70,80, "Mothim",70,94,50,66, "Marowak",60,80,110,45, "Chandelure",60,55,90,80, "Helioptile",44,38,33,70, "MeloettaAriaForme",100,77,77,90, "MetagrossMega",80,145,150,110, "Sawsbuck",80,100,70,95, "Probopass",60,55,145,40, "GiratinaAltered",150,100,120,90, "Tranquill",62,77,62,65, "Simisage",75,98,63,101, "Scizor",70,130,100,65, "Jigglypuff",115,45,20,20, "Carracosta",74,108,133,32, "Ferrothorn",74,94,131,20, "Kadabra",40,35,30,105, "Sylveon",95,65,65,60, "Golem",80,120,130,45, "Magnemite",25,35,70,45, "Vanillish",51,65,65,59, "Unown",48,72,48,48, "Snivy",45,45,55,63, "Tynamo",35,55,40,60, "Duskull",20,40,90,25, "Beautifly",60,70,50,65, "Marill",70,20,50,40, "Lunatone",70,55,65,70, "Flygon",80,100,80,100, "Bronzor",57,24,86,23, "Monferno",64,78,52,81, "Simisear",75,98,63,101, "Aromatisse",101,72,72,29, "Scraggy",50,75,70,48, "Scolipede",60,100,89,112, "Staraptor",85,120,70,100, "GyaradosMega",95,155,109,81, "Tyrunt",58,89,77,48, "Zekrom",100,150,120,90, "Gyarados",95,125,79,81, "Cobalion",91,90,129,108, "Espurr",62,48,54,68, "Spheal",70,40,50,25, "Dodrio",60,110,70,100, "Torkoal",70,85,140,20, "Cacnea",50,85,40,35, "Trubbish",50,50,62,65, "Lucario",70,110,70,90, "GiratinaOrigin",150,120,100,90),50,byrow=1) pokt <- matrix(as.numeric(pok[,-1]),50) pokn <- pok[,1] biplot(pca,scale=1,xlabs=pokn,ylabs=c("attack","defense","speed"),cex=0.7,xlab="",ylab="") mtext("PCA2 loading of vectors\n multiplied by lambda*sqrt(n)", side=4, line=3) mtext("PCA1 loading of vectors\n multiplied by lambda*sqrt(n)", side=3, line=2.5) mtext("PCA2 scores\n divided by lambda*sqrt(n)", side=2, line=2.5) mtext("PCA1 scores\n divided by lambda*sqrt(n)", side=1, line=3) You should try to see for yourself whether you can recognize the 2d-biplot in the 3d-image. Note in the 3d image how the points as well as axes(attack, defense, speed) are projected onto the plane spanned by the PC1 and PC2 vectors. points as well as axes (Geometrically a biplot is this two-times projection, showing two things together: the transformed vectors, loadings, as well as the transformed points, scores. Algebraically the biplot is the dual representation of the singular value decomposition matrices $\mathbf{U}$ and $\mathbf{V}$, ands scaled versions thereof. in the 3d image the scales do not match the same as the scales in the 2d image. 
That's because in the 3d image we project both loadings and scores on the same plane, with the same scale in 3d (and we adjust the scales on the edges of the 2d plane), and in the 2d images those scales are placed on different axes allowing to stretch them such that the min and max of the scales coincide$ codeblock for animation (excuse for the dirty writing with little spaces etcetera): # centering and scaling pokt <- apply(pokt,2, FUN <- function(v) {(v-mean(v))/sqrt(var(v))} ) # loading libraries (not sure all are needed) library("plotrix") library(plot3D) library(rgl) library(matlib) # prepare rgl device #rgl.close() # for debugging rgl.open() # Open a new RGL device bgplot3d({ plot.new() title(main = '', line = 3) }) par3d(windowRect = 50 + c( 0, 0, 800, 800 ) ) rgl.bg(color = "#fcfbf6") rgl.viewpoint(theta = -90, phi = 20, zoom = 1) # plot data points rgl.points(pokt[,2], pokt[,3], pokt[,4], color ="blue",size=5) # axes and square box N = 2.5 rgl.lines(c(-N, N), c(-N, -N),c(-N, -N), color = "gray") rgl.lines(c(-N, -N), c(-N,N), c(-N, -N), color = "gray") rgl.lines(c(-N, -N), c(-N, -N), c(-N,N), color = "gray") rgl.lines(c(N, -N), c(N, N), c(N, N), color = "gray") rgl.lines(c(N, N), c(N, -N), c(N, N), color = "gray") rgl.lines(c(N, N), c(N, N), c(N, -N), color = "gray") rgl.lines(c(N, N), c(-N, -N), c(N, -N), color = "gray") rgl.lines(c(N, -N), c(N, N), c(-N, -N), color = "gray") rgl.lines(c(-N, -N), c(-N, N), c(N, N), color = "gray") rgl.lines(c(N, -N), c(-N, -N), c(N, N), color = "gray") rgl.lines(c(N, N), c(N, -N), c(-N, -N), color = "gray") rgl.lines(c(-N, -N), c(N, N), c(N, -N), color = "gray") rgl.lines(c(0, N), c(0, 0),c(0, 0), color = "orange") rgl.lines(c(0, 0), c(0, N),c(0, 0), color = "orange") rgl.lines(c(0, 0), c(0, 0),c(0, N), color = "orange") cone3d(base=c(N,0,0)*0.94,tip=c(N,0,0)*0.06,radius=0.1,col="orange") cone3d(base=c(0,N,0)*0.94,tip=c(0,N,0)*0.06,radius=0.1,col="orange") cone3d(base=c(0,0,N)*0.94,tip=c(0,0,N)*0.06,radius=0.1,col="orange") rgl.texts(c(c(N,0,0),c(0,N,0),c(0,0,N)), text = c("attack","defense","speed"), color="orange", adj = c(0.0, 0.0), size = 9) # biplot calculations lam <- pca$sdev[]*sqrt(50) scores <- cbind(pokt[,2],pokt[,3],pokt[,4]) %*% pca$rotation scores_sm <- scores %*% diag(lam^-1) loadings <- pca$rotation loadings_sm <- loadings %*% diag(lam) # 2d plot in 3d # define boundaries max1 <- max(scores[,1])*1.1 max2 <- max(scores[,2])*1.1 min1 <- min(scores[,1])*1.1 min2 <- min(scores[,2])*1.1 # plot bounding box of biplot rgl.lines(c(+max1*pca$rotation[1,1]+max2*pca$rotation[1,2], -max1*pca$rotation[1,1]+max2*pca$rotation[1,2]), c(+max1*pca$rotation[2,1]+max2*pca$rotation[2,2], -max1*pca$rotation[2,1]+max2*pca$rotation[2,2]), c(+max1*pca$rotation[3,1]+max2*pca$rotation[3,2], -max1*pca$rotation[3,1]+max2*pca$rotation[3,2]), color = "black") rgl.lines(c(+max1*pca$rotation[1,1]-max2*pca$rotation[1,2], -max1*pca$rotation[1,1]-max2*pca$rotation[1,2]), c(+max1*pca$rotation[2,1]-max2*pca$rotation[2,2], -max1*pca$rotation[2,1]-max2*pca$rotation[2,2]), c(+max1*pca$rotation[3,1]-max2*pca$rotation[3,2], -max1*pca$rotation[3,1]-max2*pca$rotation[3,2]), color = "black") rgl.lines(c(+max1*pca$rotation[1,1]+max2*pca$rotation[1,2], +max1*pca$rotation[1,1]-max2*pca$rotation[1,2]), c(+max1*pca$rotation[2,1]+max2*pca$rotation[2,2], +max1*pca$rotation[2,1]-max2*pca$rotation[2,2]), c(+max1*pca$rotation[3,1]+max2*pca$rotation[3,2], +max1*pca$rotation[3,1]-max2*pca$rotation[3,2]), color = "black") rgl.lines(c(-max1*pca$rotation[1,1]+max2*pca$rotation[1,2], 
-max1*pca$rotation[1,1]-max2*pca$rotation[1,2]), c(-max1*pca$rotation[2,1]+max2*pca$rotation[2,2], -max1*pca$rotation[2,1]-max2*pca$rotation[2,2]), c(-max1*pca$rotation[3,1]+max2*pca$rotation[3,2], -max1*pca$rotation[3,1]-max2*pca$rotation[3,2]), color = "black") # plot projected points projected_points <- scores[,c(1,2)] %*% t(pca$rotation[,c(1,2)]) rgl.points(projected_points[,1],projected_points[,2],projected_points[,3], color ="black",size=5) # plot projection paths for points for (i in 1:50) { rgl.lines(c(pokt[i,2], projected_points[i,1]), c(pokt[i,3], projected_points[i,2]), c(pokt[i,4], projected_points[i,3]), color = "gray") } # plot projected loadings axes projected_lines <- N*loadings[,c(1,2)] %*% t(pca$rotation)[c(1,2),] rgl.lines(c(0,projected_lines[1,1]), c(0,projected_lines[1,2]), c(0,projected_lines[1,3]), color="red") rgl.lines(c(0,projected_lines[2,1]), c(0,projected_lines[2,2]), c(0,projected_lines[2,3]), color="red") rgl.lines(c(0,projected_lines[3,1]), c(0,projected_lines[3,2]), c(0,projected_lines[3,3]), color="red") # plot tickmarks for (i in -3:3) { pos = i/10*sqrt(50)*pca$sdev[2] rgl.lines(c(-max1*pca$rotation[1,1]+pos*pca$rotation[1,2], -1.05*max1*pca$rotation[1,1]+pos*pca$rotation[1,2]), c(-max1*pca$rotation[2,1]+pos*pca$rotation[2,2], -1.05*max1*pca$rotation[2,1]+pos*pca$rotation[2,2]), c(-max1*pca$rotation[3,1]+pos*pca$rotation[3,2], -1.05*max1*pca$rotation[3,1]+pos*pca$rotation[3,2]), color = "black") rgl.texts(-1.08*max1*pca$rotation[1,1]+pos*pca$rotation[1,2], -1.08*max1*pca$rotation[2,1]+pos*pca$rotation[2,2], -1.08*max1*pca$rotation[3,1]+pos*pca$rotation[3,2], text=i/10,color="black") } for (i in -3:3) { pos = i/10*sqrt(50)*pca$sdev[1] rgl.lines(c(-max2*pca$rotation[1,2]+pos*pca$rotation[1,1], -1.06*max2*pca$rotation[1,2]+pos*pca$rotation[1,1]), c(-max2*pca$rotation[2,2]+pos*pca$rotation[2,1], -1.06*max2*pca$rotation[2,2]+pos*pca$rotation[2,1]), c(-max2*pca$rotation[3,2]+pos*pca$rotation[3,1], -1.06*max2*pca$rotation[3,2]+pos*pca$rotation[3,1]), color = "black") rgl.texts(-1.1*max2*pca$rotation[1,2]+pos*pca$rotation[1,1], -1.1*max2*pca$rotation[2,2]+pos*pca$rotation[2,1], -1.1*max2*pca$rotation[3,2]+pos*pca$rotation[3,1], text=i/10,color="black") } for (i in -3:2) { pos = N*i*2/sqrt(50)/pca$sdev[1] rgl.lines(c(max2*pca$rotation[1,2]+pos*pca$rotation[1,1], 1.05*max2*pca$rotation[1,2]+pos*pca$rotation[1,1]), c(max2*pca$rotation[2,2]+pos*pca$rotation[2,1], 1.05*max2*pca$rotation[2,2]+pos*pca$rotation[2,1]), c(max2*pca$rotation[3,2]+pos*pca$rotation[3,1], 1.05*max2*pca$rotation[3,2]+pos*pca$rotation[3,1]), color = "red") rgl.texts(1.08*max2*pca$rotation[1,2]+pos*pca$rotation[1,1], 1.08*max2*pca$rotation[2,2]+pos*pca$rotation[2,1], 1.08*max2*pca$rotation[3,2]+pos*pca$rotation[3,1], text=i*2,color="red") } for (i in -3:2) { pos = N*i*2/sqrt(50)/pca$sdev[2] rgl.lines(c(max1*pca$rotation[1,1]+pos*pca$rotation[1,2], 1.05*max1*pca$rotation[1,1]+pos*pca$rotation[1,2]), c(max1*pca$rotation[2,1]+pos*pca$rotation[2,2], 1.05*max1*pca$rotation[2,1]+pos*pca$rotation[2,2]), c(max1*pca$rotation[3,1]+pos*pca$rotation[3,2], 1.05*max1*pca$rotation[3,1]+pos*pca$rotation[3,2]), color = "red") rgl.texts(1.08*max1*pca$rotation[1,1]+pos*pca$rotation[1,2], 1.08*max1*pca$rotation[2,1]+pos*pca$rotation[2,2], 1.08*max1*pca$rotation[3,1]+pos*pca$rotation[3,2], text=i*2,color="red") } # make movei and close device movie3d(spin3d(axis = c(0, 1, 0),rpm=-7.5), duration = 8, fps=10, dir = "~/gif") rgl.close()
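Note that the code blocks above assume a pca object carried over from the question; it is not defined here. A minimal, hedged way to build a compatible one from the three columns used in the 3d view (my assumption about how it was constructed, not the original poster's call) is:
# pokt columns 2:4 hold the attack/defense/speed stats used in the 3d view
pca <- prcomp(pokt[, 2:4], center = TRUE, scale. = TRUE)
summary(pca)   # proportion of variance per component
biplot(pca)    # the stock biplot that the answer dissects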
54,908
How to sample uniformly points around a neighborhood of a point lying on a n-sphere?
Sample in a neighborhood of $e_{n+1}=(0,0,\ldots,0,1)$ in $\mathbb{R}^{n+1}$ and then apply any orthogonal transformation (that is, isometry of the sphere) that sends $e_{n+1}$ to $x_k$. This reduces the problem to sampling around $e_{n+1}$. Geometry shows that the last coordinate of the points, $Z$, will range from $1$ down to $1 - r^2/2$. The distribution of $Z$ is otherwise that of a Beta$(n/2,n/2)$ variate, multiplied by $2$ and shifted down by $1$, as shown at https://stats.stackexchange.com/a/85977/919. Conditional on $Z$, the other $n$ coordinates will lie on a sphere of dimension $n-1$ and radius $\sqrt{1-Z^2}$. Generate these using any of the methods described at "How to generate uniformly distributed points on the surface of the 3-d unit sphere?" (or otherwise). To draw values of $Z$, compute the cumulative probability associated with $1-r^2/2$: if $F$ is this Beta distribution function ($d$ plays the role of $n$ in the code below), this probability is $q = F(r^2/4)$. For $U$ a uniform variate in $[0, q]$, set $Z=1-2\,F^{-1}(U)$. Here is R code to illustrate the ideas (and fill in any details I might not have explained sufficiently clearly). Tests of $S^1$ and $S^2$ (which can be directly visualized) and of higher-dimensional spheres with very small distances and the maximum distance ($2$) are consistent with what one would expect, suggesting it is working correctly. d <- 2 # Dimension of the sphere; one less than dimension of its Euclidean space. n <- 1e3 # Number of points to generate. r.max <- 0.2 # Maximum Euclidean distance. # # Generate uniformly random points on a `dim`-sphere of radius `radius`. # Returns (dim+1)-dimensional vectors as rows. # rsphere <- function(n, dim, radius=1) { x <- matrix(rnorm((dim+1)*n), nrow=n) x / (sqrt(rowSums(x^2)) / radius) } # # Generate random heights on the d-sphere. # q <- pbeta(r.max^2 / 4, d/2, d/2) # Limiting probability F(r.max^2/4) z <- 1 - 2*qbeta(runif(n, 0, q), d/2, d/2) # Last coordinate # # Compute the corresponding radii of the cross-sections at heights `z`. # rho <- sqrt(1 - z^2) # Radius # # Generate the remaining (first `d`) coordinates uniformly. # Results are in the rows of `x`. # x <- cbind(rsphere(n, d-1, radius=rho), z) Incidentally, it's simple and fast to find an orthogonal transformation of $\mathbb{R}^{n+1}$ that sends $e_{n+1}$ to $x_k$: choose the reflection with this property. Here is sample R code to apply such a reflection to an entire array of coordinates, such as the x produced above. # # Reflect points `x` (array of rows) in a way that sends (0,0,..,0,1) to `target`. # reflect <- function(x, target) { if (!is.matrix(x)) x <- matrix(x, nrow=1) n <- length(target) v <- c(rep(0, n-1), 1) - target v.norm2 <- sum(v^2) return(x - outer(2/v.norm2 * c(x%*%v), v)) }
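A small, hedged check of the two code pieces together (it assumes the objects d, n, r.max, x, rsphere and reflect defined above): after reflecting the sample toward an arbitrary target, every point should lie within Euclidean distance r.max of that target, because the reflection is an isometry.
target <- c(rsphere(1, d))                      # an arbitrary target point on the sphere
y <- reflect(x, target)                         # move the sample from the pole to the target
dist.to.target <- sqrt(rowSums((y - matrix(target, n, d + 1, byrow = TRUE))^2))
range(dist.to.target)                           # upper end should not exceed r.max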
54,909
How to sample uniformly points around a neighborhood of a point lying on a n-sphere?
Adding to @whuber's answer here is python code to reproduce the ideas he explained, as well to validate the results. from scipy.stats import beta import numpy as np def rsphere(n,n_dim,rad=1.0): X = np.random.normal(size=(n,n_dim+1)) X_norm=np.divide(np.linalg.norm(X,axis=1),rad) X = np.divide(X.T,X_norm).T return X def reflect(X, target): n_dim = X.shape[1] pole = np.array([0]*(n_dim-1) + [1]) v = pole - target X_new = X - np.outer(2.0/np.dot(v,v) * np.matmul(X,v).T, v) return X_new r_max=0.5 # max radius to sample n_dim=200 # number of dimension of the space n_sphere=n_dim-1 # dimension of the n-sphere n = int(1e3) # numer of points to sample target = np.zeros(n_dim) # target target[-2]=1 pole = np.array([0]*(n_dim-1) + [1]) # north pole # generate points X = rsphere(n,n_sphere,rad=1.) rads = np.random.uniform(0.0,r_max,n) q = beta.cdf(np.square(rads)/4.0, n_sphere/2.0, n_sphere/2.0) unif = np.random.uniform(0,q,n) z = 1 - 2*beta.ppf(unif, n_sphere/2.0, n_sphere/2.0) rho = np.sqrt(1.0 - np.square(z)) X_pole = np.hstack((rsphere(n, n_sphere-1, rad=rho),z.reshape(-1,1))) X_nei = reflect(X_pole, target) To validate results we can look at a histogram of distances of the generated points vs the target: import matplotlib.pyplot as plt X_dist = np.linalg.norm(X_nei-target,axis=1) plt.hist(X_dist) plt.xlabel('$\|z_i-z_{target} \|$') plt.show() And to visualize the 3D case: from mpl_toolkits.mplot3d import Axes3D import matplotlib.pyplot as plt fig = plt.figure() ax = fig.gca(projection='3d') ax.set_aspect('equal') # Create a sphere r = 1 pi = np.pi cos = np.cos sin = np.sin phi, theta = np.mgrid[0.0:pi:40j, 0.0:2.0*pi:40j] x = r*sin(phi)*cos(theta) y = r*sin(phi)*sin(theta) z = r*cos(phi) # sphere ax.plot_surface( x, y, z, rstride=1, cstride=1, alpha=0.25, linewidth=0) # pole ax.scatter(X_pole[:,0],X_pole[:,1],X_pole[:,2],c='b',alpha=0.5) ax.scatter(pole[0],pole[1],pole[2],s=40,c='k') # target ax.scatter(X_nei[:,0],X_nei[:,1],X_nei[:,2],c='r',alpha=0.5) ax.scatter(target[0],target[1],target[2],s=40,c='k') plt.show()
54,910
Why are we interested in asymptotics if the real-world data is almost always finite? [duplicate]
Asymptotic theory tells us about the statistical properties of a sample as it grows to an arbitrarily large size $n$. Often datasets are sufficiently large that theorems like the law of large numbers and the central limit theorem apply in practice. Think of doing a census of tree heights in a forest, or counting the number of times the house wins at craps in a casino over a day. One important thing to note is that asymptotic theory is largely concerned with the limiting behaviour of random variables, so there are no infinite datasets involved. For instance, if a sequence of random variables (like the sample averages of a dataset) $\hat{X}_1, \hat{X}_2, \ldots$ converges in probability to the true mean $\mu$ (an asymptotic result), that merely states that for any arbitrarily small error tolerance $\epsilon$ and any small probability I select, there is some large sample size $n$ beyond which the chance that $\hat{X}_n$ is more than $\epsilon$ away from $\mu$ is smaller than that probability.
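A tiny simulation of that convergence-in-probability statement (my own illustration, using an exponential(1) population so $\mu = 1$ and an arbitrary tolerance $\epsilon = 0.05$): the estimated probability that the sample mean misses $\mu$ by more than $\epsilon$ shrinks toward zero as $n$ grows.
set.seed(42)
eps <- 0.05; mu <- 1
for (n in c(10, 100, 1000, 10000)) {
  xbar <- replicate(2000, mean(rexp(n, rate = 1)))   # 2000 sample means of size n
  cat("n =", n, "  P(|xbar - mu| >", eps, ") ~", mean(abs(xbar - mu) > eps), "\n")
}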
54,911
Expectation of $X$ given $X < c$
A general solution: let $X$ be a random vector with density $f$ and $A=\{X\in B_0\}$, for some $n$-dimensional Borel set $B_0$, with $\Pr(A)>0$. The conditional density denoted by $f(x\mid A)$ must be such that $$ \Pr\{X\in B\mid A\} = \int_B f(x\mid A)\,dx \, \qquad (*) $$ for every $n$-dimensional Borel set $B$. From this it's clear that $$ f(x\mid A) = \frac{f(x)}{\Pr(A)}\,I_{B_0}(x) \, $$ almost everywhere, in which $I_{B_0}$ is the indicator function of $B_0$: $I_{B_0}(x)=1$ if $x\in B_0$, and $I_{B_0}(x)=0$ if $x\notin B_0$. Here is why: the left hand side of $(*)$ is just $$ \Pr\{X\in B\mid X\in B_0\} = \frac{\Pr\{X\in B\cap B_0\}}{\Pr\{X\in B_0\}}, $$ and integrating the right hand side of $(*)$ we have $$ \int_B \frac{f(x)}{\Pr(A)}\,I_{B_0}(x)\, dx = \frac{1}{\Pr(A)} \int_{B\cap B_0} f(x)\,dx = \frac{\Pr\{X\in B\cap B_0\}}{\Pr\{X\in B_0\}}. $$ In your specific case $$ f(x\mid X \leq c) = \frac{f(x)}{\Phi(c)}\,I_{(-\infty,c]}(x). $$
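A quick numerical illustration of the specific case at the end, for a standard normal $X$ and an arbitrary cutoff $c = 0.7$; the closed form $E(X\mid X\le c) = -\varphi(c)/\Phi(c)$ is the standard truncated-normal mean, stated here only for comparison.
c0 <- 0.7
integrate(function(x) x * dnorm(x) / pnorm(c0), -Inf, c0)$value  # E(X | X <= c) from the conditional density
-dnorm(c0) / pnorm(c0)                                           # closed form for the standard normal
x <- rnorm(2e5); mean(x[x <= c0])                                # Monte Carlo check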
Expectation of $X$ given $X < c$
A general solution: let $X$ be a random vector with density $f$ and $A=\{X\in B_0\}$, for some $n$-dimensional Borel set $B_0$, with $\Pr(A)>0$. The conditional density denoted by $f(x\mid A)$ must be
Expectation of $X$ given $X < c$ A general solution: let $X$ be a random vector with density $f$ and $A=\{X\in B_0\}$, for some $n$-dimensional Borel set $B_0$, with $\Pr(A)>0$. The conditional density denoted by $f(x\mid A)$ must be such that $$ \Pr\{X\in B\mid A\} = \int_B f(x\mid A)\,dx \, \qquad (*) $$ for every $n$-dimensional Borel set $B$. From this it's clear that $$ f(x\mid A) = \frac{f(x)}{\Pr(A)}\,I_{B_0}(x) \, $$ almost everywhere, in which $I_{B_0}$ is the indicator function of $B_0$: $I_{B_0}(x)=1$ if $x\in B_0$, and $I_{B_0}(x)=0$ if $x\notin B_0$. Here is why: the left hand side of $(*)$ is just $$ \Pr\{X\in B\mid X\in B_0\} = \frac{\Pr\{X\in B\cap B_0\}}{\Pr\{X\in B_0\}}, $$ and integrating the right hand side of $(*)$ we have $$ \int_B \frac{f(x)}{\Pr(A)}\,I_{B_0}(x)\, dx = \frac{1}{\Pr(A)} \int_{B\cap B_0} f(x)\,dx = \frac{\Pr\{X\in B\cap B_0\}}{\Pr\{X\in B_0\}}. $$ In your specific case $$ f(x\mid X \leq c) = \frac{f(x)}{\Phi(c)}\,I_{(-\infty,c]}(x). $$
Expectation of $X$ given $X < c$ A general solution: let $X$ be a random vector with density $f$ and $A=\{X\in B_0\}$, for some $n$-dimensional Borel set $B_0$, with $\Pr(A)>0$. The conditional density denoted by $f(x\mid A)$ must be
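As a numerical check of the answer above for the standard-normal special case (the cutoff $c = 0.5$ is an arbitrary choice), the conditional density $f(x)/\Phi(c)$ on $(-\infty,c]$ gives $E(X \mid X \le c) = -\phi(c)/\Phi(c)$, which the R sketch below recovers three ways:
R code (sketch):
c0 <- 0.5
num   <- integrate(function(x) x * dnorm(x) / pnorm(c0), -Inf, c0)$value
exact <- -dnorm(c0) / pnorm(c0)      # closed form for the standard normal
x <- rnorm(1e6)
mc <- mean(x[x <= c0])               # Monte Carlo estimate
c(numerical = num, closed_form = exact, monte_carlo = mc)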
54,912
Expectation of $X$ given $X < c$
Here is another approach for a continuous random variable, less rigorous than Zen's beautiful solution. Fix $x:x< c$, then $P(X\leq x|X<c)=\frac{P(X\leq x,X <c)}{P(X<c)}\overset{x<c}{=} \frac{P(X <x)}{P(X<c)}=\frac{F_X(x)}{F_X(c)}$ Fix $x:x>c$, then $P(X\leq x|X<c)=\frac{P(X\leq x,X <c)}{P(X<c)}\overset{x>c}{=} \frac{P(X <c)}{P(X<c)}=1$ Then, the conditional pdf is $ f_{X|X<c}(x|x<c)=\frac{f_X(x)}{F_X(c)}I_{(-\infty,c]}(x)$ Therefore, the conditional expectation is $E(X|X<c)=\int_{- \infty}^{+\infty}xf_{X|X<c}(x|x<c)dx=\int_{- \infty}^{c}xf_{X|X<c}(x|x<c)dx$
Expectation of $X$ given $X < c$
Here is another approach for a continuous random variable, less rigorous than Zen's beautiful solution. Fix $x:x< c$, then $P(X\leq x|X<c)=\frac{P(X\leq x,X <c)}{P(X<c)}\overset{x<c}{=} \frac{P(X <x)}
Expectation of $X$ given $X < c$ Here is another approach for a continuous random variable, less rigorous than Zen's beautiful solution. Fix $x:x< c$, then $P(X\leq x|X<c)=\frac{P(X\leq x,X <c)}{P(X<c)}\overset{x<c}{=} \frac{P(X <x)}{P(X<c)}=\frac{F_X(x)}{F_X(c)}$ Fix $x:x>c$, then $P(X\leq x|X<c)=\frac{P(X\leq x,X <c)}{P(X<c)}\overset{x>c}{=} \frac{P(X <c)}{P(X<c)}=1$ Then, the conditional pdf is $ f_{X|X<c}(x|x<c)=\frac{f_X(x)}{F_X(c)}I_{(-\infty,c]}(x)$ Therefore, the conditional expectation is $E(X|X<c)=\int_{- \infty}^{+\infty}xf_{X|X<c}(x|x<c)dx=\int_{- \infty}^{c}xf_{X|X<c}(x|x<c)dx$
Expectation of $X$ given $X < c$ Here is another approach for a continuous random variable, less rigorous than Zen's beautiful solution. Fix $x:x< c$, then $P(X\leq x|X<c)=\frac{P(X\leq x,X <c)}{P(X<c)}\overset{x<c}{=} \frac{P(X <x)}
54,913
Models to use Predict time till next purchase
The basic strategy would be to assume that the activity is distributed according to some family of distributions, and then estimate the parameters based on the data. This wouldn’t really be regression, since the feature variables and the response variables are basically the same. Also, the really basic analysis would base the estimate for each customer only on that customer’s activity, although a more advanced analysis might adjust what family of distributions is expected, or weight towards an expectation of the parameters based on what the average customer is doing. If you have other variables, such as the dollar amounts of the transactions, or if you don’t assume independence (perhaps people who buy a lot on the weekend have behavior that’s different from other people, for example), then you can do regression analysis between those factors and the parameters.
Models to use Predict time till next purchase
The basic strategy would be to assume that the activity is distributed according to some family of distributions, and then estimate the parameters based on the data. This wouldn’t really be regression
Models to use Predict time till next purchase The basic strategy would be to assume that the activity is distributed according to some family of distributions, and then estimate the parameters based on the data. This wouldn’t really be regression, since the feature variables and the response variables are basically the same. Also, the really basic analysis would base the estimate for each customer only on that customer’s activity, although a more advanced analysis might adjust what family of distributions is expected, or weight towards an expectation of the parameters based on what the average customer is doing. If you have other variables, such as the dollar amounts of the transactions, or if you don’t assume independence (perhaps people who buy a lot on the weekend have behavior that’s different from other people, for example), then you can do regression analysis between those factors and the parameters.
Models to use Predict time till next purchase The basic strategy would be to assume that the activity is distributed according to some family of distributions, and then estimate the parameters based on the data. This wouldn’t really be regression
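A minimal R sketch of the per-customer strategy described above, assuming (purely for illustration) an exponential family for the gaps between purchases; the customer names and gap values are made up:
R code (sketch):
gaps <- list(cust_A = c(12, 30, 7, 21, 15),
             cust_B = c(3, 5, 2, 6, 4, 3))       # days between purchases
sapply(gaps, function(g) {
  rate <- 1 / mean(g)                            # exponential MLE of the rate
  c(expected_gap     = mean(g),                  # expected time to next purchase
    p_buy_within_30d = pexp(30, rate))           # P(next purchase within 30 days)
})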
54,914
Models to use Predict time till next purchase
It depends on what you want to use and what assumptions you want to make. If you assume that the purchases are stochastically independent then you're looking for the Poisson Point Process. An accessible introduction can be found here. More generally, the Exponential Distribution predicts the time between successive events. You can use Exponential Regression if you have insight into which variables determine the time intervals (thanks, comment!) If you don't have any insights into the data to allow you to use exponential regression or the Poisson Point Process, you might be able to build an ML algorithm that simulates the purchasing behavior of your clientele and then you can study the simulation to determine the answer.
Models to use Predict time till next purchase
It depends on what you want to use and what assumptions you want to make. If you assume that the purchases are stochastically independent then you're looking for the the Poisson Point Process. An acce
Models to use Predict time till next purchase It depends on what you want to use and what assumptions you want to make. If you assume that the purchases are stochastically independent then you're looking for the Poisson Point Process. An accessible introduction can be found here. More generally, the Exponential Distribution predicts the time between successive events. You can use Exponential Regression if you have insight into which variables determine the time intervals (thanks, comment!) If you don't have any insights into the data to allow you to use exponential regression or the Poisson Point Process, you might be able to build an ML algorithm that simulates the purchasing behavior of your clientele and then you can study the simulation to determine the answer.
Models to use Predict time till next purchase It depends on what you want to use and what assumptions you want to make. If you assume that the purchases are stochastically independent then you're looking for the the Poisson Point Process. An acce
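A small self-consistency check in R of the Poisson-process idea above: if purchases follow a homogeneous Poisson process, the gaps are iid exponential and the intensity is estimated by one over the mean gap (the rate 0.2 below is an arbitrary made-up value):
R code (sketch):
set.seed(42)
rate <- 0.2                                   # hypothetical: one purchase per 5 days on average
gaps <- rexp(200, rate)                       # simulated inter-purchase times
c(estimated_rate    = 1 / mean(gaps),         # MLE of the process intensity
  purchases_per_30d = 30 / mean(gaps))        # implied expected count in a 30-day window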
54,915
Models to use Predict time till next purchase
This is an example of Survival Analysis, and yes, that can be done with regression. You can start at Wikipedia. Start with figuring out how to create a Kaplan Meier curve, maybe compare it for some subgroups, if you've never heard of this area.
Models to use Predict time till next purchase
This is an example of Survival Analysis, and yes, that can be done with regression. You can start at Wikipedia. Start with figuring out how to create a Kaplan Meier curve, maybe compare it for some su
Models to use Predict time till next purchase This is an example of Survival Analysis, and yes, that can be done with regression. You can start at Wikipedia. Start with figuring out how to create a Kaplan Meier curve, maybe compare it for some subgroups, if you've never heard of this area.
Models to use Predict time till next purchase This is an example of Survival Analysis, and yes, that can be done with regression. You can start at Wikipedia. Start with figuring out how to create a Kaplan Meier curve, maybe compare it for some su
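A minimal Kaplan-Meier sketch with the survival package, using a made-up data frame ('gap' is days since the last purchase, 'event' is 1 if the next purchase was observed and 0 if the customer is still waiting, i.e. right-censored; the 'segment' split is purely illustrative):
R code (sketch):
library(survival)
d <- data.frame(gap     = c(5, 12, 30, 8, 45, 60, 14, 21, 90, 33),
                event   = c(1,  1,  1, 1,  0,  1,  1,  1,  0,  1),
                segment = rep(c("new", "returning"), each = 5))
fit <- survfit(Surv(gap, event) ~ segment, data = d)
summary(fit)     # estimated P(no repurchase yet by time t), by segment
# plot(fit) would draw the Kaplan-Meier curves for the two segments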
54,916
Models to use Predict time till next purchase
This is point process modeling. One simple way to use regression is to regress the next interval of purchase as a function of history and other covariates. In general, you can fit a parametric form of conditional intensity function. A popular choice in neuroscience is to use a linear-nonlinear functional form.
Models to use Predict time till next purchase
This is point process modeling. One simple way to use regression is to regress the next interval of purchase as a function of history and other covariates. In general, you can fit a parametric form of
Models to use Predict time till next purchase This is point process modeling. One simple way to use regression is to regress the next interval of purchase as a function of history and other covariates. In general, you can fit a parametric form of conditional intensity function. A popular choice in neuroscience is to use a linear-nonlinear functional form.
Models to use Predict time till next purchase This is point process modeling. One simple way to use regression is to regress the next interval of purchase as a function of history and other covariates. In general, you can fit a parametric form of
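One very simple version of the history regression suggested above, with all data simulated purely for illustration: the next inter-purchase gap is regressed on the previous gap with a Gamma GLM (one common choice for positive waiting times; other links, covariates, or a full conditional-intensity model could be swapped in):
R code (sketch):
set.seed(7)
n <- 300
prev_gap <- rexp(n, 0.1)
next_gap <- rexp(n, rate = 1 / (5 + 0.5 * prev_gap))   # longer history -> longer wait
fit <- glm(next_gap ~ prev_gap, family = Gamma(link = "log"))
summary(fit)$coefficients
predict(fit, newdata = data.frame(prev_gap = 20), type = "response")  # predicted next gap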
54,917
Variance of set of subsets
You are looking for a "measure of similarity" between the sets of commenting friends. One of the most popular measures is the Jaccard index: $$ J=\frac{|\bigcap_{i=1}^{n} A_i|}{|\bigcup_{i=1}^{n} A_i|} $$ $A_i$ is the set of friends that commented on post $i$. If most friends that commented are the same, the intersection count will be close to the union count, and the Jaccard index will be close to 1. If only a few friends have common comments, it will be small.
Variance of set of subsets
You are looking for "measure of similarity" between the sets of commenting friends. One of the most popular measures is Jaccard index: $$ J=\frac{|\bigcap_{i=1}^{n} A_i|}{|\bigcup_{i=1}^{n} A_i|} $$ $
Variance of set of subsets You are looking for "measure of similarity" between the sets of commenting friends. One of the most popular measures is Jaccard index: $$ J=\frac{|\bigcap_{i=1}^{n} A_i|}{|\bigcup_{i=1}^{n} A_i|} $$ $A_i$ is a set of friends that commented on post i. If most friends that commented are the same, the intersection count will be close to the union count, and the Jaccard index will be close to 1. If only few friends have common comments, it will be small.
Variance of set of subsets You are looking for "measure of similarity" between the sets of commenting friends. One of the most popular measures is Jaccard index: $$ J=\frac{|\bigcap_{i=1}^{n} A_i|}{|\bigcup_{i=1}^{n} A_i|} $$ $
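A small R helper for the multi-set Jaccard index defined above (the commenter names for the three posts are made up):
R code (sketch):
jaccard_all <- function(sets) {
  length(Reduce(intersect, sets)) / length(Reduce(union, sets))
}
posts <- list(c("ann", "bob", "cat"),
              c("ann", "bob", "dan"),
              c("ann", "bob"))
jaccard_all(posts)   # 2/4 = 0.5: a fairly stable core of commenters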
54,918
Variance of set of subsets
What about this: suppose the set of friends you have is $\mathcal{A}$ and $\mathcal{A}_i\subset\mathcal{A}$ is the set of friends commenting on post $i$, $i=1,\ldots,n$. Then $$ \text{KPI} = \frac{\big|\bigcup_{i=1}^n \mathcal{A}_i \big|}{\sum_{i=1}^n |\mathcal{A}_i|} $$ would be high if always different friends comment and low if always the same friends comment (so that's the opposite of what you propose), while being more or less independent of how many friends you have.
Variance of set of subsets
What about this: suppose the set of friends you have is $\mathcal{A}$ and $\mathcal{A}_i\subset\mathcal{A}$ is the set of friends commenting on post $i$, $i=1,\ldots,n$. Then $$ \text{KPI} = \frac{\bi
Variance of set of subsets What about this: suppose the set of friends you have is $\mathcal{A}$ and $\mathcal{A}_i\subset\mathcal{A}$ is the set of friends commenting on post $i$, $i=1,\ldots,n$. Then $$ \text{KPI} = \frac{\big|\bigcup_{i=1}^n \mathcal{A}_i \big|}{\sum_{i=1}^n |\mathcal{A}_i|} $$ would be high if always different friends comment and low if always the same friends comment (so that's the opposite of what you propose), while being more or less independent of how many friends you have.
Variance of set of subsets What about this: suppose the set of friends you have is $\mathcal{A}$ and $\mathcal{A}_i\subset\mathcal{A}$ is the set of friends commenting on post $i$, $i=1,\ldots,n$. Then $$ \text{KPI} = \frac{\bi
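A short R sketch of the proposed KPI, evaluated on two made-up extremes (always the same three friends versus fifteen different friends across five posts):
R code (sketch):
kpi <- function(sets) length(Reduce(union, sets)) / sum(lengths(sets))
same_friends <- replicate(5, c("ann", "bob", "cat"), simplify = FALSE)
new_friends  <- split(paste0("friend", 1:15), rep(1:5, each = 3))
c(always_same = kpi(same_friends),   # 3 / 15 = 0.2 (low: same friends every time)
  always_new  = kpi(new_friends))    # 15 / 15 = 1  (high: different friends every time)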
54,919
Variance of set of subsets
StijnDeVuyst (and, repeating it, igrinis) provides a good measure and concept for the 'variance among subsets' you were looking for. In relation to your task of finding 'turnover', and not just 'similarity', of friends replying to your posts, I believe that you may wish to extend this concept. Instead of taking all the subsets together you may want to look at adjacent subsets only. The difference is whether you want a relative measure for the friends that consistently reply to all of your posts, or a relative measure for the changes of replying friends (how many come and go) from post to post, or period to period. Instead of creating subsets based on the friends replying to a specific post, you might want to create subsets by taking multiple posts together. You have to determine whether Facebook friends that answer irregularly (for instance answering post number x and x+2 but not post number x+1) should be considered as contributing to turnover. (An example: some countries count short-term leaves as emigration and immigration, so their statistics show high values relative to the population. You may wonder if this is correct. Several media like to report on these high numbers of turnover without correctly placing this nuance, pretending that an extreme number of people are leaving the country.) These statistics then become multidimensional. Turnover is for instance not constant in time. Possibly you would like to provide multiple types of definitions of turnover in your graphs. For instance you could graph time series of 1) the number of leavers, 2) the number of arrivers and 3) the total number of commenting friends. And then consider how you wish to express turnover. For instance, is it about replacement (the number of leavers that get replaced by arrivers) or about changes? The latter, changes, also reflects growth and decrease. As an alternative you could create a measure that determines how long on average your Facebook friends remain inside the commenting subset (residence time reflects turnover rate). You could also break this up into activity level changes (to loosen the concept that a Facebook friend is either active or not active in commenting on your posts). For instance you could determine the number of replies per 10 posts as a level of activity and then determine the change in activity level for all your friends. In this way a friend who changes from 8 replies per 10 posts to 2 replies per 10 posts can be included in your measure of turnover.
Variance of set of subsets
StijnDeVuyst, and repeated by igrinis, provides a good measure and concept for the 'variance among subsets' you were looking for. In relation to your task to find 'turnover' and not just 'similarity'
Variance of set of subsets StijnDeVuyst, and repeated by igrinis, provides a good measure and concept for the 'variance among subsets' you were looking for. In relation to your task to find 'turnover' and not just 'similarity' of friends replying to your posts. I believe that you may wish to extend this concept. Instead of taking all the subsets together you may want to look at adjacent subsets only. The difference is whether you want a relative measure for the friends that consistently reply to all of your posts, or a relative measure for the changes of replying friends (how much come and go) from post to post, or period to period. Instead of creating subsets based on the friends replying to a specific post, you might want to create subsets by taking multiple posts together. You have to determine whether facebook-friends that answer irregularly (for instance answering post number x and x+2 but not post number x+1) should be considered as contributing to turnover. (An example: For some countries that include short-term leaves as emigration and immigration, their statistics show high values, relative to the population. You may wonder if this is correct. Several media like to report on these high numbers of turnover without correctly placing this nuance and pretending that an extreme amount of people are leaving the country). These statistics then become multidimensional. Turnover is for instance not constant in time. Possibly you would like to provide multiple types of definitions of turnover in your graphs. For instance you could graph time series of 1) the number of leavers, 2) the number of arrivers and 3) the total number of commenting friends. And then consider how you wish to express turnover. For instance, is it about replacement (the number of leaves that get replaced by arrival) or about changes? The latter, changes, is also reflecting growth and decrease. As an alternative you could create a measure that determines how long on average your facebook friends remain are inside the commenting subset (residence time reflects turnover rate). You could also brake this up into activity level changes (too loosen the concept that a facebook friend is either active or not active in commenting on your posts). For instance you could determine the number of replies per 10 posts as a level of activity and then determine the change in activity level for all your friends. In such way a friend who is changing from 8 replies in 10 posts to 2 replies to 10 posts can be included in your measure of turnover.
Variance of set of subsets StijnDeVuyst, and repeated by igrinis, provides a good measure and concept for the 'variance among subsets' you were looking for. In relation to your task to find 'turnover' and not just 'similarity'
54,920
Variance of set of subsets
This is simplistic, but I always start there... Suppose you have 100 friends and 5 posts. And suppose each post gets 20 comments. At one extreme, 20 of the same people all commented on each post. At the other extreme, 20 different people commented on each post. In the first case the average number of comments per post for those 20 people was 1.0, and the same metric was zero for the other eighty. In the second case, everyone made .2 comments per post (all 100 of them). My thought is to tunnel in from each person's count of posts commented on divided by the number of posts. Some people will score 1.0 (they comment on every post), and some will score lower (they comment only once or not at all). If plotted from highest score to lowest, you will get something akin to a 'scree' plot. Wouldn't that possibly get you pretty far?
Variance of set of subsets
This is simplistic, but I always start there... Suppose you have 100 friends and 5 posts. And suppose each post gets 20 comments. At one extreme, 20 of the same people all commented on each post. At
Variance of set of subsets This is simplistic, but I always start there... Suppose you have 100 friends and 5 posts. And suppose each post gets 20 comments. At one extreme, 20 of the same people all commented on each post. At the other extreme, 20 different people commented on each post. In the first case the average number of comments per post for those 20 people was 1.0, and the same metric was zero for the other eighty. In the second case, everyone made .2 comments per post all 100). My thought is to tunnel in from the count of the number of people who commented divided by the number of posts. Some people will score 1.0 (they comment on every post), and some will score lower (they only comment once or none). If plotted from highest score to lowest, you will get something akin to a 'scree' plot. Wouldn't that possibly get you pretty far?
Variance of set of subsets This is simplistic, but I always start there... Suppose you have 100 friends and 5 posts. And suppose each post gets 20 comments. At one extreme, 20 of the same people all commented on each post. At
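A minimal R version of the per-friend score described above, using a made-up comment log; sorting the scores from highest to lowest gives the 'scree'-style profile:
R code (sketch):
n_posts <- 5
log_df  <- data.frame(friend = c("ann","ann","ann","bob","bob","cat","dan","ann","bob"),
                      post   = c(  1,    2,    3,    1,    2,    4,    5,    4,    5))
rate <- sort(tapply(log_df$post, log_df$friend,
                    function(p) length(unique(p)) / n_posts), decreasing = TRUE)
rate             # posts commented on / total posts, per friend
# barplot(rate) gives the scree-like view described above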
54,921
MLE for Poisson distribution is undefined with all-zero observations
The likelihood function of the Poisson given observations $x_1, x_2, \ldots, x_n$ is $$ l(\lambda; x) = \prod_i e^{-\lambda}\frac{\lambda^{x_i}}{x_i!} = \frac{e^{-n\lambda}}{x_1!x_2!\cdots x_n!}\lambda^{x_1 + x_2 + \cdots + x_n}$$ If $x_1 = x_2 = \cdots = x_n = 0$ then this becomes $$ l(\lambda; x) = e^{- n \lambda}, $$ which is a decreasing function of $\lambda$ and hence is maximized at $\lambda = 0$. So the MLE does exist in this case: it is $\lambda = 0$.
MLE for Poisson distribution is undefined with all-zero observations
The likelihood function of the Poisson given observations $x_1, x_2, \ldots, x_n$ is $$ l(\lambda; x) = \prod_i e^{-\lambda}\frac{\lambda^{x_i}}{x_i!} = \frac{e^{-n\lambda}}{x_1!x_2!\cdots x_n!}\lambd
MLE for Poisson distribution is undefined with all-zero observations The likelihood function of the Poisson given observations $x_1, x_2, \ldots, x_n$ is $$ l(\lambda; x) = \prod_i e^{-\lambda}\frac{\lambda^{x_i}}{x_i!} = \frac{e^{-n\lambda}}{x_1!x_2!\cdots x_n!}\lambda^{x_1 + x_2 + \cdots + x_n}$$ If $x_1 = x_2 = \cdots = x_n = 0$ then this becomes $$ l(\lambda; x) = e^{- n \lambda} $$ Which is maximized when $\lambda = 0$. So the MLE does exist in this case, it is $\lambda = 0$.
MLE for Poisson distribution is undefined with all-zero observations The likelihood function of the Poisson given observations $x_1, x_2, \ldots, x_n$ is $$ l(\lambda; x) = \prod_i e^{-\lambda}\frac{\lambda^{x_i}}{x_i!} = \frac{e^{-n\lambda}}{x_1!x_2!\cdots x_n!}\lambd
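A quick R check of the calculation above: for all-zero data the log-likelihood is $-n\lambda$ (here there is no extra constant), which is decreasing in $\lambda$, so a numerical optimiser over $[0, 10]$ ends up at (essentially) zero:
R code (sketch):
x <- rep(0, 10)
loglik <- function(lambda) sum(dpois(x, lambda, log = TRUE))
lambda_grid <- seq(0, 2, by = 0.01)
plot(lambda_grid, sapply(lambda_grid, loglik), type = "l",
     xlab = expression(lambda), ylab = "log-likelihood")
optimize(loglik, interval = c(0, 10), maximum = TRUE)$maximum  # approx 0, up to optimiser tolerance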
54,922
MLE for Poisson distribution is undefined with all-zero observations
I still hold the opinion that the MLE is undefined when all the observations are zero. However, this does not affect the expectation or variance of this MLE, since the average of all-zero observations is zero, which has no effect on these computations, whether the MLE is defined or undefined at this point.
MLE for Poisson distribution is undefined with all-zero observations
I still keep the opinion that MLE is undefined when all the observations are zero. However, this does not affect on the expectation or variance of this MLE, since the average of all-zero observations
MLE for Poisson distribution is undefined with all-zero observations I still keep the opinion that MLE is undefined when all the observations are zero. However, this does not affect on the expectation or variance of this MLE, since the average of all-zero observations is zero, which does not have any effect on these computation, whether MLE is defined or undefined at this point.
MLE for Poisson distribution is undefined with all-zero observations I still keep the opinion that MLE is undefined when all the observations are zero. However, this does not affect on the expectation or variance of this MLE, since the average of all-zero observations
54,923
Can t test be used for comparing groups with a sample size of 3?
The permutation test will have insufficient power. (There just aren't enough different ways to split six samples into two groups of three.) But If the assumptions of the t-test hold, then its results are valid. Many thoughtful readers will question whether such a situation could actually arise. Let me share a real story. It concerns cleaning up lead contamination in a field: for years, a farmer accepted "recycled" batteries and dumped them behind his house. Eventually the environmental regulators caught up with him. They caused the "responsible party" to go through three phases of cleanup work: (1) sample the soils to estimate the amount and extent of lead contamination; (2) remove the soils in thin layers, taking samples in the process, until it was clear that clean soils had been reached; (3) independently sample all remaining soil and formally test whether the mean lead concentration is below the environmental standard. The procedure for (3) was designed and approved before the cleanup began. It called for random sampling of all soils exposed during the excavation, analysis of the samples by a certified laboratory, and applying a Student t test. Equivalently, to demonstrate success, a suitable upper confidence limit (UCL) of the mean had to be less than the standard. It did not specify how many samples to take: that would be up to the responsible party to decide. Almost a thousand samples were obtained and analyzed during the first two phases. Although these allowed the (univariate and spatial) distributions of the lead concentrations to be characterized reliably, they of course did not represent the remaining concentrations. However, they did suggest the shape of the (univariate) distribution of the remaining concentrations. Physical and chemical theory, soils science, and experience with remediating lead in soils elsewhere all provided support for this statistical characterization. The cleanup was so thorough and successful that the likely mean concentration was negligible--more than an order of magnitude less than the standard. Power analyses, based on pessimistic (high) estimates of the standard deviation, all suggested that a random sample of only two or three would be needed. There were many potential complications: for instance, any areas that might have been overlooked during the excavation could introduce large outlying values. To detect these, a large number of samples were obtained at random locations, and then composited in groups to produce just five physical samples for laboratory testing. All the values were low. As expected, a t-test of any two of those samples would still have demonstrated attainment. Sprinkled within this brief case study are examples of various ways in which we might be assured that a t-test is appropriate even with tiny samples: experience; theory; preliminary related sampling; making pessimistic assumptions; and sample compositing all played a role--and any one of them might have sufficed to justify the t-test. Incidentally, there are versions of the t-test that work with just a single observation. They are based on obtaining independent estimates of the variance or, lacking that, mathematical theory. Could this ever make sense in reality? The classic situation of compositing the blood from hundreds of soldiers to test for venereal disease provides one possible application.
Can t test be used for comparing groups with a sample size of 3?
The permutation test will have insufficient power. (There just aren't enough different ways to split six samples into two groups of three.) But If the assumptions of the t-test hold, then its result
Can t test be used for comparing groups with a sample size of 3? The permutation test will have insufficient power. (There just aren't enough different ways to split six samples into two groups of three.) But If the assumptions of the t-test hold, then its results are valid. Many thoughtful readers will question whether such a situation could actually arise. Let me share a real story. It concerns cleaning up lead contamination in a field: for years, a farmer accepted "recycled" batteries and dumped them behind his house. Eventually the environmental regulators caught up with him. They caused the "responsible party" to go through three phases of cleanup work: (1) sample the soils to estimate the amount and extent of lead contamination; (2) remove the soils in thin layers, taking samples in the process, until it was clear that clean soils had been reached; (3) independently sample all remaining soil and formally test whether the mean lead concentration is below the environmental standard. The procedure for (3) was designed and approved before the cleanup began. It called for random sampling of all soils exposed during the excavation, analysis of the samples by a certified laboratory, and applying a Student t test. Equivalently, to demonstrate success, a suitable upper confidence limit (UCL) of the mean had to be less than the standard. It did not specify how many samples to take: that would be up to the responsible party to decide. Almost a thousand samples were obtained and analyzed during the first two phases. Although these allowed the (univariate and spatial) distributions of the lead concentrations to be characterized reliably, they of course did not represent the remaining concentrations. However, they did suggest the shape of the (univariate) distribution of the remaining concentrations. Physical and chemical theory, soils science, and experience with remediating lead in soils elsewhere all provided support for this statistical characterization. The cleanup was so thorough and successful that the likely mean concentration was negligible--more than an order of magnitude less than the standard. Power analyses, based on pessimistic (high) estimates of the standard deviation, all suggested that a random sample of only two or three would be needed. There were many potential complications: for instance, any areas that might have been overlooked during the excavation could introduce large outlying values. To detect these, a large number of samples were obtained at random locations, and then composited in groups to produce just five physical samples for laboratory testing. All the values were low. As expected, a t-test of any two of those samples would still have demonstrated attainment. Sprinkled within this brief case study are examples of various ways in which we might be assured that a t-test is appropriate even with tiny samples: experience; theory; preliminary related sampling; making pessimistic assumptions; and sample compositing all played a role--and any one of them might have sufficed to justify the t-test. Incidentally, there are versions of the t-test that work with just a single observation. They are based on obtaining independent estimates of the variance or, lacking that, mathematical theory. Could this ever make sense in reality? The classic situation of compositing the blood from hundreds of soldiers to test for venereal disease provides one possible application.
Can t test be used for comparing groups with a sample size of 3? The permutation test will have insufficient power. (There just aren't enough different ways to split six samples into two groups of three.) But If the assumptions of the t-test hold, then its result
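To make the 'insufficient power' remark above concrete, a tiny R enumeration (the six data values are made up): with two groups of three there are only choose(6, 3) = 20 relabellings, so a two-sided permutation p-value can never drop below 2/20 = 0.1.
R code (sketch):
choose(6, 3)                             # 20 distinct group assignments
x <- c(1.1, 2.3, 1.9, 5.2, 6.1, 4.8)     # made-up data; last three form group 2
obs  <- mean(x[4:6]) - mean(x[1:3])      # observed difference in means
perm <- apply(combn(6, 3), 2, function(i) mean(x[-i]) - mean(x[i]))
mean(abs(perm) >= abs(obs))              # two-sided permutation p-value (here 0.1)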
54,924
How is the exact permutation test procedure carried out: iterating over permutations or using combinations of one group?
Whether or not you take the combinations or permutations doesn't actually affect your results, as the number of permutations of $n_{A}$ specific objects in $A$ and $n_{B}$ specific objects in $B$ is the same for all combinations of $x_{1} ... x_{n_{A}}$ and $x_{n_{A+1}} ... x_{n_{A} + n_{B}}$ since the size of each set doesn't change. That is, for each and any given combination, you will get $n_{A}! \times n_{B}!$ times as many permutations as combinations regardless of the values inside each set. And as the value of the result (the difference between group means) does not change between permutations of the same combination, the frequency of each specific result will be scaled equally when taking the permutation. So when calculating the quantiles practically it makes no difference using combinations or permutations. In fact you empirically proved it for the case of $n_{A} = 1$ and $n_{B} = 2$: the frequency of each result, $D = 0,2,4$, is just scaled by $2$ when taking permutations, resulting in the quantile values being the same. "Let's assume the standard scenario where samples are independent, and we want to test if two samples come from the same distribution (null hypothesis) based on the difference in sample means." To be technical, if you want to test this specific hypothesis I think it is more strictly "correct" to take the complete set of permutations (not combinations) of each set, as the distributional assumption under the null that group labels don't matter is essentially allowing each $x_{i.}$ to take every value in the presence of every other $x_{j \neq i.}$, which combinations do not allow for. But again, the results of the quantiles for the empirical distribution are the same since the frequency of each result is just scaled by the same amount $n_{A}! \times n_{B}!$, so practically it doesn't matter.
How is the exact permutation test procedure carried out: iterating over permutations or using combin
Whether or not you take the combinations or permutations doesn't actually affect your results, as the number of permutations of $n_{A}$ specific objects in $A$ and $n_{B}$ specific objects in $B$ is t
How is the exact permutation test procedure carried out: iterating over permutations or using combinations of one group? Whether or not you take the combinations or permutations doesn't actually affect your results, as the number of permutations of $n_{A}$ specific objects in $A$ and $n_{B}$ specific objects in $B$ is the same for all combinations of $x_{1} ... x_{n_{A}}$ and $x_{n_{A+1}} ... x_{n_{A} + n_{B}}$ since the size of each set doesn't change. That is, for each and any given combination, you will get $n_{A}! \times n_{B}!$ times as many permutations than combinations regardless of the values inside each set. And as the value of the result (the difference between group means) does not change between permutations of the same combination the frequency of each specific result will be scaled equally when taking the permutation. So when calculating the quantiles practically it makes no difference using combinations or permutations. In fact you empirically proved it for the case of $n_{A} = 1$ and $n_{b} = 2$, the frequency of each result, $D = 0,2,4$, is just scaled by $2$ when taking permutations resulting in the quantile values being the same. Let's assume the standard scenario where samples are independent, and we want to test if two samples come from the same distribution (null hypothesis) based on the difference in sample means To be technical if you want to test this specific hypothesis I think it is more strictly "correct" to take the complete set of permutations (not combinations) of each set, as the distribution assumption under the null that group labels don't matter, is essentially allowing each $x_{i.}$ to take every value in the presence of every other $x_{j \neq i.}$, which combinations do not allow for. But again, the results of the quantiles for the empirical distribution are the same since the frequency of each result is just scaled by the same amount $n_{A}! \times n_{B}!$, so practically it doesn't matter.
How is the exact permutation test procedure carried out: iterating over permutations or using combin Whether or not you take the combinations or permutations doesn't actually affect your results, as the number of permutations of $n_{A}$ specific objects in $A$ and $n_{B}$ specific objects in $B$ is t
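A tiny R check of the scaling argument above, with $n_A = 1$, $n_B = 2$ and made-up data $x = (1, 3, 5)$ (so the D values differ from the 0, 2, 4 of the original example): each combination corresponds to exactly $n_A! \times n_B! = 2$ permutations, and the relative frequencies of D, hence the quantiles, come out the same either way.
R code (sketch):
x <- c(1, 3, 5)
D_comb <- sapply(1:3, function(i) abs(x[i] - mean(x[-i])))             # one value per combination
perms  <- rbind(c(1,2,3), c(1,3,2), c(2,1,3), c(2,3,1), c(3,1,2), c(3,2,1))
D_perm <- apply(perms, 1, function(p) abs(x[p[1]] - mean(x[p[2:3]])))  # one value per permutation
table(D_comb) / length(D_comb)
table(D_perm) / length(D_perm)   # identical relative frequencies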
54,925
Bivariate random vector uniform distribution
The definition of a "uniform distribution" is that the density function is constant for all $x,y$ within the support region. So one must have $$f_{X,Y}(x,y) = \frac{1}{A}$$ where $A$ is the area of either the square or the circle. The same formula will hold for the density function of a "uniform distribution" on any geometric region. Note though that this idea of uniformity applies only to the Cartesian coordinates ($x,y$). If you re-parametrized the circle in terms of radius and angle ($r,\theta$), for example, then the radial distance would not be uniformly distributed. The angle $\theta$ would be uniformly distributed on $(0,2\pi)$ but the radial distance $r$ would have to follow a triangular distribution (density proportional to $r$) for the bivariate distribution to be uniform in terms of $x$ and $y$.
Bivariate random vector uniform distribution
The definition of a "uniform distribution" is that the density function is constant for all $x,y$ within the support region. So one must have $$f_{X,Y}(x,y) = \frac{1}{A}$$ where $A$ is the area of ei
Bivariate random vector uniform distribution The definition of a "uniform distribution" is that the density function is constant for all $x,y$ within the support region. So one must have $$f_{X,Y}(x,y) = \frac{1}{A}$$ where $A$ is the area of either the square or the circle. The same formula will hold for the density function of a "uniform distribution" on any geometric region. Note though that this idea of uniform-ness applies only to the Cartesian coordinates ($x,y$). If you re-parametrized the circle in terms of radius and angle ($r,\theta$), for example, then the radial distance would not be uniformly distributed. The angle $\theta$ would be uniformly distributed on $(0,2\pi)$ but the radial distance $r$ would have to follow a triangular distribution for the bivariate distribution to be uniform in terms of $x$ and $y$.
Bivariate random vector uniform distribution The definition of a "uniform distribution" is that the density function is constant for all $x,y$ within the support region. So one must have $$f_{X,Y}(x,y) = \frac{1}{A}$$ where $A$ is the area of ei
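A short R sketch of the radial point made above: sampling uniformly on the unit disk by rejection from the enclosing square, the radius has density $2r$ on $(0,1)$ (the triangular shape) even though $(x, y)$ themselves are uniform.
R code (sketch):
set.seed(1)
xy <- matrix(runif(2e5, -1, 1), ncol = 2)
xy <- xy[rowSums(xy^2) <= 1, ]            # keep points inside the unit circle
r  <- sqrt(rowSums(xy^2))
hist(r, breaks = 40, freq = FALSE, main = "Radius of uniform points on a disk")
curve(2 * x, add = TRUE, lwd = 2)         # theoretical density f(r) = 2r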
54,926
Bivariate random vector uniform distribution
The volume under $f_{X,Y}(x,y)$ over the $(x, y)$ support must be equal to 1, since $f_{X,Y}(x,y)$ is a probability density function. The $(x, y)$ support defines an area $A$, and this area can be any area. Since the volume equals $V = A \cdot f_{X,Y}(x,y) = 1$, it follows that $f_{X,Y}(x,y) = 1/A$, which is constant over all of the $(x, y)$ support.
Bivariate random vector uniform distribution
The volume under fxy(x, y) must be equal to 1, over the x, y support, since fxy(x, y) is a probability density function. The x, y support defines an area A. This area can be any area. Since the volume
Bivariate random vector uniform distribution The volume under fxy(x, y) must be equal to 1, over the x, y support, since fxy(x, y) is a probability density function. The x, y support defines an area A. This area can be any area. Since the volume equals V = A x fxy(x, y) = 1, then fxy(x, y) = 1/A. Which is constant over all the x, y support.
Bivariate random vector uniform distribution The volume under fxy(x, y) must be equal to 1, over the x, y support, since fxy(x, y) is a probability density function. The x, y support defines an area A. This area can be any area. Since the volume
54,927
Why bother with Benjamini-Hochberg correction?
This is a good question, but you have several concepts confused. Firstly, to answer your broader question, yes splitting p-values and performing correction on them separately is an often-performed and well known approach when you have prior information about the system you're studying. To see more examples of this and a proof for independent tests, see Lei Sun et al. $^1$ The idea behind this approach is to maintain the same level of FDR control but increase the number of true positives that you find, so everyone wins! The answer to your actual question (why not take the trivial case where every test is a unique strata) lies in the behavior of the estimators. As detailed in the Sun paper, to use a fixed FDR framework (i.e. keep the same FDR and reject as many tests as possible, leading to an increased number of true positives being found), one must estimate $\pi_0$ with $\hat{\pi}_0$ which has been a contentious issue in the field for many years now. $\pi_0$ is the proportion of true null hypotheses, while $\hat{\pi}_0$ is its estimator. You can read this yourself in the Methods section under Estimating $\pi_0$ and FDR and in the Discussion section if you want. The bias of estimating $\pi_0$ does not increase with the number of strata, but the variance does; so as we increase the number of strata, we have a worse and worse estimator of $\pi_0$, which means we have a worse and worse estimator of $\alpha^{(k)}$, or the p-value needed to reject the null in order to maintain $\gamma$ overall FDR control. If I were to have to guess, and I have no proof of this other than intuition, I would say that the expected value of $\alpha^{k}$ for all $k$ when the number of strata is equal to $k$ would simply be $\gamma$, which negates the purpose of doing the exercise. I would like to know where you got the notion that FDR is overly cautious; that is very much not the impression that I have. Indeed many people have found that the empirical FDR closely matches the control rate except in circumstances of special kinds of dependencies (see a discussion of that here). What you found in your simulation was not FDR but the false positive rate. You had no true positive associations so your FDR was always 1 by definition, as FDR is the expected value of false positives over all positives. You created a set up where data were randomly generated and you found 1) the P value and 2) the BH corrected P value (the $q$-value is actually a different concept and unique to John Storey's implementation $^2$). You found that 5% of uncorrected P values rejected the null when all were false, which is the meaning of setting $\alpha$ to 0.05 which you did. You also found that 0% of FDR corrected P values rejected the null, which is entirely to be expected as you had no true positives to identify and your results were all within the realm of chance. So really, you found that FDR was doing exactly what it was supposed to do! [1] Sun L, Craiu RV, Paterson AD and Bull SB (2006). Stratified false discovery control for large-scale hypothesis testing with application to genome-wide association studies. Genetic Epidemiology 30:519-530. [2] https://projecteuclid.org/euclid.aos/1074290335
Why bother with Benjamini-Hochberg correction?
This is a good question, but you have several concepts confused. Firstly, to answer your broader question, yes splitting p-values and performing correction on them separately is an often-performed an
Why bother with Benjamini-Hochberg correction? This is a good question, but you have several concepts confused. Firstly, to answer your broader question, yes splitting p-values and performing correction on them separately is an often-performed and well known approach when you have prior information about the system you're studying. To see more examples of this and a proof for independent tests, see Lei Sun et al. $^1$ The idea behind this approach is to maintain the same level of FDR control but increase the number of true positives that you find, so everyone wins! The answer to your actual question (why not take the trivial case where every test is a unique strata) lies in the behavior of the estimators. As detailed in the Sun paper, to use a fixed FDR framework (i.e. keep the same FDR and reject as many tests as possible, leading to an increased number of true positives being found), one must estimate $\pi_0$ with $\hat{\pi}_0$ which has been a contentious issue in the field for many years now. $\pi_0$ is the proportion of true null hypotheses, while $\hat{\pi}_0$ is its estimator. You can read this yourself in the Methods section under Estimating $\pi_0$ and FDR and in the Discussion section if you want. The bias of estimating $\pi_0$ does not increase with the number of strata, but the variance does; so as we increase the number of strata, we have a worse and worse estimator of $\pi_0$, which means we have a worse and worse estimator of $\alpha^{(k)}$, or the p-value needed to reject the null in order to maintain $\gamma$ overall FDR control. If I were to have to guess, and I have no proof of this other than intuition, I would say that the expected value of $\alpha^{k}$ for all $k$ when the number of strata is equal to $k$ would simply be $\gamma$, which negates the purpose of doing the exercise. I would like to know where you got the notion that FDR is overly cautious; that is very much not the impression that I have. Indeed many people have found that the empirical FDR closely matches the control rate except in circumstances of special kinds of dependencies (see a discussion of that here). What you found in your simulation was not FDR but the false positive rate. You had no true positive associations so your FDR was always 1 by definition, as FDR is the expected value of false positives over all positives. You created a set up where data were randomly generated and you found 1) the P value and 2) the BH corrected P value (the $q$-value is actually a different concept and unique to John Storey's implementation $^2$). You found that 5% of uncorrected P values rejected the null when all were false, which is the meaning of setting $\alpha$ to 0.05 which you did. You also found that 0% of FDR corrected P values rejected the null, which is entirely to be expected as you had no true positives to identify and your results were all within the realm of chance. So really, you found that FDR was doing exactly what it was supposed to do! [1] Sun L, Craiu RV, Paterson AD and Bull SB (2006). Stratified false discovery control for large-scale hypothesis testing with application to genome-wide association studies. Genetic Epidemiology 30:519-530. [2] https://projecteuclid.org/euclid.aos/1074290335
Why bother with Benjamini-Hochberg correction? This is a good question, but you have several concepts confused. Firstly, to answer your broader question, yes splitting p-values and performing correction on them separately is an often-performed an
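A small R simulation in the spirit of the answer above, now with a mixture of true and false null hypotheses (the counts and effect size are arbitrary): thresholding raw p-values at 0.05 lets through many false positives, while BH keeps the realised false discovery proportion near the nominal 5%.
R code (sketch):
set.seed(1)
m <- 10000; m1 <- 1000                        # 1000 tests carry a real effect
z <- c(rnorm(m1, mean = 3), rnorm(m - m1))    # test statistics
p <- 2 * pnorm(-abs(z))
is_null <- c(rep(FALSE, m1), rep(TRUE, m - m1))
fdp <- function(reject) sum(reject & is_null) / max(1, sum(reject))
c(raw_FDP = fdp(p < 0.05),
  BH_FDP  = fdp(p.adjust(p, method = "BH") < 0.05))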
54,928
Pairwise independence of sufficient statistics in exponential families
The problem with this "paradox" comes from absorbing $h(x)$ into the dominating measure and then forgetting about the dominating measure. The most common definition of a probability density is associated with a measure that is a product measure, like the Lebesgue measure. In this case, it is straightforward to prove that $p(x,y)=p_X(x)p_Y(y)$ implies that $X$ and $Y$ are independent: \begin{align*}\mathbb{P}(X\in A, Y\in B)&=\int_{A\times B} p_X(x)p_Y(y)\text{d}\lambda(x,y)\\&=\int_{A}\int_{B} p_X(x)p_Y(y)\text{d}\lambda(x)\text{d}\lambda(y)\\&=\mathbb{P}(X\in A)\mathbb{P}(Y\in B)\end{align*} This result does not, however, extend to arbitrary dominating measures. The fact that a density (i.e. Radon-Nikodym derivative) against an arbitrary measure $\lambda$ factorises as $p(x,y)=p_X(x)p_Y(y)$ (with $p_X$ and $p_Y$ integrable against the projected measures $\lambda_x$ and $\lambda_y$) does not make the components $X$ and $Y$ independent. It depends on the measure $\lambda$. "Independence as a product" is achieved in terms of probabilities or measures, not in terms of densities. A simple [counter-]example is given by the switch from the Lebesgue measure $\lambda_0$ to the new measure $$\exp\{\varrho xy\}\text{d}\lambda_0(x,y).$$ The distribution with density $$p(x,y)=\exp\left\{-\frac{x^2}{2}-\frac{y^2}{2}\right\}$$ against this measure is the Gaussian distribution with non-zero covariance $$\frac{\varrho}{1-\varrho^2}.$$ An even simpler [counter-]example [or re-expression of the above] is to switch from the Lebesgue measure $\lambda_0$ to the new (Gaussian) measure $$\frac{(1-\varrho^2)^{1/2}}{2\pi}\exp\left\{-\frac{x^2}{2}-\frac{y^2}{2}+\varrho xy\right\}\text{d}\lambda_0(x,y)$$ and to consider the constant density $p(x,y)=1$ against this alternative dominating measure. We are again facing a distribution that is Gaussian with non-zero covariance but which has a product density.
Pairwise independence of sufficient statistics in exponential families
The problem with this "paradox" comes from absorbing $h(x)$ into the dominating measure and then forgetting about the dominating measure. The most common definition of a probability density is asso
Pairwise independence of sufficient statistics in exponential families The problem with this "paradox" comes from absorbing $h(x)$ into the dominating measure and then forgetting about the dominating measure. The most common definition of a probability density is associated with a measure that is a product measure, like the Lebesgue measure. In this case, it is straightforward to prove that $p(x,y)=p_X(x)p_Y(y)$ implies that $X$ and $Y$ are independent: \begin{align*}\mathbb{P}(X\in A, Y\in B)&=\int_{A\times B} p_X(x)p_Y(y)\text{d}\lambda(x,y)\\&=\int_{A}\int_{B} p_X(x)p_Y(y)\text{d}\lambda(x)\text{d}\lambda(y)\\&=\mathbb{P}(X\in A)\mathbb{P}(Y\in B)\end{align*} This result does not, however, extend to arbitrary dominating measures. The fact that a density (i.e. Radon-Nikodym derivative) against an arbitrary measure $\lambda$ factorises as $p(x,y)=p_X(x)p_Y(y)$ (with $p_x$ and $p_y$ integrable against the projected measures $\lambda_x$ and $\lambda_y$) does not make the components $X$ and $Y$ independent. It depends on the measure $\lambda$. "Independence as a product" is achieved in terms of probabilities or measures, not in terms of densities. A simple [counter-]example is made of the switch from the Lebesgue measure $\lambda_0$ to the new measure $$\exp\{\varrho xy\}\text{d}\lambda_0(x,y)$$ The distribution with density $$p(x,y)=\exp\left\{-\frac{x^2}{2}-\frac{y^2}{2}\right\}$$ against this measure is the Gaussian distribution with non-zero covariance $$-\frac{\varrho}{1-\varrho^2}.$$ An even simpler [counter-]example [or re-expression of the above] is to switch from the Lebesgue measure $\lambda_0$ to the new (Gaussian) measure $$\frac{(1-\varrho^2)^{1/2}}{2\pi}\exp\left\{-\frac{x^2}{2}-\frac{y^2}{2}+\varrho xy\right\}\text{d}\lambda_0(x,y)$$ and to consider the constant density $p(x,y)=1$ against this alternative dominating measure. We are again facing a distribution that is Gaussian with non-zero covariance but which has a product density.
Pairwise independence of sufficient statistics in exponential families The problem with this "paradox" comes from absorbing $h(x)$ into the dominating measure and then forgetting about the dominating measure. The most common definition of a probability density is asso
54,929
Pairwise independence of sufficient statistics in exponential families
I think it is not true. Consider a sample of $n$ observations from a normally distributed random variable $N(\mu, \sigma^2)$ with unknown mean and variance, which is from an exponential family. $\left(\sum x_i, \sum x_i^2\right)$ is a sufficient statistic, yet $\sum x_i$ and $\sum x_i^2$ are not independent.
Pairwise independence of sufficient statistics in exponential families
I think it is not true Consider a sample of $n$ observations from a normal distributed random variable $N(\mu, \sigma^2)$ with unknown mean and variance, which is from an exponential family $\left (\
Pairwise independence of sufficient statistics in exponential families I think it is not true Consider a sample of $n$ observations from a normal distributed random variable $N(\mu, \sigma^2)$ with unknown mean and variance, which is from an exponential family $\left (\sum x_i, \sum x_i^2\right)$ is a sufficient statistic $\sum x_i$ and $\sum x_i^2$ are not independent
Pairwise independence of sufficient statistics in exponential families I think it is not true Consider a sample of $n$ observations from a normal distributed random variable $N(\mu, \sigma^2)$ with unknown mean and variance, which is from an exponential family $\left (\
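A quick empirical check in R of the counterexample above (the choice $\mu = 1$, $\sigma = 2$ is arbitrary): the two components of the sufficient statistic are clearly correlated, hence not independent. Note that for $\mu = 0$ the correlation vanishes even though the pair is still dependent.
R code (sketch):
set.seed(1)
stats <- replicate(5000, { x <- rnorm(10, mean = 1, sd = 2); c(sum(x), sum(x^2)) })
cor(stats[1, ], stats[2, ])   # clearly non-zero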
54,930
Is graduate level probability theory (Durett) used often in ML, DL research?
As a statistics PhD student studying Bayesian deep learning and Gaussian processes, I have found it useful to be familiar with probability. I do not directly use the results for now because I am working on applied problems, but a lot of the theoretical work I look at is based on nonparametric techniques such as Gaussian or Dirichlet processes and those techniques are shown to have some properties using probability and functional analysis. Look for textbooks by Aad van der Vaart if you'd like an (extreme) example. In other words, knowing probability theory opens up a lot of statistical literature for you to peruse. If you care more about classification rates and less about uncertainty quantification, it might not be worth your time to get into the statistics. But then, if you are doing a logistic regression and the prediction of cancer for subject X is "1" (vs "0"), you might want to know how much confidence to place in that classification.
Is graduate level probability theory (Durett) used often in ML, DL research?
As a statistics PhD student studying Bayesian deep learning and Gaussian processes, I have found it useful to be familiar with probability. I do not directly use the results for now because I am work
Is graduate level probability theory (Durett) used often in ML, DL research? As a statistics PhD student studying Bayesian deep learning and Gaussian processes, I have found it useful to be familiar with probability. I do not directly use the results for now because I am working on applied problems, but a lot of the theoretical work I look at is based on nonparametric techniques such as Gaussian or Dirichlet processes and those techniques are shown to have some properties using probability and functional analysis. Look for textbooks by Aad van der Vaart if you'd like an (extreme) example. In other words, knowing probability theory opens up a lot of statistical literature for you to peruse. If you care more about classification rates and less about uncertainty quantification, it might not be worth your time to get into the statistics. But then, if you are doing a logistic regression and the prediction of cancer for subject X is "1" (vs "0"), you might want to know how much confidence to place in that classification.
Is graduate level probability theory (Durett) used often in ML, DL research? As a statistics PhD student studying Bayesian deep learning and Gaussian processes, I have found it useful to be familiar with probability. I do not directly use the results for now because I am work
54,931
Is graduate level probability theory (Durett) used often in ML, DL research?
"Should I take a graduate level statistics course in Probability Theory that follows Durrett's textbook?" NO. I have seen exactly zero instances where the sort of measure-theoretic background that is explored in depth in books like Durrett and Klenke is actually used in ML. The theory of probability and random variables can get very far, and cover all the concepts you actually see in an ML paper, without that. DeGroot & Schervish or Pishro-Nik (read all the way through) are enough. The measure-theoretic part comes in when you start needing to construct stochastic integrals or things like that, but that is not likely in 99% of ML.
Is graduate level probability theory (Durett) used often in ML, DL research?
Should I take a graduate level statistics course in Probability Theory that follows Durrett's textbook NO. I have seen exactly zero instances where the sort of measure-theoretic background that is
Is graduate level probability theory (Durett) used often in ML, DL research? Should I take a graduate level statistics course in Probability Theory that follows Durrett's textbook NO. I have seen exactly zero instances where the sort of measure-theoretic background that is explored in depth in books like Durrett and Klenke is actually used in ML. The theory of probability and random variables can get very far, and cover all the concepts you actually see in an ML paper, without that. DeGroot & Schervish or Pishro-Nik (read all the way through) are enough. The measure-theoretic part comes in when you start needing to construct stochastic integrals or things like that but not likely in 99% of ML.
Is graduate level probability theory (Durett) used often in ML, DL research? Should I take a graduate level statistics course in Probability Theory that follows Durrett's textbook NO. I have seen exactly zero instances where the sort of measure-theoretic background that is
54,932
Plotting non-parametric (E)CDEF confidence envelopes for comparison
You can use the Kolmogorov-Smirnov test, and invert it to get a confidence band. Let $X_1, X_2, \dotsc, X_n$ be iid observations from some continuous distribution function $F$. Then the KS test statistic is given by $$ D_n = \sup_x \mid \hat{F}_n(x)-F_0(x) \mid = \max_{i=1,2,\dotsc,n} \max \{\frac{i}{n}-F_0(x_{(i)}),F_0(x_{(i)})-\frac{i-1}{n} \} $$ where $x_{(1)} \le \dotso \le x_{(n)}$ are the order statistics. What is remarkable is that the distribution of $D_n$ does not depend on the assumed null distribution $F_0$ (which must be prespecified). Now we can invert this hypothesis test to get a confidence band. We can calculate $$ P_{F_0}(D_n \le d) = P_{F_0}( \sup_x \mid \hat{F}_n(x)-F_0(x) \mid \le d) = \\ P_{F_0}( \hat{F}_n(x)-d \le F_0(x) \le \hat{F}_n(x)+d, \quad \text{for all $x$}) $$ This calculation shows that this is indeed a simultaneous confidence band, valid simultaneously for all $x$. An implementation of this can be found in the R package (on CRAN) sfsmisc, in the function ecdf.ksCI. (Disclaimer: That was originally my code). An example: R code: library(sfsmisc) set.seed(7*11*13) ecdf.ksCI( rchisq(50,3), main="ECDF, sample from chisq with 3 df")
Plotting non-parametric (E)CDEF confidence envelopes for comparison
You can use the Kolmogorov-Smirnov test, and invert it to get a confidence band. Let $X_1, X_2, \dotsc, X_n$ be iid observations from some continuous distribution function $F$. Then the KS test stati
Plotting non-parametric (E)CDEF confidence envelopes for comparison You can use the Kolmogorov-Smirnov test, and invert it to get a confidence band. Let $X_1, X_2, \dotsc, X_n$ be iid observations from some continuous distribution function $F$. Then the KS test statistic is given by $$ D_n = \sup_x \mid \hat{F}_ n(x)-F_0(x) \mid = \max_{i=1,2,\dotsc,n} \max \{\frac{i}{n}-F_0(x_{(i)}),F_0(x_{(i)})-\frac{i-1}{n} \} $$ where $x_{(1)} \le \dotso \le x_{(n)}$ is the order statistics. What is remarkable is that the distribution of $D_n$ do not depend on the assumed null distribution $F_0$ (which must be prespecified). Now we can invert this hypothesis test to get a confidence band. WE can calculate $$ P_{F_0}(D_n \le d) = P_{F_0}( \sup_x \mid \hat{F}_ n(x)-F_0(x) \mid \le d) = \\ P_{F_0}( \hat{F}_n(x)-d \le F_0(x) \le \hat{F}_n(x)+d, \quad \text{for all $x$}) $$ this calculation shows that this is indeed a simultaneous confidence band, valid simultaneously for all $x$. An implementation of this can be found in the R package (on CRAN) sfsmisc, in the function ecdf.ksCI. (Disclaimer: That was originally my code). An example: R code: library(sfsmisc) set.seed(7*11*13) ecdf.ksCI( rchisq(50,3), main="ECDF, sample from chisq with 3 df")
Plotting non-parametric (E)CDEF confidence envelopes for comparison You can use the Kolmogorov-Smirnov test, and invert it to get a confidence band. Let $X_1, X_2, \dotsc, X_n$ be iid observations from some continuous distribution function $F$. Then the KS test stati
54,933
Plotting the typical set of a Gaussian distribution
One of the confusing things about concentration of measure is that we're trying to demonstrate deviations away from our naive, low-dimensional intuition. Here that is demonstrated in how the radial volume changes relative to a uniform distribution over radii. As we move away from a given point, the shells of constant radial distance grow bigger and bigger, so we get rapidly increasing differential volume as we move out to larger radii, contrary to what we would expect from uniform volume growth. More specifically, the volume grows with the $N-1$ power of the radius as discussed in https://en.wikipedia.org/wiki/N-sphere#Spherical_coordinates. The plot actually comes from the analytical results for a $10$-dimensional independent identically distributed unit Gaussian distribution, as discussed in Section 4.2 of https://github.com/betanalpha/stan_intro.
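A small simulation sketch of the same phenomenon, assuming (as above) a 10-dimensional iid unit Gaussian: the Euclidean radius of the draws follows a chi distribution with N degrees of freedom, so the mass piles up in a shell around sqrt(N) rather than at the density mode at the origin.
# Radius of draws from a 10-d iid unit Gaussian concentrates near sqrt(N).
set.seed(1)
N <- 10
r <- sqrt(rowSums(matrix(rnorm(1e5 * N), ncol = N)^2))
hist(r, breaks = 60, freq = FALSE, xlab = "radius",
     main = "Radial distance, 10-d unit Gaussian")
# analytic density of the radius: chi with N df, proportional to r^(N-1) exp(-r^2/2)
curve(x^(N - 1) * exp(-x^2 / 2) / (2^(N / 2 - 1) * gamma(N / 2)),
      add = TRUE, lwd = 2)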
Plotting the typical set of a Gaussian distribution
One of the confusing things about concentration of measure is that we're trying to demonstrate deviations away from our naive, low-dimensional intuition. Here that is demonstrated in how the radial v
Plotting the typical set of a Gaussian distribution One of the confusing things about concentration of measure is that we're trying to demonstrate deviations away from our naive, low-dimensional intuition. Here that is demonstrated in how the radial volume changes relative to a uniform distribution over radii. As we move away from a given point the shells of constant radial distance grow bigger and bigger, hence we get exponentially more differential volume as we move out to larger radii contrary to what we would expect from uniform volume growth. More specifically, the volume grows with the $N-1$ power of the radius as discussed in https://en.wikipedia.org/wiki/N-sphere#Spherical_coordinates. The plot actually comes from the analytical results for a $10$-dimensional independent identically distributed unit Gaussian distribution, as discussed in Section 4.2 of https://github.com/betanalpha/stan_intro.
Plotting the typical set of a Gaussian distribution One of the confusing things about concentration of measure is that we're trying to demonstrate deviations away from our naive, low-dimensional intuition. Here that is demonstrated in how the radial v
54,934
Ratio of two centered independent centered chi^2
If I'm not mistaken, at least for even $k$, this is solvable, but it is very tedious. Following is the outline, in which, for simplicity, I'm omitting constant terms from any integral (i.e., those not involved in the integration). The ratio distribution of two independent $\chi^2$ RVs is $$ \sim \int_y |y| (zy)^{\frac{k}{2} - 1}e^{-\frac{zy}{2}} y^{\frac{k}{2} - 1}e^{-\frac{y}{2}} \text{d}y = \int_y |y| (zy^2)^{\frac{k}{2} - 1}e^{-\frac{y(z + 1)}{2}} \text{d}y \; (1) $$ (again, note that constants are omitted from the integral). Similarly, if you shift the RVs by $a$ (which, in your case is the mean), then the ratio distribution is $$ \sim \int_y |y| (zy - a)^{\frac{k}{2} - 1}e^{-\frac{zy}{2}} (y - a)^{\frac{k}{2} - 1}e^{-\frac{y}{2}} \text{d}y \\ = \int_y |y| (zy^2 - ay(z + 1) + a^2)^{\frac{k}{2} - 1}e^{-\frac{y(z + 1)}{2}} \text{d}y \; (2) $$ (once again, note that constants are omitted from the integral). Note that $$ \int_y |y| (zy^2 - ay(z + 1) + a^2)^{\frac{k}{2} - 1}e^{-\frac{y(z + 1)}{2}} \text{d}y \; (3) \\ = \int_{y > 0} y (zy^2 - ay(z + 1) + a^2)^{\frac{k}{2} - 1}e^{-\frac{y(z + 1)}{2}} \text{d}y - \int_{y < 0} y (zy^2 - ay(z + 1) + a^2)^{\frac{k}{2} - 1}e^{-\frac{y(z + 1)}{2}} \text{d}y . $$ If $k$ is even, then $q = \frac{k}{2} - 1$ is an integer, and, in (3), $$ (zy^2 - ay(z + 1) + a^2)^{\frac{k}{2} - 1} = (zy^2 - ay(z + 1) + a^2)^q $$ so that trinomial expansion can be used. The integral corresponding to each term in the expansion has a closed form solution. I wish you the best of luck!!!
Ratio of two centered independent centered chi^2
If I'm not mistaken, at least for even $k$, this is solvable, but it is very tedious. Following is the outline, in which, for simplicity, I'm omitting constant terms from any integral (i.e., those not
Ratio of two centered independent centered chi^2 If I'm not mistaken, at least for even $k$, this is solvable, but it is very tedious. Following is the outline, in which, for simplicity, I'm omitting constant terms from any integral (i.e., those not involved in the integration). The ratio distribution of two independent $\chi^2$ RVs is $$ \sim \int_y |y| (zy)^{\frac{k}{2} - 1}e^{-\frac{zy}{2}} y^{\frac{k}{2} - 1}e^{-\frac{y}{2}} \text{d}y = \int_y |y| (zy^2)^{\frac{k}{2} - 1}e^{-\frac{y(z + 1)}{2}} \text{d}y \; (1) $$ (again, note that constants are omitted from the integral). Similarly, if you shift the RVs by $a$ (which, in your case is the mean), then the ratio distribution is $$ \sim \int_y |y| (zy - a)^{\frac{k}{2} - 1}e^{-\frac{zy}{2}} (y - a)^{\frac{k}{2} - 1}e^{-\frac{y}{2}} \text{d}y \\ = \int_y |y| (zy^2 - ay(z + 1) + a^2)^{\frac{k}{2} - 1}e^{-\frac{y(z + 1)}{2}} \text{d}y \; (2) $$ (once again, note that constants are omitted from the integral). Note that $$ \int_y |y| (zy^2 - ay(z + 1) + a^2)^{\frac{k}{2} - 1}e^{-\frac{y(z + 1)}{2}} \text{d}y \; (3) \\ = \int_{y > 0} y (zy^2 - ay(z + 1) + a^2)^{\frac{k}{2} - 1}e^{-\frac{y(z + 1)}{2}} \text{d}y - \int_{y < 0} y (zy^2 - ay(z + 1) + a^2)^{\frac{k}{2} - 1}e^{-\frac{y(z + 1)}{2}} \text{d}y . $$ If $k$ is even, then $q = \frac{k}{2} - 1$ is an integer, and, in (3), $$ (zy^2 - ay(z + 1) + a^2)^{\frac{k}{2} - 1} = (zy^2 - ay(z + 1) + a^2)^q $$ so that trinomial expansion can be used. The integral corresponding to each term in the expansion has a closed form solution. I wish you the best of luck!!!
Ratio of two centered independent centered chi^2 If I'm not mistaken, at least for even $k$, this is solvable, but it is very tedious. Following is the outline, in which, for simplicity, I'm omitting constant terms from any integral (i.e., those not
54,935
Ratio of two centered independent centered chi^2
The behaviour of the distribution is quite 'wild' for small values of parameter $k$. Here is a Monte Carlo simulation of the pdf of the ratio when $k = 1$: ... and when $k = 2$: However, it rapidly stabilises as $k$ increases, and when $k$ is larger, is extremely well approximated by a Cauchy distribution (same as Student's $t$ with 1 df), i.e. $$f(x) = \frac{1}{\pi \left(x^2+1\right)}$$ ... which seems very pretty, considering how messy the symbolics are. Here is a plot of the pdf of the ratio when $k = 25$: the rough blue curve is the simulated Monte Carlo pdf of the ratio the dashed red curve underneath is the exact Cauchy pdf The fit appears amazing - too good to be arbitrary. Touchy-feely explanation To get a feel for why this is the case, first observe that the distribution of $X-k$ which has form: $$f(x) = \frac{2^{-\frac{k}{2}} e^{-\frac{1}{2} (k+x)} (k+x)^{\frac{k}{2}-1}}{\Gamma \left(\frac{k}{2}\right)}$$ ... tends to normality (in particular, it seems $N(0, 2k)$) as $k$ becomes large: see for instance: Then, note that the ratio of two Normals with zero mean is Cauchy, and we have the gist of it.
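A sketch of the Monte Carlo comparison described above, with k = 25; the overlay is the standard Cauchy distribution function (comparing CDFs rather than histograms avoids binning problems caused by the heavy tails).
# Ratio of two independent centered chi^2(25) variables vs the standard Cauchy.
set.seed(1)
k <- 25
z <- (rchisq(2e5, k) - k) / (rchisq(2e5, k) - k)      # two independent draws
plot(ecdf(z), xlim = c(-6, 6), main = "Empirical CDF of the ratio, k = 25")
curve(pcauchy(x), add = TRUE, col = "red", lwd = 2, lty = 2)   # Cauchy CDF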
Ratio of two centered independent centered chi^2
The behaviour of the distribution is quite 'wild' for small values of parameter $k$. Here is a Monte Carlo simulation of the pdf of the ratio when $k = 1$: ... and when $k = 2$: However, it rapidly
Ratio of two centered independent centered chi^2 The behaviour of the distribution is quite 'wild' for small values of parameter $k$. Here is a Monte Carlo simulation of the pdf of the ratio when $k = 1$: ... and when $k = 2$: However, it rapidly stabilises as $k$ increases, and when $k$ is larger, is extremely well approximated by a Cauchy distribution (same as Student's $t$ with 1 df), i.e. $$f(x) = \frac{1}{\pi \left(x^2+1\right)}$$ ... which seems very pretty, considering how messy the symbolics are. Here is a plot of the pdf of the ratio when $k = 25$: the rough blue curve is the simulated Monte Carlo pdf of the ratio the dashed red curve underneath is the exact Cauchy pdf The fit appears amazing - too good to be arbitrary. Touchy-feely explanation To get a feel for why this is the case, first observe that the distribution of $X-k$ which has form: $$f(x) = \frac{2^{-\frac{k}{2}} e^{-\frac{1}{2} (k+x)} (k+x)^{\frac{k}{2}-1}}{\Gamma \left(\frac{k}{2}\right)}$$ ... tends to normality (in particular, it seems $N(0, 2k)$) as $k$ becomes large: see for instance: Then, note that the ratio of two Normals with zero mean is Cauchy, and we have the gist of it.
Ratio of two centered independent centered chi^2 The behaviour of the distribution is quite 'wild' for small values of parameter $k$. Here is a Monte Carlo simulation of the pdf of the ratio when $k = 1$: ... and when $k = 2$: However, it rapidly
54,936
z-test for one population proportion hypothesis with a small sample size
To achieve that level of confidence, you would have to satisfy the most stringent critic. They would require you to establish Your sample truly is a simple random sample. The respondents answer honestly and correctly. That if the barest minority, just less than half, of the population actually would answer "yes", you would have less than a $100 - 90\% = 10\%$ chance of observing at least this many yeses in such a random sample. You address $(1)$ by explaining and documenting your procedures to identify the population and obtain a sample from it. You address $(2)$ by documenting how the questioning was carried out and including additional questions to assess reliability and internal consistency. You address $(3)$ by computing the chance of observing $14$ or more yeses in a random sample when at most $50\%$ of the population would answer yes if asked. Let's do that. Assuming the population is large (any larger than a few hundred would be fine), the distribution of the yes counts would be very close to Binomial with parameters $22$ and $p \lt 50\%$. In the worst case you have to deal with, take $p=50\%$. Here is a partial chart of the relevant chances. The top row is a threshold count; below it is the chance that the count in a sample would equal or exceed it. Threshold: 11 12 13 14 15 16 17 18 19 20 21 22 Chance: 0.58 0.42 0.26 0.14 0.07 0.03 0.01 0.00 0.00 0.00 0.00 0.00 Since the chance of 14 or more is $0.14=14\%$ and that's greater than $10\%$, you cannot have $90\%$ confidence that a majority of the population would answer yes. If the population is much smaller than several hundred, the confidence in these results noticeably increases. In the very best case, where the population is $28$ or smaller, your sample already contains half or more of the yes-responders and your confidence is $100\%$. The R calculation of the table was x <- 11:22 y <- round(pbinom(x-1, 22, 1/2, lower.tail=FALSE), 2) names(y) <- x print(y) Alternatively, you could use the Normal approximation to this Binomial distribution and compute instead y <- round(pnorm(x-1+1/2, 11, sqrt(22*1/2*(1-1/2)), lower.tail=FALSE), 2) To two decimal places the results are the same. With even less computation you could find the critical threshold approximately as ceiling(qnorm(0.90, 11, sqrt(22*1/2*(1-1/2))) + 1/2), which returns $15$.
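The same one-sided tail probability is also returned directly by R's exact binomial test; a one-line check, using the observed count of 14 yeses out of 22:
# Exact one-sided test of p <= 1/2 given 14 yeses in 22: P(X >= 14) under p = 1/2.
binom.test(14, 22, p = 1/2, alternative = "greater")   # one-sided p-value ~ 0.143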
z-test for one population proportion hypothesis with a small sample size
To achieve that level of confidence, you would have to satisfy the most stringent critic. They would require you to establish Your sample truly is a simple random sample. The respondents answer hone
z-test for one population proportion hypothesis with a small sample size To achieve that level of confidence, you would have to satisfy the most stringent critic. They would require you to establish Your sample truly is a simple random sample. The respondents answer honestly and correctly. That if the barest minority, just less than half, of the population actually would answer "yes", you would have less than a $100 - 90\% = 10\%$ chance of observing at least this many yeses in such a random sample. You address $(1)$ by explaining and documenting your procedures to identify the population and obtain a sample from it. You address $(2)$ by documenting how the questioning was carried out and including additional questions to assess reliability and internal consistency. You address $(3)$ by computing the chance of observing $14$ or more yeses in a random sample when at most $50\%$ of the population would answer yes if asked. Let's do that. Assuming the population is large (any larger than a few hundred would be fine), the distribution of the yes counts would be very close to Binomial with parameters $22$ and $p \lt 50\%$. In the worst case you have to deal with, take $p=50\%$. Here is a partial chart of the relevant chances. The top row is a threshold count; below it is the chance that the count in a sample would equal or exceed it. Threshold: 11 12 13 14 15 16 17 18 19 20 21 22 Chance: 0.58 0.42 0.26 0.14 0.07 0.03 0.01 0.00 0.00 0.00 0.00 0.00 Since the chance of 14 or more is $0.14=14\%$ and that's greater than $10\%$, you cannot have $90\%$ confidence that a majority of the population would answer yes. If the population is much smaller than several hundred, the confidence in these results noticeably increases. In the very best case, where the population is $28$ or smaller, your sample already contains half or more of the yes-responders and your confidence is $100\%$. The R calculation of the table was x <- 11:22 y <- round(pbinom(x-1, 22, 1/2, lower.tail=FALSE), 2) names(y) <- x print(y) Alternatively, you could use the Normal approximation to this Binomial distribution and compute instead y <- round(pnorm(x-1+1/2, 11, sqrt(22*1/2*(1-1/2)), lower.tail=FALSE), 2) To two decimal places the results are the same. With even less computation you could find the critical threshold approximately as ceiling(qnorm(0.90, 11, sqrt(22*1/2*(1-1/2))) + 1/2), which returns $15$.
z-test for one population proportion hypothesis with a small sample size To achieve that level of confidence, you would have to satisfy the most stringent critic. They would require you to establish Your sample truly is a simple random sample. The respondents answer hone
54,937
Relative importance of predictors in logistic regression
I assume all predictors have been standardized (thus, centered and scaled by the sample standard deviations). Let $\mathbf{x}$ be the vector of predictors and $y$ the response, conditionally Bernoulli-distributed wrt $\mathbf{x}$. If $\mu=\mathbb{E}[y\mid\mathbf{x}]=p(y=1\mid\mathbf{x})$, then clearly $$\frac{\partial \mu}{\partial x_i}=\beta_i \frac{\exp{(-\beta_0-\boldsymbol{\beta}^T\cdot \mathbf{x})}}{(1+\exp{(-\beta_0-\boldsymbol{\beta}^T\cdot \mathbf{x})})^2}$$ measures the effect of $x_i$ on $\mu$. This effect is a function of $\mathbf{x}$. However, the relative importance of two predictors is $$\frac{\frac{\partial \mu}{\partial x_i}}{\frac{\partial \mu}{\partial x_j}}=\frac{\beta_i}{\beta_j}$$ which is independent of $\mathbf{x}$. Thus, provided we have standardized all predictors, we can look at the estimates of the model coefficients as indicators of the relative importance of the predictors with respect to their effect on the output. As an example application, I will adapt the case in section 4.3.4 of An Introduction to Statistical Learning, by James, Witten, Hastie & Tibshirani. Suppose you have a database Default of default rates for credit card owners, with predictors student (categorical), income and credit card balance (continuous). Standardize the predictors and fit a logistic regression model. Now you can use the relative magnitude of the $\hat{\beta}_j$ to decide which predictor has a larger effect on the probability of default. This helps the credit card company decide to whom they should offer credit, which categories are riskier, which customer segment to target with an ad campaign and so on. Finally, this paper lists six different definitions of relative predictor importance for logistic regression. The first one is very similar to the one I showed, with the only difference that instead of standardizing the predictors before, they standardize the $\hat{\beta}_j$ after estimation by multiplying by the ratio $\frac{s_j}{s_y}$ where $s_y$ is the response sample standard deviation, and $s_j$ is the sample standard deviation of predictor $x_j$. It's not exactly the same as my suggestion, because the estimators for the logistic regression coefficients are nonlinear functions of the data, but the idea is similar. The second one (using the $p-$values from the Wald $\chi^2$ test) is flawed, as explained by @MatthewDrury in the comments to the OP, and shouldn't be used. The third one (logistic pseudo partial correlation) can be a good choice, as long as, in the numerator of the pseudo partial correlation, we use the likelihood ratio of the model with just predictor $x_i$ to the full model instead of the Wald $\chi^2$ statistic. I cannot comment on the other approaches since I don't know enough about them.
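A rough sketch of the first approach in R, assuming the Default data from the ISLR package that accompanies the cited textbook. One caveat: student is categorical, so its coefficient is not on the same standardized scale as the continuous predictors and should be compared with care.
# Standardize the continuous predictors, fit the logistic model, and rank
# predictors by the absolute value of their estimated coefficients.
library(ISLR)                         # provides the Default data (assumed installed)
d <- Default
d$balance <- as.numeric(scale(d$balance))
d$income  <- as.numeric(scale(d$income))
fit <- glm(default ~ student + balance + income, data = d, family = binomial)
sort(abs(coef(fit)[-1]), decreasing = TRUE)   # crude relative-importance ranking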
Relative importance of predictors in logistic regression
I assume all predictors have been standardized (thus, centered and scaled by the sample standard deviations). Let $\mathbf{x}$ be the vector of predictors and $y$ the response, conditionally Bernoull
Relative importance of predictors in logistic regression I assume all predictors have been standardized (thus, centered and scaled by the sample standard deviations). Let $\mathbf{x}$ be the vector of predictors and $y$ the response, conditionally Bernoulli-distributed wrt $\mathbf{x}$. Then if $\mu=\mathbb{E[y|\mathbf{x}]}=p(y=1|\mathbf{x})$, then clearly $$\frac{\partial \mu}{\partial x_i}=\beta_i \frac{\exp{(-\beta_0-\boldsymbol{\beta}^T\cdot \mathbf{x})}}{(1+\exp{(-\beta_0-\boldsymbol{\beta}^T\cdot \mathbf{x})})^2}$$ measures the effect of $x_i$ on $\mu$. This effect is a function of $\mathbf{x}$. However, the relative importance of two predictors is $$\frac{\frac{\partial \mu}{\partial x_i}}{\frac{\partial \mu}{\partial x_j}}=\frac{\beta_i}{\beta_j}$$ which is independent of $\mathbf{x}$. Thus, provided we have standardized all predictors, we can look at the estimates of the model coefficients as indicators of the relative importance of the predictors for what it concerns the variation of the output. As an example application, I will adapt the case in section 4.3.4 of An Introduction to Statistical Learning, by James, Witten, Hastie & Tibshirani. Suppose you have a database Default of default rates for credit card owners, with predictors student (categorical), income and credit card balance (continuous). Standardize the predictors and fit a logistic regression model. Now you can use the relative magnitude of the $\hat{\beta}_j$to decide which predictor has a larger effect on probability of default. This helps the credit card company decide to whom they should offer credit, which categories are the more risky, which customer segment to target with an ad campaign and so on. Finally, this paper lists six different definitions of relative predictor importance for logistic regression. The first one is very similar to the one I showed, with the only difference that instead of standardizing the predictors before, they standardize the $\hat{\beta}_j$ after estimation by multiplying by the ratio $\frac{s_j}{s_y}$ where $s_y$ is the response sample standard deviation, and $s_j$ is the sample standard deviation of predictor $x_j$. It's not exactly the same as my suggestion, because the estimators for the logistic regression coefficients are nonlinear functions of the data, but the idea is similar. The second one (using the $p-$values from the Wald $\chi^2$ test) is flawed, as explained by @MatthewDrury in the comments to the OP, and shouldn't be used. Third one (logistic pseudo partial correlation) can be a good choice as long , instead of the Wald $\chi^2$ statistic, in the numerator of the pseudo partial correlation we use the ratio of the likelihood of the model with just predictor $x_i$, to that of the full model. I cannot comment on the other approaches since I don't know enough about them.
Relative importance of predictors in logistic regression I assume all predictors have been standardized (thus, centered and scaled by the sample standard deviations). Let $\mathbf{x}$ be the vector of predictors and $y$ the response, conditionally Bernoull
54,938
How many data points to acurately approximate the average of recent values in a time series?
I agree with Carl that fitting a model to the data would be an ideal solution, if it makes sense in your context. But I also want to suggest that the following way of thinking about your problem might be helpful. Suppose that I have a time-varying process that looks like a random walk (so that the location is autocorrelated through time). Suppose I measure this position with some measurement error (which is an IID random variable) at equally spaced time points. I understand your question to mean: How many time points should I average in order to get the best possible information about the process' current location? If the random walk careens around wildly and the measurement error is small, then the answer might well be use only the most recent point. Using previous points in the average would introduce lots of extra variation due to the random walk, and a single point is already a reasonably good approximation of the process' true location. If the measurement error is large and the random walk moves slowly, then the answer will be use a lot of points. The random walk is relatively stationary, and you need to average over the noise in the measurement. Interestingly, if your measurement noise is extremely fat-tailed (i.e. Cauchy distributed), then the answer will always be use just the most recent point (because the average of multiple points does not provide a better approximation to the central tendency of that distribution than any single point does!). It should be possible to work out the ideal number of points to use in special cases where the distribution followed by the random walk and the distribution of the measurement error are both known. However, this is precisely the case where a model, as suggested by Carl, would be useful. Edit Carl's comment also made me realize that it's very likely that a weighted average (that weights more recent points more heavily) could outperform an average that introduces some hard-threshold cutoff for inclusion.
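A simulation sketch of this trade-off, with a Gaussian random walk observed under iid Gaussian noise; the step size and noise level below are arbitrary illustrative choices, and the "best" window changes as you vary them, which is exactly the point.
# Random walk + measurement noise: MSE of trailing averages of various window sizes.
set.seed(1)
n <- 5000
sigma_walk <- 0.1    # sd of the latent random-walk steps (illustrative)
sigma_obs  <- 1.0    # sd of the measurement noise (illustrative)
truth <- cumsum(rnorm(n, sd = sigma_walk))
obs   <- truth + rnorm(n, sd = sigma_obs)
mse <- sapply(1:50, function(k) {
  est <- stats::filter(obs, rep(1 / k, k), sides = 1)   # average of the last k points
  mean((est - truth)^2, na.rm = TRUE)
})
plot(mse, type = "b", xlab = "window size k", ylab = "MSE vs true location")
which.min(mse)   # best window for these particular parameter choices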
How many data points to acurately approximate the average of recent values in a time series?
I agree with Carl that fitting a model to the data would be an ideal solution, if it makes sense in your context. But I also want to suggest that the following way of thinking about your problem migh
How many data points to acurately approximate the average of recent values in a time series? I agree with Carl that fitting a model to the data would be an ideal solution, if it makes sense in your context. But I also want to suggest that the following way of thinking about your problem might be helpful. Suppose that I have a time-varying process that looks like a random walk (so that the location is autocorrelated through time). Suppose I measure this position with some measurement error (which is an IID random variable) at equally spaced time points. I understand your question to mean: How many time points should I average in order to get the best possible information about the process' current location? If the random walk careens around wildly and the measurement error is small, then the answer might well be use only the most recent point. Using previous points in the average would introduce lots of extra variation due to the random walk, and a single point is already a reasonably good approximation of the process' true location. If the measurement error is large and the random walk moves slowly, then the answer will be use a lot of points. The random walk is relatively stationary, and you need to average over the noise in the measurement. Interestingly, if your measurement noise is extremely fat-tailed (i.e. Cauchy distributed), then the answer will always be use just the most recent point (because the average of multiple points does not provide a better approximation to the central tendency of that distribution than any single point does!). It should be possible to work out the ideal number of points to use in special cases where the distribution followed by the random walk and the distribution of the measurement error are both known. However, this is precisely the case where a model, as suggested by Carl, would be useful. Edit Carl's comment also made me realize that it's very likely that a weighted average (that weights more recent points more heavily) could outperform an average that introduces some hard-threshold cutoff for inclusion.
How many data points to acurately approximate the average of recent values in a time series? I agree with Carl that fitting a model to the data would be an ideal solution, if it makes sense in your context. But I also want to suggest that the following way of thinking about your problem migh
54,939
How many data points to acurately approximate the average of recent values in a time series?
Q1: How many points? All of them. The best way is to fit a model to the data. If the model is a good one, i.e., if an accurate $T(t)$ can be found, where $T$ is the temperature as a function of time $t$, then the problem is solved: just choose a $t$, and $T$ is predicted. In a temperature time series this may require one or more delay terms in the autoregressive integrated moving average (ARIMA) sense. However, in general, use of a model is superior to taking averages. Q2: What about outliers? True outliers are rare. If needed, outlier testing can be performed. More frequently, when the data is not normally distributed on the $T$-axis, a transformation of variables may be needed to make conditions more normal, for example using $\ln(T)$, $\dfrac{1}{T}$ or $\sqrt{T}$ instead of plain old $T$ as the regression target.
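A minimal sketch of the model-based route using base R's arima() on the built-in nottem monthly temperature series; the seasonal order below is purely illustrative and should in practice be chosen from diagnostics or an information criterion.
# Fit a (deliberately simple) seasonal ARIMA to monthly temperatures and predict T(t).
fit <- arima(nottem, order = c(1, 0, 0),
             seasonal = list(order = c(1, 1, 0), period = 12))
predict(fit, n.ahead = 12)$pred   # predicted temperatures for the next 12 months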
How many data points to acurately approximate the average of recent values in a time series?
Q1: How many points? All of them. The best way, is to fit a model to the data. If the model is a good one, i.e., if an accurate $T(t)$ can be found, where $T$ is the temperature as a function of $t$ t
How many data points to acurately approximate the average of recent values in a time series? Q1: How many points? All of them. The best way, is to fit a model to the data. If the model is a good one, i.e., if an accurate $T(t)$ can be found, where $T$ is the temperature as a function of $t$ time, then the problem is solved, just choose a $t$, and $T$ is predicted. In a temperature time series this may require one or more delay terms in the auto regressive integrated moving average, ARIMA, sense. However, in general, use of a model is superior to taking averages. Q2: What about outliers? True outliers are rare. If needed outlier testing can be performed. More frequently, when the data is not normally distributed on the $T$-axis, a transformation of variables may be needed to make conditions more normal, among other examples using $\ln(T)$, or using $\dfrac{1}{T}$ or using $\sqrt{T}$ instead of plain old $T$ for the regression equation target.
How many data points to acurately approximate the average of recent values in a time series? Q1: How many points? All of them. The best way, is to fit a model to the data. If the model is a good one, i.e., if an accurate $T(t)$ can be found, where $T$ is the temperature as a function of $t$ t
54,940
How many data points to acurately approximate the average of recent values in a time series?
This is more a suggestion than a complete answer, but maybe you could use the median instead of the average so that outliers affect less the result.
How many data points to acurately approximate the average of recent values in a time series?
This is more a suggestion than a complete answer, but maybe you could use the median instead of the average so that outliers affect less the result.
How many data points to acurately approximate the average of recent values in a time series? This is more a suggestion than a complete answer, but maybe you could use the median instead of the average so that outliers affect less the result.
How many data points to acurately approximate the average of recent values in a time series? This is more a suggestion than a complete answer, but maybe you could use the median instead of the average so that outliers affect less the result.
54,941
What's the intuition of variance, quadratic variation and total variation of Brownian Motion in practice?
Aside from the heavily technical definitions of Brownian motion, the simplest is that if you run Brownian motion from a starting point $B_0=x$, the resulting distribution of $B_t$ at time $t$ is Gaussian, with mean $x$ and variance $t$. This is useful because it gives you a sense of how spread out Brownian motion will be after time $t$, relative to a starting point $x$. Concerning quadratic variation, this is primarily defined as a tool for evaluating integrals involving Brownian motion. Typical integrals look like $\int_0^t f(t,B_t)dB_t$ or, even simpler, $\int_0^t f(B_t)dB_t$. Heuristically, you can evaluate this integral numerically by taking a small partition $[t_0=0,t_1,...,t_n,t_{n+1}=t]$ of $[0,t]$: $$\int_0^tf(B_t)dB_t\approx \sum_{i=0}^nf(B_{t_i})[B_{t_{i+1}}-B_{t_i}]$$ Visually this looks like: Here the black curve represents $f(B_t)$, and the blue curve is Brownian motion, which oscillates and correspondingly gives a value for $f(B_t)$. We also track the relative change of Brownian motion by $[B_{t_{i+1}}-B_{t_i}]$. This can generalize to $f(t,B_t)$ by introducing another dimension for time. Quadratic variation arises when we consider the Ito isometry: $$\mathbb{E} \left[ \left( \int_0^T X_t \, \mathrm{d} W_t \right)^2 \right] = \int_0^T \mathbb{E} \left[X_t^2\right] \, \mathrm{d} t$$ where one squares the original integral and correspondingly gets terms involving $[B_{t_{i+1}}-B_{t_i}]^2$ in the numerical approximation and in the limit. Thus quadratic variation captures the relative drift of your stochastic process over an interval of time. The technical details are beyond the scope of this answer, but the basic need for quadratic variation arises because Brownian motion's total variation, $\sum_{i=0}^n|B_{t_{i+1}}-B_{t_i}|$, will almost surely diverge in the limit, whereas $\sum_{i=0}^n[B_{t_{i+1}}-B_{t_i}]^2$ will almost surely converge. The fact that quadratic variation converges allows one to make sense of Ito integrals (through an analog of the Cauchy–Schwarz inequality).
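A quick numerical illustration of the last paragraph, on one simulated Brownian path over [0, 1]: as the partition gets finer, the sum of |increments| blows up while the sum of squared increments settles near t = 1. (This is only a sketch on a discretized path, which is all a simulation can give.)
# Total vs quadratic variation of a simulated Brownian path on [0, 1].
set.seed(1)
n_max <- 2^18
B <- c(0, cumsum(rnorm(n_max, sd = sqrt(1 / n_max))))      # path on the finest grid
for (m in c(2^8, 2^12, 2^16, 2^18)) {
  incr <- diff(B[seq(1, n_max + 1, by = n_max / m)])        # increments on an m-point grid
  cat(sprintf("m = %6.0f   total variation = %8.2f   quadratic variation = %.4f\n",
              m, sum(abs(incr)), sum(incr^2)))
}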
What's the intuition of variance, quadratic variation and total variation of Brownian Motion in prac
Aside from the heavily technical definitions of Brownian motion, the simplest is that if you run Brownian motion from a starting point $B_0=x$, the resulting distribution $B_t$ at time $t$ is Gaussian
What's the intuition of variance, quadratic variation and total variation of Brownian Motion in practice? Aside from the heavily technical definitions of Brownian motion, the simplest is that if you run Brownian motion from a starting point $B_0=x$, the resulting distribution $B_t$ at time $t$ is Gaussian, with mean $x$ and variance $t$. This is useful because it gives you a sense of how spread out Brownian motion will be after time $t$, relative to a starting point $x$. Concerning quadratic variation, this is primarily defined as a tool for evaluating integrals involving Brownian motion. Typical integrals looks like $\int_0^t f(t,B_t)dB_t$ or even simpler, $\int_0^t f(B_t)dB_t$ . Heuristically, you can evaluate this integral numerically by taking a small partition $[t_0=0,t_1,...,t_n,t_{n+1}=t]$ of $[0,t]$: $$\int_0^tf(B_t)dB_t\approx \sum_{i=0}^nf(B_{t_n})[B_{t_{n+1}}-B_{t_n}]$$ Visually this looks like: Here the black curve represents $f(B_t)$, and the blue curve is Brownian motion, which oscillates and corresponding gives a value for $f(B_t)$. We also track the relative change of Brownian motion by $[B_{t_{n+1}}-B_{t_n}]$. This can generalize to $f(t,B_t)$ by introducing another dimension for time. Quadratic variation arises when we consider the Ito isometry: $$\mathbb{E} \left[ \left( \int_0^T X_t \, \mathrm{d} W_t \right)^2 \right] = \int_0^T \mathbb{E} \left[X_t^2\right] \, \mathrm{d} t$$ , where one squares the original integral and correspondingly gets terms involving $[B_{t_{n+1}}-B_{t_n}]^2$ in the numerical approximation and in the limit. Thus quadratic variation captures the relative drift of your stochastic process over an interval of time. The technical details are beyond the scope of this answer, but the basic need for quadratic variation arises because Brownian motion's total variation, $\sum_{i=0}^n|B_{t_{n+1}}-B_{t_n}|$ will almost surely diverge in the limit, whereas $\sum_{i=0}^n[B_{t_{n+1}}-B_{t_n}]^2$ will almost surely converge. The fact that quadratic variation converges allows one to make sense of Ito integrals (through an analog the Cauchy Schwartz inequality).
What's the intuition of variance, quadratic variation and total variation of Brownian Motion in prac Aside from the heavily technical definitions of Brownian motion, the simplest is that if you run Brownian motion from a starting point $B_0=x$, the resulting distribution $B_t$ at time $t$ is Gaussian
54,942
meaning of metric vs. statistic vs. parameter
As already mentioned by Kodiologist, metric is a rather informal name for a way of measuring something; e.g., this is how it is used by Google Analytics: Metrics are quantitative measurements. The metric Sessions is the total number of sessions. The metric Pages/Session is the average number of pages viewed per session. In mathematics, a metric is a function that measures the distance between two points. The first, informal, definition of metric is consistent with the definition of a statistic, i.e. a function of a sample. On the other hand, if you are estimating some quantity of the underlying probability distribution (the parameter, see also: Is any quantitative property of the population a "parameter"?), then the function is called an estimator.
meaning of metric vs. statistic vs. parameter
As already mentioned by Kodiologist, metric is rather informal name for e way of measuring something, e.g. this is how it is used by Google Analytics Metrics are quantitative measurements. The metric
meaning of metric vs. statistic vs. parameter As already mentioned by Kodiologist, metric is rather informal name for e way of measuring something, e.g. this is how it is used by Google Analytics Metrics are quantitative measurements. The metric Sessions is the total number of sessions. The metric Pages/Session is the average number of pages viewed per session. In mathematics, metric is a function that measures distance between two points. The first, informal, definition of metric is consistent with the definition statistic, i.e. function of a sample. On another hand, if you are estimating some quantity of underlying probability distribution (the parameter, see also: Is any quantitative property of the population a "parameter"?), then the function is called estimator.
meaning of metric vs. statistic vs. parameter As already mentioned by Kodiologist, metric is rather informal name for e way of measuring something, e.g. this is how it is used by Google Analytics Metrics are quantitative measurements. The metric
54,943
meaning of metric vs. statistic vs. parameter
This sense of the word "metric" is informal. It means a way to measure or quantify something. There's an unrelated formal sense of the word "metric" that arises in analysis. There, a metric is a real-valued binary function that is nonnegative, returns 0 iff its arguments are equal, is symmetric, and satisfies the triangle inequality.
meaning of metric vs. statistic vs. parameter
This sense of the word "metric" is informal. It means a way to measure or quantify something. There's an unrelated formal sense of the word "metric" that arises in analysis. There, a metric is a real-
meaning of metric vs. statistic vs. parameter This sense of the word "metric" is informal. It means a way to measure or quantify something. There's an unrelated formal sense of the word "metric" that arises in analysis. There, a metric is a real-valued binary function that is nonnegative, returns 0 iff its arguments are equal, is symmetric, and satisfies the triangle inequality.
meaning of metric vs. statistic vs. parameter This sense of the word "metric" is informal. It means a way to measure or quantify something. There's an unrelated formal sense of the word "metric" that arises in analysis. There, a metric is a real-
54,944
meaning of metric vs. statistic vs. parameter
What is the appropriate way to use the term "metric," and how is its meaning different from "statistic" and different from "parameter"? ... was the question first asked. It has not been answered. I believe the term parameter is the original, traditional expression used to describe a range of defining limits, and also used to describe a state (escalating or declining) of progress or performance. The term metric is an Americanisation originally unfamiliar to other English-speaking people. The use of American neologisms and idiom has grown throughout the world with the expansion of the American internet and media. So too, the term metrics has come to replace parameters. Each can be used interchangeably, however UK residents would recognise parameters more readily.
meaning of metric vs. statistic vs. parameter
What is the appropriate way to use the term "metric," and how is its meaning different from "statistic" and different from "parameter"? ... was the question first asked. It has not been answered. I b
meaning of metric vs. statistic vs. parameter What is the appropriate way to use the term "metric," and how is its meaning different from "statistic" and different from "parameter"? ... was the question first asked. It has not been answered. I believe the term parameter is the original, traditional expression used to describe a range of defining limits, and also used to describe a state (escalating or declining) of progress or performance. The term metric is an Americanisation originally unfamiliar to other English-speaking people. The use of American neologisms and idiom has grown throughout the world with the expansion of the American internet and media. So too, the term metrics has come to replace parameters. Each can be used interchangeably, however UK residents would recognise parameters more readily.
meaning of metric vs. statistic vs. parameter What is the appropriate way to use the term "metric," and how is its meaning different from "statistic" and different from "parameter"? ... was the question first asked. It has not been answered. I b
54,945
meaning of metric vs. statistic vs. parameter
A metric is a measurement and therefore has units: a parameter may not. For example, a mole in chemistry is the amount of stuff that has 6*10^23 molecules or atoms or whatever, and is a dimensionless number - a parameter, not a metric - that has no units. A metric space is any set of data where the distance between two points is the same in both directions and concepts of straight and point make sense. (There is a very clear definition that is worth looking up if you're interested.) Other definitions are available.
meaning of metric vs. statistic vs. parameter
A metric is a measurement and therefore has units: a parameter may not. For example a mole in chemistry is the amount if stuff that has 6*10^23 molecules is atoms or whatever, and is a dimensionless n
meaning of metric vs. statistic vs. parameter A metric is a measurement and therefore has units: a parameter may not. For example a mole in chemistry is the amount if stuff that has 6*10^23 molecules is atoms or whatever, and is a dimensionless number - a parameter, not a metric - that has no units. A metric space is any set of data where distance between two points is the same in both directions and concepts of straight and point make sense. (There is a very clear definition that is worth looking up if you're interested.) Other definitions are available.
meaning of metric vs. statistic vs. parameter A metric is a measurement and therefore has units: a parameter may not. For example a mole in chemistry is the amount if stuff that has 6*10^23 molecules is atoms or whatever, and is a dimensionless n
54,946
Hypothesis Test: Bound for number of observations from Y that exceed max(X) if X=Y in distribution
Let us reason as follows. If the null hypothesis is true, then in a combined sample, any of the $n_X+n_Y$ observations has the same chance to be labelled with $Y$ as any other observation. Counting how many $Y$s stick out one end is like having a deck of $n_X$ red cards and $n_Y$ white cards, dealing cards off the top of a shuffled deck until we hit the first red card, and counting how many white cards there were before then. So that would suggest that the pmf under the null is that of a negative hypergeometric distribution for the number of successes until the first failure, where there are $n_Y$ successes and $n_X$ failures. I think that will boil down to: $$P(S=s) = \frac{\binom{n_X+n_Y-1-s}{n_Y-s}}{\binom{n_X+n_Y}{n_Y}}$$ A quick plausibility check: Consider $n_X=2, n_Y=3$. We can compute from the above formula: s 0 1 2 3 P(S=s) 0.4 0.3 0.2 0.1 Now let's try a simulation to check it: nsim=1000000L table(replicate(nsim,{a=runif(3);b=runif(2);sum(a>max(b))}))/nsim res 0 1 2 3 0.400026 0.300800 0.199712 0.099462 That looks like what we should expect. For a given $n_X$ and $n_Y$ you can use this to find the smallest value $s_\text{crit}$ that has $P(S\geq s_\text{crit})\leq \alpha$ and then reject for any observed $s$ that is at least that large. [Note that to do the calculations with large arguments you want a function like R's lchoose (which computes the log of $\binom{n}{x}$), or failing that, at least something like its lgamma (the log of a gamma function).] Alternatively, you can compute a p-value for some observed $s$ as $P(S\geq s)$. It may often be more convenient to compute $P(S< s)$ and take its complement. When $n_Y$ and $n_X$ are both very large you may be able to use a geometric approximation (with $p=\frac{n_X}{n_X+n_Y+1}$). That may at least be useful in figuring out about where to sum up to, to find a more accurate critical value from that approximate one. From the look of it for your example $n$'s and $\alpha=0.01$ that approximation would work well, taking you just a few values past the required quantile; it's easy to take the cumulative sum of the pmf up to there and so have the cdf to good accuracy. Note that if you choose not the number of $Y$'s past the largest $X$-value, but above say the $k$-th largest $X$ (e.g. the number of $Y$'s past the tenth-highest $X$), that should still be negative hypergeometric. I would advise you to consider the power properties of this test for distributions that look something like the data you have. Tests that look similar to this one may have great power in some situations but relatively poor power in others. In particular, if the upper tail is heavier than exponential, the power is likely to be quite poor, but if the distribution has a very light upper tail, the power may be quite good. Simulation to check you can have a reasonable chance to reject the null when you think you should be able to would be advisable.
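A sketch of the pmf and the upper-tail p-value computed on the log scale with lchoose, as suggested above; the small check at the end reproduces the nX = 2, nY = 3 table, and the same functions work for sample sizes in the thousands.
# Negative-hypergeometric pmf and upper-tail p-value, computed via lchoose.
pmf <- function(s, nX, nY)
  exp(lchoose(nX + nY - 1 - s, nY - s) - lchoose(nX + nY, nY))
pval <- function(s_obs, nX, nY) sum(pmf(s_obs:nY, nX, nY))   # P(S >= s_obs)
pmf(0:3, 2, 3)    # 0.4 0.3 0.2 0.1, matching the table above
pval(2, 2, 3)     # 0.3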
Hypothesis Test: Bound for number of observations from Y that exceed max(X) if X=Y in distribution
Let us reason as follows. If the null hypothesis is true, then in a combined sample, any of the $n_X+n_Y$ observations has the same chance to be labelled with $Y$ as any other observation. Counting h
Hypothesis Test: Bound for number of observations from Y that exceed max(X) if X=Y in distribution Let us reason as follows. If the null hypothesis is true, then in a combined sample, any of the $n_X+n_Y$ observations has the same chance to be labelled with $Y$ as any other observation. Counting how many $Y$s stick out one end is like we have a deck of $n_X$ red cards and $n_Y$ white cards, and we deal cards off the top of a shuffled deck until we hit the first red card, and we count how many white cards there were before then. So that would suggest that the pmf under the null is that of a negative hypergeometric distribution for the number of successes until the first failure, where there are $n_Y$ successes and $n_X$ failures. I think that will boil down to: $$P(S=s) = \frac{{n_X+n_Y-1-s} \choose {n_Y-s}}{{n_X+n_Y}\choose{n_Y}}$$ A quick plausibility check: Consider $n_X=2, n_Y=3$. We can compute from the above formula: s 0 1 2 3 P(S=s) 0.4 0.3 0.2 0.1 Now let's try a simulation to check it: nsim=1000000L table(replicate(nsim,{a=runif(3);b=runif(2);sum(a>max(b))}))/nsim res 0 1 2 3 0.400026 0.300800 0.199712 0.099462 That looks like we should expect. For a given $n_X$ and $n_Y$ you can use this to find the smallest value $s_\text{crit}$ that has $P(S\geq s_\text{crit})\leq \alpha$ and then reject for any observed $s$ that is at least that large. [Note that to do the calculations with large arguments you want a function like R's lchoose (which computes the log of ${n} \choose {x}$, or failing that, at least something like its lgamma (the log of a gamma function).] Alternatively, you can compute a p-value for some observed $s$ as $P(S\geq s)$. It may often be more convenient to compute $P(S< s)$ and take its complement. When $n_Y$ and $n_X$ are both very large you may be able to use a geometric approximation (with $p=\frac{n_X}{n_X+n_Y+1}$). That may at least be useful at least in figuring out about where to sum up to, to find a more accurate critical value from that approximate one. From the look of it for your example $n$'s and $\alpha=0.01$ that approximation would work well, taking you just a few values past the required quantile; it's easy to take the cumulative sum of the pmf up to there and so have the cdf to good accuracy. Note that if you choose not the number of $Y$'s past the largest $X$-value, but above say the $k$-th largest $X$ (e.g. the number of $Y$'s past the tenth-highest $X$), that should still be negative hypergeometric. I would advise you to consider the power properties of this test for distributions that look something like the data you have. Tests that look similar to this one may have great power in some situations but relatively poor power in others. In particular, if the upper tail is heavier than exponential, the power is likely to be quite poor, but if the distribution has a very light upper tail, the power may be quite good. Simulation to check you can have a reasonable chance to reject the null when you think you should be able to would be advisable.
Hypothesis Test: Bound for number of observations from Y that exceed max(X) if X=Y in distribution Let us reason as follows. If the null hypothesis is true, then in a combined sample, any of the $n_X+n_Y$ observations has the same chance to be labelled with $Y$ as any other observation. Counting h
54,947
How are whiskers in a Boxplot of different lengths? [duplicate]
Not necessarily. The whiskers actually end at the highest point within $Q3+1.5R$ and at the lowest point above $Q1-1.5R$. So for instance if $Q3+1.5R=100$ and the highest value in your sample is $90$ then the whisker will end at $90$. You must be observing something like this.
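A two-line illustration with base R's boxplot.stats (the data are made up): here the upper fence is Q3 + 1.5*IQR = 15.5, yet the upper whisker stops at 9, the largest observation inside the fence.
# Whiskers end at the most extreme observations inside the 1.5*IQR fences.
x <- c(1:9, 30)                 # upper fence is 15.5, but the largest non-outlier is 9
boxplot.stats(x)$stats          # 1, 3, 5.5, 8, 9 -> upper whisker drawn at 9
boxplot.stats(x)$out            # 30 is flagged as the outlier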
How are whiskers in a Boxplot of different lengths? [duplicate]
Not necessarily. The whiskers actually end at the highest point within $Q3+1.5R$ and at the lowest point above $Q1-1.5R$. So for instance if $Q3+1.5R=100$ and the highest value in your sample is $90$
How are whiskers in a Boxplot of different lengths? [duplicate] Not necessarily. The whiskers actually end at the highest point within $Q3+1.5R$ and at the lowest point above $Q1-1.5R$. So for instance if $Q3+1.5R=100$ and the highest value in your sample is $90$ then the whisker will end at $90$. You must be observing something like this.
How are whiskers in a Boxplot of different lengths? [duplicate] Not necessarily. The whiskers actually end at the highest point within $Q3+1.5R$ and at the lowest point above $Q1-1.5R$. So for instance if $Q3+1.5R=100$ and the highest value in your sample is $90$
54,948
Calculate Kappa statistic in R [closed]
Your reference and class1 are correctly defined, but you are using the wrong function. The function kappa in base R does not calculate Cohen's Kappa; it is documented as "Compute or Estimate the Condition Number of a Matrix". See ?kappa. Instead you can try caret::confusionMatrix(reference,class1) Confusion Matrix and Statistics Reference Prediction hi low med hi 2 0 0 low 1 1 1 med 0 0 2 Overall Statistics Accuracy : 0.7143 95% CI : (0.2904, 0.9633) No Information Rate : 0.4286 P-Value [Acc > NIR] : 0.1266 Kappa : 0.5882
Calculate Kappa statistic in R [closed]
Your reference and class1 are correctly defined, but using a wrong function. The function kappa in R base is not calculating Cohen's Kappa but "Compute or Estimate the Condition Number of a Matrix". S
Calculate Kappa statistic in R [closed] Your reference and class1 are correctly defined, but using a wrong function. The function kappa in R base is not calculating Cohen's Kappa but "Compute or Estimate the Condition Number of a Matrix". See ?kappa. In stead you can try caret::confusionMatrix(reference,class1) Confusion Matrix and Statistics Reference Prediction hi low med hi 2 0 0 low 1 1 1 med 0 0 2 Overall Statistics Accuracy : 0.7143 95% CI : (0.2904, 0.9633) No Information Rate : 0.4286 P-Value [Acc > NIR] : 0.1266 Kappa : 0.5882
Calculate Kappa statistic in R [closed] Your reference and class1 are correctly defined, but using a wrong function. The function kappa in R base is not calculating Cohen's Kappa but "Compute or Estimate the Condition Number of a Matrix". S
54,949
Proper name for "modified Bland–Altman plot"
On acceptance: Acceptance of names depends on who you want them to be accepted by. Bland-Altman plots are simply Tukey mean-difference plots (and Tukey was there much earlier), so if you want statisticians to accept the name you possibly wouldn't name it after Bland and Altman. On the other hand, in some application areas (medicine or chemistry, perhaps), you'd probably get quizzical looks if you called mean-difference plots anything but Bland-Altman. [However, chances are someone was there before Tukey as well, many of these ideas are quite old.] On a suitable name: If those things you're calling "truth" are actually "truth" (not just observations with error, say), I'd probably just call what you have a residual plot (though it looks like those differences are negative residuals); depending on how your truth was obtained you might hyphenate in a descriptive noun after "residual" (e.g. if it was based on some gold-standard calibration you might call it a residual-standard plot). If your truth is really truth (and not just observations or even some higher-quality estimate) then you could even argue that error should be used in place of residual. On the suitability of the plot as a diagnostic: How were those "truth" values obtained? Are those just the actual data? If so "truth" is arguably a misnomer, and in that case you'd expect there to be some negative correlation in that plot; it wouldn't necessarily suggest a problem at all. We have many threads which explain (or even prove) that plots of $y-\hat{y}$ vs $y$ will have a positive correlation, if your "difference" axis is $\hat{y}-y$ then you'd expect a negative linear trend when the regression model was appropriate. What is the aim of the plot? How are you interpreting it? Here's a few of the existing posts relating to the issue of plotting residuals vs data: Trend in residuals vs dependent - but not in residuals vs fitted Does it make sense to study plots of residuals with respect to the dependent variable? What is the expected correlation between residual and the dependent variable?
Proper name for "modified Bland–Altman plot"
On acceptance: Acceptance of names depends on who you want them to be accepted by. Bland-Altman plots are simply Tukey mean-difference plots (and Tukey was there much earlier), so if you want statisti
Proper name for "modified Bland–Altman plot" On acceptance: Acceptance of names depends on who you want them to be accepted by. Bland-Altman plots are simply Tukey mean-difference plots (and Tukey was there much earlier), so if you want statisticians to accept the name you possibly wouldn't name it after Bland and Altman. On the other hand, in some application areas (medicine or chemistry, perhaps), you'd probably get quizzical looks if you called mean-difference plots anything but Bland-Altman. [However, chances are someone was there before Tukey as well, many of these ideas are quite old.] On a suitable name: If those things you're calling "truth" are actually "truth" (not just observations with error, say), I'd probably just call what you have a residual plot (though it looks like those differences are negative residuals); depending on how your truth was obtained you might hyphenate in a descriptive noun after "residual" (e.g. if it was based on some gold-standard calibration you might call it a residual-standard plot). If your truth is really truth (and not just observations or even some higher-quality estimate) then you could even argue that error should be used in place of residual. On the suitability of the plot as a diagnostic: How were those "truth" values obtained? Are those just the actual data? If so "truth" is arguably a misnomer, and in that case you'd expect there to be some negative correlation in that plot; it wouldn't necessarily suggest a problem at all. We have many threads which explain (or even prove) that plots of $y-\hat{y}$ vs $y$ will have a positive correlation, if your "difference" axis is $\hat{y}-y$ then you'd expect a negative linear trend when the regression model was appropriate. What is the aim of the plot? How are you interpreting it? Here's a few of the existing posts relating to the issue of plotting residuals vs data: Trend in residuals vs dependent - but not in residuals vs fitted Does it make sense to study plots of residuals with respect to the dependent variable? What is the expected correlation between residual and the dependent variable?
Proper name for "modified Bland–Altman plot" On acceptance: Acceptance of names depends on who you want them to be accepted by. Bland-Altman plots are simply Tukey mean-difference plots (and Tukey was there much earlier), so if you want statisti
54,950
Stationarity of AR(1) process, stable filter
A stable filter is a filter which exists, and is causal. Causal means that your current observation is a function of past or contemporaneous noise, not future noise. Why do they use the word stable? Well, intuitively, you can see what happens when you simulate data from the model if $|\varphi| > 1$. You will see that the process cannot hover around some mean for all time. If you rewrite your model as $$ X_t - \mu = \varphi(X_{t-1} - \mu) + \epsilon_t $$ with $c = \mu(1-\varphi)$, then you can re-write it again as $$ (1-\varphi B) Y_t = \epsilon_t \tag{1} $$ where $B$ is the backshift operator and $Y_t = X_t - \mu$ is the demeaned process. A filter is a (possibly infinite) linear combination that you apply to white noise (I take white noise to mean errors that are mutually uncorrelated and mean zero. This doesn't mean that they are independent, necessarily.). Filtering white noise is a natural way to form time series data. We would write filtered noise as $$ \psi(B)\epsilon_t = \left(\sum_{j=-\infty}^{\infty}\psi_j B^j\right)\epsilon_t = \sum_{j=-\infty}^{\infty}\psi_j \epsilon_{t-j}, $$ where the collection of coefficients $\{\psi_j\}$ is our impulse response function. This only exists (has finite expectation and variance) if the coefficients far away get small fast enough. Usually they are assumed to be absolutely summable, that is $\sum_{j=-\infty}^{\infty} |\psi_j| < \infty$. Showing that this is a sufficient condition is a detail you might want to fill in yourself. Getting $\psi(B)$ our filter from $\varphi(B)$ our model's AR polynomial is not always something you can do, though. If we could divide both sides of (1) by $(1-\varphi B)$, then your model is $$ Y_t = \sum_{j=-\infty}^{\infty}\psi_j \epsilon_{t-j}, $$ and this is just like doing simple algebra. We would do this, and then figure out what each $\psi_j$ was in terms of $\varphi$. You can only do this, however, if the complex polynomial $1 - \varphi z$ has no roots on the unit circle (otherwise you would be dividing by zero there), or equivalently if $|\varphi|\neq1$ when you're writing the constraint in terms of the parameters instead of the complex number $z$. If moreover $|\varphi| < 1$ (or, if you're stating it in terms of $z$ again, the roots are outside of the unit circle), then your model is causal, and you don't have to filter future noise: $$ Y_t = \sum_{j=0}^{\infty}\psi_j \epsilon_{t-j} = \sum_{j=0}^{\infty}\varphi^j \epsilon_{t-j}. $$ See how the sum representing the lag runs from $0$ to $\infty$ now? Figuring out the coefficients of $\psi(B)$ in terms of $\varphi$ can be done by solving $(1 + \psi_1 B + \psi_2 B^2 + \cdots)(1 - \varphi B) = 1$, and this might be something you want to do yourself.
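The $\psi$-weights from that last step can be checked numerically with base R's ARMAtoMA, which expands exactly this causal filter; for an AR(1) with $|\varphi| < 1$ they come out as $\varphi^j$. A tiny sketch:
# psi-weights of the causal filter for an AR(1): psi_j = phi^j.
phi <- 0.6
ARMAtoMA(ar = phi, lag.max = 8)   # numerical inversion of (1 - phi * B)
phi^(1:8)                         # closed form, identical to the line above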
Stationarity of AR(1) process, stable filter
A stable filter is a filter which exists, and is causal. Causal means that your current observation is a function of past or contemporaneous noise, not future noise. Why do they use the word stable? W
Stationarity of AR(1) process, stable filter A stable filter is a filter which exists, and is causal. Causal means that your current observation is a function of past or contemporaneous noise, not future noise. Why do they use the word stable? Well, intuitively, you can see what happens when you simulate data from the model if $|\phi| > 1$. You will see the process could not hover around some mean for all time. If you rewrite your model as $$ X_t - \mu = \varphi(X_{t-1} - \mu) + \epsilon_t $$ with $c = \mu(1-\varphi)$, then you can re-write it again as $$ (1-\varphi B) Y_t = \epsilon_t \tag{1} $$ where $B$ is the backshift operator and $Y_t = X_t - \mu$ is the demeaned process. A filter is a (possibly infinite) linear combination that you apply to to white noise (I take white noise to mean errors that are mutually uncorrelated and mean zero. This doesn't mean that they are independent, necessarily.). Filtering white noise is a natural way to form time series data. We would write filtered noise as $$ \psi(B)\epsilon_t = \left(\sum_{j=-\infty}^{\infty}\psi_j B^j\right)\epsilon_t = \sum_{j=-\infty}^{\infty}\psi_j \epsilon_{t-j}, $$ where the collection of coefficients $\{\psi_j\}$ is our impulse response function. This only exists (has finite expectation and variance) if the coefficients far away get small fast enough. Usually they are assumed to be absolutely summable, that is $\sum_{j=-\infty}^{\infty} |\psi_j| < \infty$. Showing that this is a sufficient condition is a detail you might want to fill in yourself. Getting $\psi(B)$ our filter from $\varphi(B)$ our model's AR polynomial is not always something you can do, though. If we could divide both sides of (1) by $(1-\varphi B)$, then your model is $$ Y_t = \sum_{j=-\infty}^{\infty}\psi_j \epsilon_{t-j}, $$ and this is just like doing simple algebra. We would do this, and then figure out what each $\psi_j$ was in terms of $\varphi$. You can only do this, however, if the roots of the complex polynomial $1 - \varphi z$ are not zero (otherwise you would be dividing by zero), or equivalently if $|\varphi|\neq1$ if you're writing the constraint in terms of the parameters instead of the complex number $z$. If moreover $|\varphi| < 1$, (or if you're stating it in terms of $z$ again, the roots are outside of the unit circle), then your model is causal, and you don't have to filter future noise: $$ Y_t = \sum_{j=0}^{\infty}\psi_j \epsilon_{t-j} = \sum_{j=0}^{\infty}\varphi^j \epsilon_{t-j}. $$ See how the sum representing the lag runs from $0$ to $\infty$ now? Figuring out the coefficients of $\psi(B)$ in terms of $\phi$ can be done by solving $(1 + \psi_1 B + \psi_2 B^2 + \cdots)(1 - \varphi B) = 1$, and this might be something you want to do yourself.
Stationarity of AR(1) process, stable filter A stable filter is a filter which exists, and is causal. Causal means that your current observation is a function of past or contemporaneous noise, not future noise. Why do they use the word stable? W
54,951
Stationarity of AR(1) process, stable filter
This is a somewhat simple answer: The "Definition" section of the same Wiki article says: An autoregressive model can thus be viewed as the output of an all-pole infinite impulse response filter whose input is white noise. In this case, the stability is a consequence of the so-called filter having its poles inside the unit circle.
Stationarity of AR(1) process, stable filter
This is a somewhat simple answer: The "Definition" section of the same Wiki article says: An autoregressive model can thus be viewed as the output of an all-pole infinite impulse response filter whos
Stationarity of AR(1) process, stable filter This is a somewhat simple answer: The "Definition" section of the same Wiki article says: An autoregressive model can thus be viewed as the output of an all-pole infinite impulse response filter whose input is white noise. In this case, the stability is a consequence of the so called filter having its poles inside the unit circle.
Stationarity of AR(1) process, stable filter This is a somewhat simple answer: The "Definition" section of the same Wiki article says: An autoregressive model can thus be viewed as the output of an all-pole infinite impulse response filter whos
54,952
What F-test is performed by $\texttt{lm()}$ function in R, at the end of the output?
The $F$ test always tests against the intercept-only model (y ~ 1), unless the model has no intercept, in which case a zero-mean model is used (y ~ 0). In your case, this means that the null hypothesis $\beta_1 = \dots = \beta_5 = 0$ is tested.
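A small R sketch (simulated data, illustrative variable names) showing that the F statistic printed at the end of summary(lm()) matches an explicit comparison against the intercept-only model:
set.seed(42)
d <- data.frame(x1 = rnorm(100), x2 = rnorm(100))
d$y <- 1 + 0.5 * d$x1 + rnorm(100)
full <- lm(y ~ x1 + x2, data = d)
null <- lm(y ~ 1, data = d)            # intercept-only model
summary(full)$fstatistic["value"]      # overall F reported by summary()
anova(null, full)$F[2]                 # same value from the explicit model comparison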
What F-test is performed by $\texttt{lm()}$ function in R, at the end of the output?
The $F$ test always tests against the intercept-only model (y ~ 1) unless the model has not intercept, then a zero-mean model is used (y ~ 0). In your case, this means that the null hypothesis $\beta_
What F-test is performed by $\texttt{lm()}$ function in R, at the end of the output? The $F$ test always tests against the intercept-only model (y ~ 1) unless the model has not intercept, then a zero-mean model is used (y ~ 0). In your case, this means that the null hypothesis $\beta_1 = \dots = \beta_5 = 0$ is tested.
What F-test is performed by $\texttt{lm()}$ function in R, at the end of the output? The $F$ test always tests against the intercept-only model (y ~ 1) unless the model has not intercept, then a zero-mean model is used (y ~ 0). In your case, this means that the null hypothesis $\beta_
54,953
Testing whether two data sets are statistically different
It depends on the type of statistical differences you are looking for. It also depends upon the data distribution. A handy small table summarizing the statistical tests, along with their use cases, is here: https://cyfar.org/types-statistical-tests
Testing whether two data sets are statistically different
It depends on the type of statistical diffrences you are looking. It also depends upon the data distribution. A great small table summarizing the statistical test are here with the use cases : https:
Testing whether two data sets are statistically different It depends on the type of statistical diffrences you are looking. It also depends upon the data distribution. A great small table summarizing the statistical test are here with the use cases : https://cyfar.org/types-statistical-tests
Testing whether two data sets are statistically different It depends on the type of statistical diffrences you are looking. It also depends upon the data distribution. A great small table summarizing the statistical test are here with the use cases : https:
54,954
Interpretation of a spline
The only difference is in an intercept term. This is standard with smooth terms in models of this kind. Taking your two plots and resizing the red one to be on the same scale as the other one, then shifting the y-axis to align the two: we can see that they are otherwise identical -- one is just a shift of the other (well, that and the fact that the red one is only evaluated at data points while the black one has been evaluated on a fine grid).
Interpretation of a spline
The only difference is in an intercept term. This is standard with smooth terms in models of this kind. Taking your two plots and resizing the red one to be on the same scale as the other one, then sh
Interpretation of a spline The only difference is in an intercept term. This is standard with smooth terms in models of this kind. Taking your two plots and resizing the red one to be on the same scale as the other one, then shifting the y-axis to align the two: we can see that they are otherwise identical -- one is just a shift of the other (well, that and the fact that the red one is only evaluated at data points while the black one has been evaluated on a fine grid).
Interpretation of a spline The only difference is in an intercept term. This is standard with smooth terms in models of this kind. Taking your two plots and resizing the red one to be on the same scale as the other one, then sh
54,955
Interpretation of a spline
To add a little to @Glen_b's answer, the standard splines in mgcv are subject to constraints to enable their identification as they are confounded with the model intercept term. The constraint mgcv uses is $$\sum_i f_j(x_{ij}) = 0 ~~ \forall ~ j$$ This is the sum-to-zero constraint, where $f_j$ is a spline function and $x_{ij}$ is the $i$th observation of the $j$th variable. This constraint results in the splines being centred around zero. It also results in better-behaved confidence intervals on the estimated smooth functions than other identifiability constraints. If you have a single smooth, you can use the shift argument to plot.gam() to add on the intercept and scale the y-axis in response units (assuming family = gaussian); for non-Gaussian models you'd also need to use the trans argument to apply the inverse of the link function after the shift has been added.
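A minimal sketch of the centring constraint and the shift argument, assuming the mgcv package is installed; the data come from gamSim() and are purely illustrative:
library(mgcv)
set.seed(2)
dat <- gamSim(1, n = 200, scale = 2)        # built-in example data from mgcv
m <- gam(y ~ s(x2), data = dat)
sum(predict(m, type = "terms")[, "s(x2)"])  # ~0: the smooth sums to zero over the data
plot(m, shift = coef(m)[1], seWithMean = TRUE)  # plot the smooth on the response scale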
Interpretation of a spline
To add a little to @Glen_b's answer, the standard splines in mgcv are subject to constraints to enable their identification as they are confounded with the model intercept term. The constraint mgcv us
Interpretation of a spline To add a little to @Glen_b's answer, the standard splines in mgcv are subject to constraints to enable their identification as they are confounded with the model intercept term. The constraint mgcv uses is $$\sum_i f_j(x_{ij}) = 0 ~~ \forall ~ j$$ This is the sum-to-zero constraint, where $f_j$ is a spline function and $x_{ij}$ is the $i$th observation of the $j$th variable. This constraint results in the splines being centred around zero. It also results in better behaved confidence intervals on the estimated smooth functions than other identifiability constraints. If you have a single smooth, you can use the shift argument to plot.gam() to add on the intercept to scale the y-axis in response units (assuming family = gaussian); for non-Gaussian models you'd also need to use the trans argument to post apply the inverse of the link functions once shift had been added.
Interpretation of a spline To add a little to @Glen_b's answer, the standard splines in mgcv are subject to constraints to enable their identification as they are confounded with the model intercept term. The constraint mgcv us
54,956
Degrees freedom reported by the lmer model don't seem plausible
Are you using the lmerTest package to get your p-values to output for the summary of an lmer object? If so, you are estimating degrees of freedom using the Satterthwaite approximation. The top of your output should say something like: t-tests use Satterthwaite approximations to degrees of freedom I assume you are asking if these dfs are plausible because they are not integers—they have decimals? Since there is no straightforward way of calculating dfs for these multilevel models, different approximations are used, which gives you the dfs that aren't necessarily integers.
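A short sketch of where those non-integer dfs come from, assuming the lme4 and lmerTest packages; the sleepstudy data are just a stand-in for the asker's model:
library(lmerTest)   # masks lme4::lmer with a version that adds Satterthwaite dfs
m <- lmer(Reaction ~ Days + (Days | Subject), data = lme4::sleepstudy)
summary(m)$coefficients   # note the non-integer "df" column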
Degrees freedom reported by the lmer model don't seem plausible
Are you using the lmerTest package to get your p-values to output for the summary of an lmer object? If so, you are estimating degrees of freedom using the Satterthwaite approximation. The top of your
Degrees freedom reported by the lmer model don't seem plausible Are you using the lmerTest package to get your p-values to output for the summary of an lmer object? If so, you are estimating degrees of freedom using the Satterthwaite approximation. The top of your output should say something like: t-tests use Satterthwaite approximations to degrees of freedom I assume you are asking if these dfs are plausible because they are not integers—they have decimals? Since there is no straightforward way of calculating dfs for these multilevel models, different approximations are used, which gives you the dfs that aren't necessarily integers.
Degrees freedom reported by the lmer model don't seem plausible Are you using the lmerTest package to get your p-values to output for the summary of an lmer object? If so, you are estimating degrees of freedom using the Satterthwaite approximation. The top of your
54,957
Degrees freedom reported by the lmer model don't seem plausible
How do you define "plausible" degrees of freedom? Simply speaking, calculating degrees of freedom for GLMMs is complicated and there is no simple formula for calculating them. Let me quote the r-sig-mixed-models FAQ: (There is an R FAQ entry on this topic, which links to a mailing list post by Doug Bates (there is also a voluminous mailing list thread reproduced on the R wiki). The bottom line is In general it is not clear that the null distribution of the computed ratio of sums of squares is really an F distribution, for any choice of denominator degrees of freedom. While this is true for special cases that correspond to classical experimental designs (nested, split-plot, randomized block, etc.), it is apparently not true for more complex designs (unbalanced, GLMMs, temporal or spatial correlation, etc.). For each simple degrees-of-freedom recipe that has been suggested (trace of the hat matrix, etc.) there seems to be at least one fairly simple counterexample where the recipe fails badly. Other df approximation schemes that have been suggested (Satterthwaite, Kenward-Roger, etc.) would apparently be fairly hard to implement in lme4/nlme, both because of a difference in notational framework and because naive approaches would be computationally difficult in the case of large data sets. (The Kenward-Roger approach has now been implemented in the pbkrtest package (as KRmodcomp): although it was derived for LMMs, Stroup [29] states on the basis of (unpresented) simulation results that it actually works reasonably well for GLMMs. However, at present the code in KRmodcomp only handles LMMs.) Note that there are several different issues at play in finite-size (small-sample) adjustments, which apply slightly differently to LMMs and GLMMs. When the responses are normally distributed and the design is balanced, nested etc. (i.e. the classical LMM situation), the scaled deviances and differences in deviances are exactly F-distributed and looking at the experimental design (i.e., which treatments vary/are replicated at which levels) tells us what the relevant degrees of freedom are. When the data are not classical (crossed, unbalanced, R-side effects), we might still guess that the deviances etc. are approximately F-distributed but that we don't know the real degrees of freedom — this is what the Satterthwaite, Kenward-Roger, Fai-Cornelius, etc. approximations are supposed to do. When the responses are not normally distributed (as in GLMs and GLMMs), and when the scale parameter is not estimated (as in standard Poisson- and binomial-response models), then the deviance differences are only asymptotically F- or chi-square-distributed (i.e. not for our real, finite-size samples). In standard GLM practice, we usually ignore this problem; there is some literature on finite-size corrections for GLMs under the rubrics of "Bartlett corrections" and "higher order asymptotics" (see McCullagh and Nelder, work by Cordeiro, and the cond package on CRAN [which works with GLMs, not GLMMs]), but it's rarely used. (The bias correction/Firth approach implemented in the brglm package attempts to address the problem of finite-size bias, not finite-size non-chi-squaredness of the deviance differences.) When the scale parameter in a GLM is estimated rather than fixed (as in Gamma or quasi-likelihood models), it is sometimes recommended to use an F test to account for the uncertainty of the scale parameter (e.g. 
Venables and Ripley recommend anova(…,test="F") for quasi-likelihood models) Combining these issues, one has to look pretty hard for information on small-sample or finite-size corrections for GLMMs: Feng et al 2004 [14] and Bell and Grunwald 2010 [6] look like good starting points, but it's not at all trivial. Because the primary authors of lme4 are not convinced of the utility of the general approach of testing with reference to an approximate null distribution, and because of the overhead of anyone else digging into the code to enable the relevant functionality (as a patch or an add-on), this situation is unlikely to change in the future.
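For what it's worth, a small sketch of the KRmodcomp() route mentioned in the quote, assuming the lme4 and pbkrtest packages; the sleepstudy data are only a placeholder for the asker's model:
library(lme4)
library(pbkrtest)
full  <- lmer(Reaction ~ Days + (Days | Subject), data = sleepstudy)
small <- lmer(Reaction ~ 1 + (Days | Subject), data = sleepstudy)
KRmodcomp(full, small)   # Kenward-Roger approximated F test and denominator df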
Degrees freedom reported by the lmer model don't seem plausible
How do you define "plausible" degrees of freedom? Simply speaking, calculating degrees of freedom for GLMMs is complicated and there is no simple formula for calculating them. Let me quote the r-sig-
Degrees freedom reported by the lmer model don't seem plausible How do you define "plausible" degrees of freedom? Simply speaking, calculating degrees of freedom for GLMMs is complicated and there is no simple formula for calculating them. Let me quote the r-sig-mixed-models FAQ: (There is an R FAQ entry on this topic, which links to a mailing list post by Doug Bates (there is also a voluminous mailing list thread reproduced on the R wiki). The bottom line is In general it is not clear that the null distribution of the computed ratio of sums of squares is really an F distribution, for any choice of denominator degrees of freedom. While this is true for special cases that correspond to classical experimental designs (nested, split-plot, randomized block, etc.), it is apparently not true for more complex designs (unbalanced, GLMMs, temporal or spatial correlation, etc.). For each simple degrees-of-freedom recipe that has been suggested (trace of the hat matrix, etc.) there seems to be at least one fairly simple counterexample where the recipe fails badly. Other df approximation schemes that have been suggested (Satterthwaite, Kenward-Roger, etc.) would apparently be fairly hard to implement in lme4/nlme, both because of a difference in notational framework and because naive approaches would be computationally difficult in the case of large data sets. (The Kenward-Roger approach has now been implemented in the pbkrtest package (as KRmodcomp): although it was derived for LMMs, Stroup [29] states on the basis of (unpresented) simulation results that it actually works reasonably well for GLMMs. However, at present the code in KRmodcomp only handles LMMs.) Note that there are several different issues at play in finite-size (small-sample) adjustments, which apply slightly differently to LMMs and GLMMs. When the responses are normally distributed and the design is balanced, nested etc. (i.e. the classical LMM situation), the scaled deviances and differences in deviances are exactly F-distributed and looking at the experimental design (i.e., which treatments vary/are replicated at which levels) tells us what the relevant degrees of freedom are. When the data are not classical (crossed, unbalanced, R-side effects), we might still guess that the deviances etc. are approximately F-distributed but that we don't know the real degrees of freedom — this is what the Satterthwaite, Kenward-Roger, Fai-Cornelius, etc. approximations are supposed to do. When the responses are not normally distributed (as in GLMs and GLMMs), and when the scale parameter is not estimated (as in standard Poisson- and binomial-response models), then the deviance differences are only asymptotically F- or chi-square-distributed (i.e. not for our real, finite-size samples). In standard GLM practice, we usually ignore this problem; there is some literature on finite-size corrections for GLMs under the rubrics of "Bartlett corrections" and "higher order asymptotics" (see McCullagh and Nelder, work by Cordeiro, and the cond package on CRAN [which works with GLMs, not GLMMs]), but it's rarely used. (The bias correction/Firth approach implemented in the brglm package attempts to address the problem of finite-size bias, not finite-size non-chi-squaredness of the deviance differences.) When the scale parameter in a GLM is estimated rather than fixed (as in Gamma or quasi-likelihood models), it is sometimes recommended to use an F test to account for the uncertainty of the scale parameter (e.g. 
Venables and Ripley recommend anova(…,test="F") for quasi-likelihood models) Combining these issues, one has to look pretty hard for information on small-sample or finite-size corrections for GLMMs: Feng et al 2004 [14] and Bell and Grunwald 2010 [6] look like good starting points, but it's not at all trivial. Because the primary authors of lme4 are not convinced of the utility of the general approach of testing with reference to an approximate null distribution, and because of the overhead of anyone else digging into the code to enable the relevant functionality (as a patch or an add-on), this situation is unlikely to change in the future.
Degrees freedom reported by the lmer model don't seem plausible How do you define "plausible" degrees of freedom? Simply speaking, calculating degrees of freedom for GLMMs is complicated and there is no simple formula for calculating them. Let me quote the r-sig-
54,958
lme4_fixed-effect model matrix is rank deficient so dropping 1 column / coefficient
In the data you link to, Language and useOfIntrinsic encode the exact same information. Think about it this way: Language gives the anova flexibility to estimate the mean for each language independently. Once this has been done, there is no additional among-language variation floating around to estimate the effect of useOfIntrinsic. Or think about it this way: imagine that the effect of useOfIntrinsic is absolutely anything you'd like. The model can't know if you're right or wrong, because whatever predictions it makes about each language based on useOfIntrinsic, it can just use the effect of Language to offset those predictions and give the correct group mean. So there's no way to estimate the useOfIntrinsic effect when Language is also in the model. One final way to think about it. You can think of the model you are trying to fit as asking for an estimate of the effect of useOfIntrinsic while controlling for the effect of Language. But once you've controlled for the effect of Language, you've already completely dealt with the differences between languages that you might want to attribute to useOfIntrinsic. To put both variables in a single model, you either need some independent variation in the two variables (i.e. some variation in useOfIntrinsic within a single language), or you need to place some additional constraints on how you estimate the effect of Language. One possibility would be to experiment with estimating Language as a random effect, but I don't necessarily recommend this given that you only have five languages in the sample. You do not need to apply any correction for changing which language is the reference group. This is not a situation where you are estimating five different models--this is just five different parameterizations of the exact same model. You are looking at the exact same results five different ways. The results will be the exact same each time, up to the appropriate constants involved in the reparameterization.
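A toy R sketch of the aliasing described above (the factor levels and names here are hypothetical, not the asker's actual data): when one factor is completely determined by the other, lm() drops the redundant coefficient and alias() shows why.
set.seed(7)
d <- data.frame(
  Language       = rep(c("A", "B", "C"), each = 4),
  useOfIntrinsic = rep(c("no", "yes", "yes"), each = 4),  # fully determined by Language
  y              = rnorm(12)
)
m <- lm(y ~ Language + useOfIntrinsic, data = d)
summary(m)   # the useOfIntrinsic coefficient is NA: nothing left for it to explain
alias(m)     # shows the exact linear dependence among the model matrix columns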
lme4_fixed-effect model matrix is rank deficient so dropping 1 column / coefficient
In the data you link to, Language and useOfIntrinsic encode the exact same information. Think about it this way: Language gives the anova flexibility to estimate the mean for each language independen
lme4_fixed-effect model matrix is rank deficient so dropping 1 column / coefficient In the data you link to, Language and useOfIntrinsic encode the exact same information. Think about it this way: Language gives the anova flexibility to estimate the mean for each language independently. Once this has been done, there is no additional among-language variation floating around to estimate the effect of useOfIntrinsic. Or think about it this way: imagine that the effect of useOfIntrinsic is absolutely anything you'd like. The model can't know if you're right or wrong, because whatever predictions it makes about each language based on useOfIntrinsic, it can just use the effect of Language to offset those predictions and give the correct group mean. So there's no way to estimate the useOfIntrinsic effect when Language is also in the model. One final way to think about it. You can think of the model you are trying to fit as asking for an estimate of the effect of useOfIntrinsic while controlling for the effect of Language. But once you've controlled for the effect of Language, you've already completely dealt with the differences between languages that you might want to attribute to useOfIntrinsic. To put both variables in a single model, you either need some independent variation in the two variables (i.e. some variation in useOfIntrinsic within a single language), or you need to place some additional constraints on how you estimate the effect of Language. One possibility would be to experiment with estimating Language as a random effect, but I don't necessarily recommend this given that you only have five languages in the sample. You do not need to apply any correction for changing which language is the reference group. This is not a situation where you are estimating five different models--this is just five different parameterizations of the exact same model. You are looking at the exact same results five different ways. The results will be the exact same each time, up to the appropriate constants involved in the reparameterization.
lme4_fixed-effect model matrix is rank deficient so dropping 1 column / coefficient In the data you link to, Language and useOfIntrinsic encode the exact same information. Think about it this way: Language gives the anova flexibility to estimate the mean for each language independen
54,959
What's the use of the embedding matrix in a char-rnn seq2seq model?
Embeddings are dense vector representations of the characters. The rationale behind using them is to convert an arbitrary discrete id to a continuous representation. The main advantage is that back-propagation is possible over continuous representations while it is not over discrete representations. A second advantage is that the vector representation might carry additional information through its position relative to the other characters' embeddings. This is still a hot area of research. If you are interested in learning more, check out the word2vec algorithms, where vector embeddings are learned for words and capture interesting relationships. For example, there is an interesting write-up here: https://deeplearning4j.org/word2vec.html
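Conceptually, an embedding layer is just a lookup into a trainable (vocabulary size x dimension) matrix; here is a tiny R sketch with random stand-in values in place of learned weights.
set.seed(8)
vocab <- c("a", "b", "c", "d")
embedding_dim <- 3
E <- matrix(rnorm(length(vocab) * embedding_dim),
            nrow = length(vocab), dimnames = list(vocab, NULL))  # stand-in for learned weights
ids <- c("c", "a", "b")   # an input character sequence
E[ids, ]                  # the dense vectors fed to the RNN instead of the raw ids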
What's the use of the embedding matrix in a char-rnn seq2seq model?
Embeddings are dense vector representations of the characters. The rationale behind using it is to convert an arbitrary discrete id, to a continuous representation. The main advantage is that back-pr
What's the use of the embedding matrix in a char-rnn seq2seq model? Embeddings are dense vector representations of the characters. The rationale behind using it is to convert an arbitrary discrete id, to a continuous representation. The main advantage is that back-propagation is possible over continuous representations while it is not over discrete representations. A second advantage is that the vector representation might contain additional information based on its location compared to the other characters. This is still a hot area of research. If you are interested in learning more, check out the word2vec algorithms: vector embeddings are learned for words where interesting relationships are learned. For example an interesting write-up here: https://deeplearning4j.org/word2vec.html
What's the use of the embedding matrix in a char-rnn seq2seq model? Embeddings are dense vector representations of the characters. The rationale behind using it is to convert an arbitrary discrete id, to a continuous representation. The main advantage is that back-pr
54,960
At what level are covariates held constant in multiple logistic regression?
This is mostly addressed at What does “all else equal” mean in multiple regression? Namely, that they can be held constant at any value or level of the covariates. In some sense, it is easiest to explain them (or conceive of them) as being held at the means of the other continuous variables and the reference levels of the other categorical variables, but any value or level could be used. Furthermore, this assumes that there are no interaction terms in the model amongst your covariates, otherwise it is not generally possible to hold all else equal (that is also explained in the linked thread). The only added complication in a logistic regression context (or any generalized linear model in which the link is not the identity function), is that this only pertains to the linear predictor. For example, in logistic regression, the result of $\bf X \boldsymbol{\hat\beta}$ is a set of log odds. However, people often prefer to see $\hat p_i$, instead. That is of course fine, but it involves a nonlinear transformation. As a result, due to Jensen's inequality, the sigmoid curves you would get for the relationship between $X_1$ and $Y$ would differ based on whether $X_2$ is held constant at $\bar X_2$ or $\bar X_2 + s_{\bar X_2}$. The implication of this is that there isn't really any such thing as "all else equal" in the transformed space, only in the space of the linear predictor. If it helps to clarify these ideas, consider this simple simulation (coded in R):
set.seed(6666)                              # makes the example exactly reproducible
lo2p = function(lo){ exp(lo)/(1+exp(lo)) }  # we'll need this function
x1 = runif(500, min=0, max=10)              # generating X data
x2 = rbinom(500, size=1, prob=.5)
lo = -2.2 + 1.1*x2 + .44*x1                 # the true data generating process
p  = lo2p(lo)
y  = rbinom(500, size=1, prob=p)            # generating Y data
m  = glm(y~x1+x2, family=binomial)          # fitting the model & viewing the coefficients
summary(m)$coefficients
#                Estimate Std. Error   z value     Pr(>|z|)
# (Intercept) -2.0395304 0.25907518 -7.872350 3.480415e-15
# x1           0.4220811 0.04409752  9.571538 1.053267e-21
# x2           1.2582332 0.22653761  5.554191 2.789001e-08
x.seq = seq(from=0, to=10, by=.1)           # this is a sequence of X values for the plot
x2.0.lo = predict(m, newdata=data.frame(x1=x.seq, x2=0), type="link")  # predicted
x2.1.lo = predict(m, newdata=data.frame(x1=x.seq, x2=1), type="link")  # log odds
x2.0.p  = lo2p(x2.0.lo)                     # converted to probabilities
x2.1.p  = lo2p(x2.1.lo)
windows()
layout(matrix(1:2, nrow=2))
plot(x.seq, x2.0.lo, type="l", ylim=c(-2,3.5), ylab="log odds", xlab="x1",
     cex.axis=.9, main="Linear predictor")
lines(x.seq, x2.1.lo, col="red")
legend("topleft", legend=c("when x2=1", "when x2=0"), lty=1, col=2:1)
plot(x.seq, x2.0.p, type="l", ylim=c(0,1), yaxp=c(0,1,4), cex.axis=.8, las=1,
     xlab="x1", ylab="probability", main="Transformed")
lines(x.seq, x2.1.p, col="red")
On the scale of the linear predictor (i.e., the log odds), the slope on $X_1$ is $\approx 0.42$ whether you are holding $X_2$ at $0$ or $1$. That's because the lines are parallel. On the other hand, in the transformed space, the lines aren't parallel. The rate of change in $\hat p(Y=1)$ associated with a $1$-unit change in $X_1$ differs depending on whether $X_2 = 0$ or $X_2 = 1$. (It also differs depending on what value of $X_1$ you are starting from.)
At what level are covariates held constant in multiple logistic regression?
This is mostly addressed at What does “all else equal” mean in multiple regression? Namely, that they can be held constant at any value or level of the covariates. In some sense, it is easiest to ex
At what level are covariates held constant in multiple logistic regression? This is mostly addressed at What does “all else equal” mean in multiple regression? Namely, that they can be held constant at any value or level of the covariates. In some sense, it is easiest to explain them (or conceive of them) as being held at the means of the other continuous variables and the reference levels of the other categorical variables, but any value or level could be used. Furthermore, this assumes that there are no interaction terms in the model amongst your covariates, otherwise it is not generally possible to hold all else equal (that is also explained in the linked thread). The only added complication in a logistic regression context (or any generalized linear model in which the link is not the identity function), is that this only pertains to the linear predictor. For example, in logistic regression, the result of $\bf X \boldsymbol{\hat\beta}$ is a set of log odds. However, people often prefer to see $\hat p_i$, instead. That is of course fine, but it involves a nonlinear transformation. As a result, due to Jensen's inequality, the sigmoid curves you would get for the relationship between $X_1$ and $Y$ would differ based on whether $X_2$ is held constant at $\bar X_2$ or $\bar X_2 + s_{\bar X_2}$. The implication of this is that there isn't really any such thing as "all else equal" in the transformed space, only in the space of the linear predictor. If it helps to clarify these ideas, consider this simple simulation (coded in R): set.seed(6666) # makes the example exactly reproducible lo2p = function(lo){ exp(lo)/(1+exp(lo)) } # we'll need this function x1 = runif(500, min=0, max=10) # generating X data x2 = rbinom(500, size=1, prob=.5) lo = -2.2 + 1.1*x2 + .44*x1 # the true data generating process p = lo2p(lo) y = rbinom(500, size=1, prob=p) # generating Y data m = glm(y~x1+x2, family=binomial) # fitting the model & viewing the coefficients summary(m)$coefficients # Estimate Std. Error z value Pr(>|z|) # (Intercept) -2.0395304 0.25907518 -7.872350 3.480415e-15 # x1 0.4220811 0.04409752 9.571538 1.053267e-21 # x2 1.2582332 0.22653761 5.554191 2.789001e-08 x.seq = seq(from=0, to=10, by=.1) # this is a sequence of X values for the plot x2.0.lo = predict(m, newdata=data.frame(x1=x.seq, x2=0), type="link") # predicted x2.1.lo = predict(m, newdata=data.frame(x1=x.seq, x2=1), type="link") # log odds x2.0.p = lo2p(x2.0.lo) # converted to probabilities x2.1.p = lo2p(x2.1.lo) windows() layout(matrix(1:2, nrow=2)) plot(x.seq, x2.0.lo, type="l", ylim=c(-2,3.5), ylab="log odds", xlab="x1", cex.axis=.9, main="Linear predictor") lines(x.seq, x2.1.lo, col="red") legend("topleft", legend=c("when x2=1", "when x2=0"), lty=1, col=2:1) plot(x.seq, x2.0.p, type="l", ylim=c(0,1), yaxp=c(0,1,4), cex.axis=.8, las=1, xlab="x1", ylab="probability", main="Transformed") lines(x.seq, x2.1.p, col="red") On the scale of the linear predictor (i.e., the log odds), the slope on $X_1$ is $\approx 1.26$ whether you are holding $X_2$ at $0$ or $1$. That's because the lines are parallel. On the other hand, in the transformed space, the lines aren't parallel. The rate of change in $\hat p(Y=1)$ associated with a $1$-unit change in $X_1$ differs depending on whether $X_2 = 0$ or $X_2 = 1$. (It also differs depending on what value of $X-1$ you are starting from.)
At what level are covariates held constant in multiple logistic regression? This is mostly addressed at What does “all else equal” mean in multiple regression? Namely, that they can be held constant at any value or level of the covariates. In some sense, it is easiest to ex
54,961
At what level are covariates held constant in multiple logistic regression?
Coding really doesn't matter, because when it comes down to it, regression coefficients are always based on slope, i.e., $\Delta y/\Delta x$. Categorical factors are always broken down to either $k-1$ dummy indicators for each $k$-level factor (corner point coding, level-1's $\Delta y/\Delta x$ goes to constant term) or $k$ dummy indicator variables (sum-to-zero constraints, no constant term). Fundamental to regression is also the concept that $x$-predictors are not random variables, hence, the levels of every $x$-variable are supposed to be experimentally controlled values, which can in reality be set by e.g. a variometer. For example, if age is a predictor, then the model will assume that at each age $18, 19, \ldots, 85+$ you enrolled experimental subjects for which $y$ was measured. After all, this is what is done for each level of a categorical factor. Regarding inferential tests of hypotheses, once you have overcome coding issues, there is a series of partial $F$-tests, which can be employed to address your specific question. There is one caveat to tell students when learning regression, for e.g. serum plasma protein expression or mineral (element) concentrations, which is that, instead of thinking about a change in $y$ for a one-unit change in $x$, or concentration, for a significant positive slope the clinical interpretation is that subjects with greater $y$-values had greater $x$-values.
At what level are covariates held constant in multiple logistic regression?
Coding really doesn't matter, because when it comes down to it, regression coefficients are always based on slope, i.e., $\Delta y/\Delta x$. Categorical factors are always broken down to either $k-1
At what level are covariates held constant in multiple logistic regression? Coding really doesn't matter, because when it comes down to it, regression coefficients are always based on slope, i.e., $\Delta y/\Delta x$. Categorical factors are always broken down to either $k-1$ dummy indicators for each $k$-level factor (corner point coding, level-1's $\Delta y/\Delta x$ goes to constant term) or $k$ dummy indicator variables (sum-to-zero constraints, no constant term). Fundamental to regression is also the concept that $x$-predictors are not random variables, hence, the levels of every $x$-variable are supposed to be experimentally controlled values, which can in reality be set by e.g. a variometer. For example, if age is a predictor, then the model will assume that at each age $18, 19, \ldots, 85+$ you enrolled experimental subjects for which $y$ was measured. After all, this is what is done for each level of a categorical factor. Regarding inferential tests of hypotheses, once you have overcome coding issues, there is a series of partial $F$-tests, which can be employed to address your specific question. There is one caveat to tell students when learning regression, for e.g. serum plasma protein expression or mineral (element) concentrations, which is that, instead of thinking about a change in $y$ for a one-unit change in $x$, or concentration, for a significant positive slope the clinical interpretation is that subjects with greater $y$-values had greater $x$-values.
At what level are covariates held constant in multiple logistic regression? Coding really doesn't matter, because when it comes down to it, regression coefficients are always based on slope, i.e., $\Delta y/\Delta x$. Categorical factors are always broken down to either $k-1
54,962
Can one truly fight outliers with more data?
As with most questions about outliers, I don't think there's an easy answer. It will depend on your situation. For instance, if you are modeling the relationship between race and income, and, just by chance, Michael Jordan answers your survey, then more data can help because it can clarify the situation, but, since very few people get to "be like Mike", you would need millions of cases to fully remove the effect of having him in your sample. (And N = 1000 will make it much clearer than N = 100 that income is not remotely normally distributed.) On the other hand, if one of your variables is height, then more people will smooth things out because, while Jordan is tall, he's not so freakishly tall that he shouldn't show up in a data set. Sometimes a variable you think is normal isn't, at least not in the population you are dealing with. I found this with weight in adult humans: it's right-skewed in many populations. A large sample shows this. A small one doesn't. It will also depend on what method you are using for classification and whether the outlier is like the closest inliers. If you were, for instance, using trees to classify people into "professional basketball players" and "others", then MJ being an outlier on height would be no problem. Nor his being an outlier on income. But if you happened to get someone who was very tall but not a basketball player, that would make the tree weird and more data might not help (or it might - I think it depends on the algorithm). For cluster analysis, the effect of an outlier (and more data) is likely to be different for single linkage and complete linkage. And so on.
Can one truly fight outliers with more data?
As with most questions about outliers, I don't think there's an easy answer. It will depend on your situation. For instance, if you are modeling the relationship between race and income, and, just by
Can one truly fight outliers with more data? As with most questions about outliers, I don't think there's an easy answer. It will depend on your situation. For instance, if you are modeling the relationship between race and income, and, just by chance, Michael Jordan answers your survey, then more data can help because it can clarify the situation, but, since very few people get to "be like Mike" you would need millions of cases to fully remove the effect of having him in your sample. (And N = 1000 will make it much clearer than N = 100 that income is not remotely normally distributed). On the other hand, if one of your variables is height, then more people will smooth things out because, while Jordan is tall, he's not so freakishly tall that he shouldn't show up in a data set. Sometime a variable you think is normal isn't, at least not in the population you are dealing with. I found this with weight in adult humans: It's right skew in many populations. A large sample shows this. A small one doesn't. It will also depend on what method you are using for classification and whether the outlier is like the closest inliers. If you were, for instance, using trees to classify people into "professional basketball players" and "others" then MJ being an outlier on height would be no problem. Nor his being an outlier on income. But if you happened to get a very tall person who was very tall but not a basketball player, that would make the tree weird and more data might not help (or it might - I think it depends on the algorithm). For cluster analysis, the effect of an outlier (and more data) is likely to be different for single linkage and complete linkage. And so on.
Can one truly fight outliers with more data? As with most questions about outliers, I don't think there's an easy answer. It will depend on your situation. For instance, if you are modeling the relationship between race and income, and, just by
54,963
Can one truly fight outliers with more data?
If your outliers occur because of natural extremes in the distribution, then yes, your estimates will become more stable with more data. Let's say you're using a logistic regression, in a theoretical case where the outcome depends on an observed variable with some normally distributed noise. Outliers will come from the noise: occasionally you will hit a noise value that's, say, three standard deviations from the mean. This will have an effect on the estimated intercept and coefficient in your model, and the effect will be much stronger if you don't have much data. With more data, the effect of these accidents will average out. But this is a very theoretical case. If your outliers arise from something stranger, or something that you didn't include in the model but is actually structural, or maybe they are arbitrarily large, then more data might not be able to save your model.
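A quick simulation sketch of that averaging-out point (purely illustrative numbers, not the asker's model): the slope estimate from a noisy logistic model is far less variable at n = 10000 than at n = 100.
set.seed(3)
est <- function(n) {
  x <- rnorm(n)
  y <- rbinom(n, 1, plogis(-1 + 2 * x + rnorm(n, sd = 2)))  # heavy extra noise in the linear predictor
  coef(glm(y ~ x, family = binomial))["x"]
}
sd(replicate(200, est(100)))    # spread of the slope estimate with little data
sd(replicate(200, est(10000)))  # much tighter with more data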
Can one truly fight outliers with more data?
If your outliers occur because of natural outliers in the distribution, then yes, your estimations will become more stable with more data. Let's say you're using a logistic regression, in a theoretica
Can one truly fight outliers with more data? If your outliers occur because of natural outliers in the distribution, then yes, your estimations will become more stable with more data. Let's say you're using a logistic regression, in a theoretical case where the outcome depends on an observed variable with some normally distributed noise. Outliers will come in the noise, and accidentally you hit a value in the noise that's three standard deviations outside of the normal distribution. Then this will have an effect on the estimated intercept and coefficient in your model, and this will be much stronger if you don't have much data. With more data, the effect of these accidents will average out. But this is a very theoretical case. If your outliers arise from something more weird, or something that you didn't include in the model but is actually structural, or maybe they are arbitrarily large, then more data might not be able to save your model.
Can one truly fight outliers with more data? If your outliers occur because of natural outliers in the distribution, then yes, your estimations will become more stable with more data. Let's say you're using a logistic regression, in a theoretica
54,964
Can one truly fight outliers with more data?
Outliers are those samples that deviate from the pattern(s) of most of the data. Usually, the number of outliers is far smaller than the number of normal samples. So, for each new sample, it is more likely to be a normal sample than an outlier. Thus I think gathering more data can help fight outliers.
Can one truly fight outliers with more data?
Outliers are those samples that deviate from the pattern(s) of most data. Usually, the number of outliers are far less than normal samples. So, for each new sample, it is more likely that it is a norm
Can one truly fight outliers with more data? Outliers are those samples that deviate from the pattern(s) of most data. Usually, the number of outliers are far less than normal samples. So, for each new sample, it is more likely that it is a normal sample than outlier. Thus I think, gathering more data can help fight outliers.
Can one truly fight outliers with more data? Outliers are those samples that deviate from the pattern(s) of most data. Usually, the number of outliers are far less than normal samples. So, for each new sample, it is more likely that it is a norm
54,965
How to normalize if MAD equals zero?
If at least 50% of your observations are identical then yes, this normalisation operation wouldn't make sense either mathematically or intuitively. I would probably consider binning the observations as suggested before. For instance, all observations with the same value will be labelled group 1 and everything else group 2. That being said, if you really want to maintain the numerical nature of the feature then you could try various transformations first (such as the log, square root, etc.) to minimise the effect of the outliers and then normalise using the traditional methods.
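For concreteness, a tiny R sketch of the problem (made-up numbers): once more than half the values are tied, the MAD collapses to zero and MAD-based scaling is undefined.
set.seed(9)
x <- c(rep(1, 60), rnorm(40, mean = 5))
mad(x)                       # 0
median(abs(x - median(x)))   # the unscaled MAD is also 0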
How to normalize if MAD equals zero?
If at least 50% of your observations are identical then yes, this normalisation operation wouldn’t make sense mathematically as well as intuitively. I would probably consider binning the observations
How to normalize if MAD equals zero? If at least 50% of your observations are identical then yes, this normalisation operation wouldn’t make sense mathematically as well as intuitively. I would probably consider binning the observations as suggested before. For instance, all observations with the same value will be labelled group 1 and everything else group 2. That being said, if you really want to maintain the numerical nature of the feature then you could try various transformations first (such as the log, square root etc) to minimise the effect of the outliers and then normalise using the traditional methods.
How to normalize if MAD equals zero? If at least 50% of your observations are identical then yes, this normalisation operation wouldn’t make sense mathematically as well as intuitively. I would probably consider binning the observations
54,966
How to normalize if MAD equals zero?
Normalization is not at all straightforward, as this question indicates. Consider small numbers of large outliers. Even though they don't contribute to MAD, their final values normalized by MAD/median will be very high in absolute values, probably higher than their final values would be had you normalized by SD/mean. If you are trying to get all your features on a common scale for, say, fair relative penalization in ridge regression, LASSO, or penalized maximum likelihood, even that choice of normalization will affect the results. In your case with more than 50% identical values, none of the usual candidates for robust measures of scale will work as they all break down in that case. Like MAD, the $S_n$ and $Q_n$ measures developed in the paper you cite break down at 50%. I suppose you could try to use different order statistics than the median in some way, but then you are going back toward the measure of scale being dominated by outliers. One thing that came to mind (against usual advice) is binning such features to treat them as ordinal variables. In this case binning might not be so bad, if the main interest is whether or not the feature value differs from that single highly prevalent value and, if so, in which direction. That changes this problem into another difficult problem, however, which is how best to normalize an ordinal variable. This page, this page and this page provide entries to the discussion. It seems that knowledge of the underlying subject matter and what you are trying to accomplish with normalization, rather than a simple algorithm, might provide the best answer to your question.
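A small sketch of that breakdown, assuming the robustbase package (which implements the $S_n$ and $Q_n$ estimators): with 60% tied values, all three robust scale estimates return zero.
library(robustbase)
set.seed(10)
x <- c(rep(1, 60), rnorm(40, mean = 5))
c(mad = mad(x), Sn = Sn(x), Qn = Qn(x))   # all 0 once a majority of values are tied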
How to normalize if MAD equals zero?
Normalization is not at all straightforward, as this question indicates. Consider small numbers of large outliers. Even though they don't contribute to MAD, their final values normalized by MAD/median
How to normalize if MAD equals zero? Normalization is not at all straightforward, as this question indicates. Consider small numbers of large outliers. Even though they don't contribute to MAD, their final values normalized by MAD/median will be very high in absolute values, probably higher than their final values would be had you normalized by SD/mean. If you are trying to get all your features on a common scale for, say, fair relative penalization in ridge regression, LASSO, or penalized maximum likelihood, even that choice of normalization will affect the results. In your case with more than 50% identical values, none of the usual candidates for robust measures of scale will work as they all break down in that case. Like MAD, the $S_n$ and $Q_n$ measures developed in the paper you cite break down at 50%. I suppose you could try to use different order statistics than the median in some way, but then you are going back toward the measure of scale being dominated by outliers. One thing that came to mind (against usual advice) is binning such features to treat them as ordinal variables. In this case binning might not be so bad, if the main interest is whether or not the feature value differs from that single highly prevalent value and, if so, in which direction. That changes this problem into another difficult problem, however, which is how best to normalize an ordinal variable. This page, this page and this page provide entries to the discussion. It seems that knowledge of the underlying subject matter and what you are trying to accomplish with normalization, rather than a simple algorithm, might provide the best answer to your question.
How to normalize if MAD equals zero? Normalization is not at all straightforward, as this question indicates. Consider small numbers of large outliers. Even though they don't contribute to MAD, their final values normalized by MAD/median
54,967
How to normalize if MAD equals zero?
Another way might be to introduce some deviation (jitter) into the constant values. For example, if 50% of the values are 1, replace them with random values in the range from 0.9999 to 1.0001.
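A sketch of that jittering idea in R (made-up data; whether the resulting very small scale is actually useful depends on the application):
set.seed(11)
x <- c(rep(1, 60), rnorm(40, mean = 5))
x_jit <- x
x_jit[x == 1] <- runif(sum(x == 1), 0.9999, 1.0001)  # jitter only the tied values
mad(x_jit)   # no longer exactly zero, so MAD-based scaling is at least defined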
How to normalize if MAD equals zero?
Another way might be to induce some deviance on the 50% constant values. For example, if 50% of the values are 1, generate random values with range from 0.9999 to 1.0001 for constant values
How to normalize if MAD equals zero? Another way might be to induce some deviance on the 50% constant values. For example, if 50% of the values are 1, generate random values with range from 0.9999 to 1.0001 for constant values
How to normalize if MAD equals zero? Another way might be to induce some deviance on the 50% constant values. For example, if 50% of the values are 1, generate random values with range from 0.9999 to 1.0001 for constant values
54,968
How much is overfitting?
There are no hard-and-fast rules about what constitutes "over-fitting". When using regression, heuristics are sometimes given in terms of the ratio of the sample size to the number of parameters, rather than the difference in predictive accuracy in-sample versus out-of-sample. E.g., in a regression context, I recall reading both 3 and 5 observations per parameter being suggested as the minimum. In predictive applications, the goal is to improve your out-of-sample prediction, so if you have a model that is 20% worse out-of-sample than in-sample, but still predicts better than a model that is only 1% worse out-of-sample than in-sample, you should prefer the model that is over-fitted. Of course, in such a situation you will also want to find ways of reducing the over-fitting while not degrading the out-of-sample prediction.
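A small simulated sketch of that trade-off (illustrative only): fit a simple and a flexible model on the same training data, then compare both the train/test gap and the absolute test error; the gap alone does not tell you which model to prefer.
set.seed(4)
n <- 200
x <- runif(n, -2, 2)
y <- sin(2 * x) + rnorm(n, sd = 0.4)
train <- 1:100; test <- 101:200
rmse <- function(fit, idx) sqrt(mean((y[idx] - predict(fit, data.frame(x = x[idx])))^2))
m_small <- lm(y ~ x,          data = data.frame(x = x[train], y = y[train]))
m_big   <- lm(y ~ poly(x, 8), data = data.frame(x = x[train], y = y[train]))
c(small_train = rmse(m_small, train), small_test = rmse(m_small, test))  # gap and test error, simple model
c(big_train   = rmse(m_big, train),   big_test   = rmse(m_big, test))    # gap and test error, flexible model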
How much is overfitting?
There are no hard-and-fast rules about what constitutes "over-fitting". When using regression, heuristics are sometimes given in terms of the ratio of the sample size to the number of parameters, rat
How much is overfitting? There are no hard-and-fast rules about what constitutes "over-fitting". When using regression, heuristics are sometimes given in terms of the ratio of the sample size to the number of parameters, rather than the difference in predictive accuracy in-sample versus out-of-sample. E.g., in a regression context, I recall reading both 3 and 5 observations per observation being the minimum. In predictive applications, the goal is to improve your out-of-sample prediction, so if you have a model that is 20% worse out of sample than in-sample, but still predicts better than a model that is only 1% better out-of-sample than in-sample, you should going to prefer the model that is over-fitted. Of course, in such a situation you will also want to find ways of reducing the over-fitting, while keeping the out-of-sample prediction constant.
How much is overfitting? There are no hard-and-fast rules about what constitutes "over-fitting". When using regression, heuristics are sometimes given in terms of the ratio of the sample size to the number of parameters, rat
54,969
How much is overfitting?
Another suggestion: You could also take a look at the OOB error in Random Forest. This OOB error, computed on the training data, should actually be somewhat worse than the error on the test data (since each OOB prediction only uses the trees that weren't trained on that observation, and therefore fewer trees than the full forest).
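A short sketch comparing the two error estimates, assuming the randomForest package; iris is just a convenient stand-in dataset:
library(randomForest)
set.seed(5)
idx  <- sample(nrow(iris), 100)
rf   <- randomForest(Species ~ ., data = iris[idx, ])
oob  <- mean(predict(rf) != iris$Species[idx])   # predict() with no newdata returns OOB predictions
test <- mean(predict(rf, iris[-idx, ]) != iris$Species[-idx])
c(oob = oob, test = test)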
How much is overfitting?
Another suggestion: You could also take a look at the OOB error in Random Forest. This OOB error should actually be worse on the training data then the error on the test data (since on the training da
How much is overfitting? Another suggestion: You could also take a look at the OOB error in Random Forest. This OOB error should actually be worse on the training data then the error on the test data (since on the training data only trees are used which haven't been trained on the observation and therefor these are less trees).
How much is overfitting? Another suggestion: You could also take a look at the OOB error in Random Forest. This OOB error should actually be worse on the training data then the error on the test data (since on the training da
54,970
How much is overfitting?
Interesting question. I'm not familiar with such a definition. You can evaluate that using cross-validation: (1) run cross-validation X times, each time measuring the accuracy on both the training set and the test set; (2) test whether the train accuracies and test accuracies are likely to come from the same distribution (e.g. using the Two-Sample t-Test for Equal Means). Note that this procedure ignores the model complexity. If your model size is tiny with respect to the dataset, it cannot encode much of it. If you see a significant difference between the train and test accuracies, there might be another problem there; an example of such a problem is a difference between your train and test datasets. On the other hand, if you have a huge model and the accuracies are similar, you still might have overfitted but also failed to exploit that overfitting. An example of such a case is a large random forest in which one of its trees contributes most of the predictive power.
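A rough R sketch of that procedure (toy data; it uses a paired t-test on the per-split accuracies, one reasonable variant of the comparison described above):
set.seed(6)
dat <- iris[iris$Species != "setosa", ]
dat$Species <- droplevels(dat$Species)
acc <- replicate(20, {
  idx <- sample(nrow(dat), 70)
  m <- glm(Species ~ Sepal.Length + Sepal.Width, data = dat[idx, ], family = binomial)
  pr <- function(d) ifelse(predict(m, d, type = "response") > 0.5,
                           levels(dat$Species)[2], levels(dat$Species)[1])
  c(train = mean(pr(dat[idx, ]) == dat$Species[idx]),
    test  = mean(pr(dat[-idx, ]) == dat$Species[-idx]))
})
t.test(acc["train", ], acc["test", ], paired = TRUE)  # is the train/test gap distinguishable from 0?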
How much is overfitting?
Interesting question. I'm not familiar with such a definition. You can evaluate that using cross validation. Do cross validation for X times 1.1 In each time measure the accuracy on both the trainin
How much is overfitting? Interesting question. I'm not familiar with such a definition. You can evaluate that using cross validation. Do cross validation for X times 1.1 In each time measure the accuracy on both the training set and test set Test whether the train accuracy and test accuracy are likely to come from the same distribution (e.g. using Two-Sample t-Test for Equal Means) Note that this procedure ignores the model complexity. If your model size is tiny with respect to the dataset, it cannot encode much of it. If you see a significant difference between the train and test accuracies there might be another problem there. Example of such a problem is a difference between your train and test datasets. On the other hand, if you have a huge model and the accuracies are similar, you still might have over fitted but also failed to exploit that overfitting. Example of such a case is a large random forest while a one of its trees contributes most predictive power.
How much is overfitting? Interesting question. I'm not familiar with such a definition. You can evaluate that using cross validation. Do cross validation for X times 1.1 In each time measure the accuracy on both the trainin
54,971
What evaluation metric to use for high class imbalance where i want to capture most of the positive (ones) in the dataset
Neither seems appropriate. Rather, assign whatever penalty scores you want to the two kinds of errors (mistaking a 0 for a 1, and mistaking a 1 for a 0) and sum the errors. This allows you to precisely control the tradeoff.
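A minimal sketch in R; the penalties of 5 and 1 below are placeholders and should come from the application:
# total misclassification cost for a given threshold, with user-chosen penalties
cost <- function(truth, prob, threshold = 0.5, c_fn = 5, c_fp = 1) {
  pred <- as.integer(prob > threshold)
  c_fn * sum(truth == 1 & pred == 0) +   # missed positives
  c_fp * sum(truth == 0 & pred == 1)     # false alarms
}
# e.g. pick the threshold that minimises this cost on validation data:
# sapply(seq(0.05, 0.95, by = 0.05), function(t) cost(y_val, p_val, t))   # y_val, p_val are hypothetical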
What evaluation metric to use for high class imbalance where i want to capture most of the positive
Neither seems appropriate. Rather, assign whatever penalty scores you want to the two kinds of errors (mistaking a 0 for a 1, and mistaking a 1 for a 0) and sum the errors. This allows you to precisel
What evaluation metric to use for high class imbalance where i want to capture most of the positive (ones) in the dataset Neither seems appropriate. Rather, assign whatever penalty scores you want to the two kinds of errors (mistaking a 0 for a 1, and mistaking a 1 for a 0) and sum the errors. This allows you to precisely control the tradeoff.
What evaluation metric to use for high class imbalance where i want to capture most of the positive Neither seems appropriate. Rather, assign whatever penalty scores you want to the two kinds of errors (mistaking a 0 for a 1, and mistaking a 1 for a 0) and sum the errors. This allows you to precisel
54,972
What evaluation metric to use for high class imbalance where i want to capture most of the positive (ones) in the dataset
You can look at Precision, Recall and the F1 score, which is simply the harmonic mean of Precision and Recall.
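For reference, a small base-R helper computing all three from 0/1 labels and predictions (the example vectors are arbitrary):
prf <- function(truth, pred) {
  tp <- sum(pred == 1 & truth == 1)
  fp <- sum(pred == 1 & truth == 0)
  fn <- sum(pred == 0 & truth == 1)
  precision <- tp / (tp + fp)
  recall    <- tp / (tp + fn)
  c(precision = precision,
    recall    = recall,
    F1        = 2 * precision * recall / (precision + recall))
}
prf(truth = c(1, 1, 0, 0, 1), pred = c(1, 0, 0, 1, 1))   # all three are 2/3 for this toy example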
What evaluation metric to use for high class imbalance where i want to capture most of the positive
You can look at the Precision,Recall and the F1 score which is nothing but the harmonic mean of the Precision and Recall.
What evaluation metric to use for high class imbalance where i want to capture most of the positive (ones) in the dataset You can look at the Precision,Recall and the F1 score which is nothing but the harmonic mean of the Precision and Recall.
What evaluation metric to use for high class imbalance where i want to capture most of the positive You can look at the Precision,Recall and the F1 score which is nothing but the harmonic mean of the Precision and Recall.
54,973
What evaluation metric to use for high class imbalance where i want to capture most of the positive (ones) in the dataset
Your reading is correct in the sense that AUC-PRC is a better metric for imbalanced classification compared to AUC-ROC. I disagree with Kodi in the sense that AUC could still be useful in these scenarios. As Santanu said, you could look at precision, recall and F1; I would add sensitivity and Kappa. However, the choice of metric is not the only way to handle imbalanced classification. You could also look at sampling techniques such as SMOTE, at treating it as a probability-estimation problem with an adjusted decision threshold, and at other approaches discussed here and elsewhere.
What evaluation metric to use for high class imbalance where i want to capture most of the positive
Your reading is correct in the sense that AUC-PRC is a better metric for imbalanced classification compared to AUC-ROC. I disagree with Kodi in sense that AUC could be useful in these scenarios. Like
What evaluation metric to use for high class imbalance where i want to capture most of the positive (ones) in the dataset Your reading is correct in the sense that AUC-PRC is a better metric for imbalanced classification compared to AUC-ROC. I disagree with Kodi in sense that AUC could be useful in these scenarios. Like Santanu said you could look for precision, recall and F1. I would want to add Sensitivity and Kappa. However, choice of a metric is not only the way to handle imbalanced classification. You could look for sampling techniques such as SMOTE, converting it to a probability estimation problem with biased threshold and others discussed here and elsewhere.
What evaluation metric to use for high class imbalance where i want to capture most of the positive Your reading is correct in the sense that AUC-PRC is a better metric for imbalanced classification compared to AUC-ROC. I disagree with Kodi in sense that AUC could be useful in these scenarios. Like
54,974
Modeling Correlated Outputs in Multi-Output Neural Networks
The question is whether the likelihood function you specified does so. E.g. if you model 2 variables $y_1, y_2$, you will often use a likelihood function of the form $$ p(y_1, y_2 | x) = p(y_1|x) p(y_2|x). $$ For example, this is the same as summing up the log-likelihoods of two Bernoulli variables if you do two classifications simultaneously for a loss function. This will not capture correlations, as the likelihood function assumes independence. If you want to capture correlations, you need to use a different likelihood function. Two ways of doing so are a) formulating the problem in an autoregressive fashion or b) using additional stochastic variables. In the former case, you use a likelihood function of the form $$p(y_1, y_2 | x) = p(y_1|x) p(y_2|y_1, x).$$ Note the $y_1$ in the condition for $y_2$. A starting point on how to do this is [1]. The latter amounts to a graphical model of the form $$p(y_1, y_2 | x) = \int p(y_1|x, z) p(y_2|x, z) p(z|x) dz.$$ Check out [2] for a description of how to learn these beasts. [1] Uria, Benigno, et al. "Neural Autoregressive Distribution Estimation." Journal of Machine Learning Research 17.205 (2016): 1-37. [2] Tang, Yichuan, and Ruslan R. Salakhutdinov. "Learning stochastic feedforward neural networks." Advances in Neural Information Processing Systems. 2013.
Modeling Correlated Outputs in Multi-Output Neural Networks
The question is, if the likelihood function you specified does so. E.g. if you model 2 variables $y_1, y_2$, you will often use a likelihood function of the form $$ p(y_1, y_2 | x) = p(y_1|x) p(y_2|x)
Modeling Correlated Outputs in Multi-Output Neural Networks The question is, if the likelihood function you specified does so. E.g. if you model 2 variables $y_1, y_2$, you will often use a likelihood function of the form $$ p(y_1, y_2 | x) = p(y_1|x) p(y_2|x). $$ For example, this is the same as summing up the log-likelihoods of two Bernoulli variables if you do two classificiations simultaneously for a loss function. This will not capture correlations, as the likelihood function assumes independency. If you want to capture correlations, you need to use a different likelihood function. Two ways of doing so are a) formulate the problem in an autoregressive fashion or b) using additional stochastic variables. In the former case, you use a likelihood function of the form $$p(y_1, y_2 | x) = p(y_1|x) p(y_2|y_1, x).$$ Note the $y_1$ in the condition for $y_2$. A starting point on how to do this is [1]. The latter is accounts to a graphical model of the form $$p(y_1, y_2 | x) = \int p(y_1|x, z) p(y_2|x, z) p(z|x) dz.$$ Check out [2] for a description on how to learn these beasts. [1] Uria, Benigno, et al. "Neural Autoregressive Distribution Estimation." Journal of Machine Learning Research 17.205 (2016): 1-37. [2] Tang, Yichuan, and Ruslan R. Salakhutdinov. "Learning stochastic feedforward neural networks." Advances in Neural Information Processing Systems. 2013.
Modeling Correlated Outputs in Multi-Output Neural Networks The question is, if the likelihood function you specified does so. E.g. if you model 2 variables $y_1, y_2$, you will often use a likelihood function of the form $$ p(y_1, y_2 | x) = p(y_1|x) p(y_2|x)
54,975
Should I use the logits or the scaled probabilities from them to extract my predictions?
It is not exactly clear what kind of model you are referring to, but in the case of logistic regression, multinomial logistic regression and similar models, they output probabilities. As for predictions, notice that passing the logits through the softmax function does not change the ordering of the values, so if $x_1 < x_2$, then $\DeclareMathOperator{\softmax}{\mathrm{softmax}} \softmax(x_1) < \softmax(x_2)$; it doesn't really matter which of the two you compare.
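A two-line R illustration of that monotonicity (the logit values are arbitrary):
softmax <- function(z) exp(z - max(z)) / sum(exp(z - max(z)))
logits <- c(2.1, -0.3, 0.7)
which.max(logits)            # class 1
which.max(softmax(logits))   # class 1 as well -- the argmax is unchanged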
Should I use the logits or the scaled probabilities from them to extract my predictions?
It is not exactly clear what kind of model you are referring to, but in case of logistic regression, multinomial logistic regression and similar models, they output probabilities. As about prediction
Should I use the logits or the scaled probabilities from them to extract my predictions? It is not exactly clear what kind of model you are referring to, but in case of logistic regression, multinomial logistic regression and similar models, they output probabilities. As about predictions, notice that if you pass the logits through softmax function then it does not change the relations between values, so if $x_1 < x_2$, then $\DeclareMathOperator{\softmax}{\mathrm{softmax}} \softmax(x_1) < \softmax(x_2)$, so it doesn't really matter what do you compare.
Should I use the logits or the scaled probabilities from them to extract my predictions? It is not exactly clear what kind of model you are referring to, but in case of logistic regression, multinomial logistic regression and similar models, they output probabilities. As about prediction
54,976
Fisher R-to-Z transform for group correlation stats
You are right: it's not necessary to perform Fisher's transform. Do the t-test. It uses an exact null distribution, whereas comparing the Fisher z-transform to a normal distribution would be an approximation. Trying to do both the z-transform and the transformation to a t-distribution would be complete nonsense. The formula for a t-statistic that you give is only for Pearson correlation coefficients, not for z-statistics.
Fisher R-to-Z transform for group correlation stats
You are right: it's not necessary to perform Fisher's transform. Do the t-test. It uses an exact null distribution, whereas comparing Fisher z-transform to a normal distribution would be an approximat
Fisher R-to-Z transform for group correlation stats You are right: it's not necessary to perform Fisher's transform. Do the t-test. It uses an exact null distribution, whereas comparing Fisher z-transform to a normal distribution would be an approximation. Trying to do both the z-transform and the transformation to t-distribution would be complete nonsense. The formula for a t-statistic that you give is only for Pearson correlation coefficients, not for z-statistics.
Fisher R-to-Z transform for group correlation stats You are right: it's not necessary to perform Fisher's transform. Do the t-test. It uses an exact null distribution, whereas comparing Fisher z-transform to a normal distribution would be an approximat
54,977
Fisher R-to-Z transform for group correlation stats
If you analyse the $r$ values directly you are assuming they all have the same precision, which is only likely to be true if they are (a) all based on the same $n$ and (b) all more or less the same. You could compute the standard errors and then do your analysis weighting each by the inverse of its sampling variance. It would seem easier to transform them to $z$, especially if they are all based on the same $n$, as then you could assume equal variances. If they are not based on the same $n$ then you definitely need to weight them. If I were doing this I would treat it as a meta-analysis problem because software is readily available for doing this on correlation coefficients and it takes care of the weighting. I would enter the $z$ with their standard errors and get an overall summary $z$ (which I would transform back to $r$, obviously) and, more importantly, a confidence interval for $z$ (and hence $r$). It would also provide a significance test if you really like significance tests. Meta-analysis software would also give you an estimate of the heterogeneity of the estimated coefficients, which would indicate whether in fact summarising them as a single number was a fruitful thing to do.
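A sketch of that workflow with the metafor package; the correlations and sample sizes below are made up purely for illustration:
library(metafor)
r <- c(0.42, 0.55, 0.31, 0.60)    # one correlation per unit (hypothetical values)
n <- c(80, 120, 80, 60)           # number of observations behind each r
dat <- escalc(measure = "ZCOR", ri = r, ni = n)   # Fisher z and its sampling variance
res <- rma(yi, vi, data = dat)                    # random-effects summary plus heterogeneity estimates
predict(res, transf = transf.ztor)                # back-transform the pooled estimate and its CI to r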
Fisher R-to-Z transform for group correlation stats
If you analyse the $r$ values directly you are assuming they all have the same precision which is only likely to be true if they are (a) all based on the same $n$ (b) all more or less the same. You co
Fisher R-to-Z transform for group correlation stats If you analyse the $r$ values directly you are assuming they all have the same precision which is only likely to be true if they are (a) all based on the same $n$ (b) all more or less the same. You could compute the standard errors and then do your analysis weighting each by the inverse of its sampling variance. It would seem easier to transform them to $z$ especially if they are all based on the same $n$ as then you could assume equal variances. If they are not based on the same $n$ then you definitely need to weight them. If I were doing this I would treat it as a meta-analysis problem because software is readily available for doing this on correlation coefficients and it takes care of the weighting. I would enter the $z$ with their standard errors and get an overall summary $z$ (which I would transform back to $r$ obviously) and more importantly a confidence interval for $z$ (and hence $r$). It would also provide a significance test if you really like significance tests. Meta-analysis software would also give you an estimate of the heterogeneity of the estimated coefficients which would indicate whether in fact summarising them as a single number was a fruitful thing to so.
Fisher R-to-Z transform for group correlation stats If you analyse the $r$ values directly you are assuming they all have the same precision which is only likely to be true if they are (a) all based on the same $n$ (b) all more or less the same. You co
54,978
Fisher R-to-Z transform for group correlation stats
If I am reading you correctly, you are comparing the mean r values of two groups. In general, even though the t test is robust to violations of normality, you have greater power with normal distributions. Therefore, if some of your r's are high (over .6 or so) it would be a good idea to transform them. Naturally, the t test doesn't care what the numbers are (they are correlations) but only their distribution.
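In R the transform is just atanh(); for example, with hypothetical per-participant correlations r1 and r2 for the two groups:
r1 <- c(0.75, 0.68, 0.81, 0.66, 0.72)   # group 1 correlations (illustrative)
r2 <- c(0.55, 0.49, 0.62, 0.58, 0.51)   # group 2 correlations (illustrative)
t.test(atanh(r1), atanh(r2))            # the same t-test, but on the Fisher z-transformed values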
Fisher R-to-Z transform for group correlation stats
If I am reading you correctly, you are comparing the mean r values of two groups. In general, even though the t test is robust to violations of normality, you have greater power with normal distributi
Fisher R-to-Z transform for group correlation stats If I am reading you correctly, you are comparing the mean r values of two groups. In general, even though the t test is robust to violations of normality, you have greater power with normal distributions. Therefore, if some of your r's are high (over .6 or so) it would be a good idea to transform them. Naturally, the t test doesn't care what the numbers are (they are correlations) but only their distribution.
Fisher R-to-Z transform for group correlation stats If I am reading you correctly, you are comparing the mean r values of two groups. In general, even though the t test is robust to violations of normality, you have greater power with normal distributi
54,979
Automated forecasting of 1000 weekly time series (food product retail)
Learn R, at least well enough to program for loops to loop over your time series. Plus, learn enough R to read and write your data (e.g., using the read.table() and write.table() commands). There are tons of introductions to R around, pick almost any one. Read Hyndman & Athanasopoulos "Forecasting: principles and practice", which contains many worked examples in R. This will get you a long way. Retail time series will likely exhibit some yearly seasonality, so concentrate on seasonal methods (e.g., seasonal exponential smoothing, but don't necessarily include trends). Skip ARIMA models - ARIMA doesn't like "long" seasonal cycles, and 52 weeks qualify as "long", plus, you may need to take seasonal differences, and most retail time series don't cover many years, so each differencing loses a lot of data. Consider some kind of regression-based forecast to capture the effect of promotions, if there are any in your data. Potentially: Attend this year's International Symposium on Forecasting in Cairns, in particular one of the pre-conference workshops, like the one on forecasting with R or the one on forecasting to meet demand, which is given by an experienced retail forecaster. (Disclaimer: the guy giving the second workshop is a colleague of mine. If you come to Cairns, say hello to me and feel free to ask me all about retail forecasting.)
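A bare-bones version of such a loop, assuming the series sit in columns of a text file and each spans at least two full years of weekly data (the file name, column layout and 13-week horizon are placeholders):
library(forecast)
sales <- read.table("weekly_sales.txt", header = TRUE)   # one column per product (assumed layout)
fc <- vector("list", ncol(sales))
names(fc) <- names(sales)
for (item in names(sales)) {
  y <- ts(sales[[item]], frequency = 52)   # weekly data with yearly seasonality
  fc[[item]] <- stlf(y, h = 13)            # STL decomposition + ETS on the seasonally adjusted series
}
write.table(sapply(fc, function(f) as.numeric(f$mean)), "forecasts.txt")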
Automated forecasting of 1000 weekly time series (food product retail)
Learn R, at least well enough to program for loops to loop over your time series. Plus, learn enough R to read and write your data (e.g., using the read.table() and write.table() commands). There are
Automated forecasting of 1000 weekly time series (food product retail) Learn R, at least well enough to program for loops to loop over your time series. Plus, learn enough R to read and write your data (e.g., using the read.table() and write.table() commands). There are tons of introductions to R around, pick almost any one. Read Hyndman & Athanasopoulos "Forecasting: principles and practice", which contains many worked examples in R. This will get you a long way. Retail time series will likely exhibit some yearly seasonality, so concentrate on seasonal methods (e.g., seasonal exponential smoothing, but don't necessarily include trends). Skip ARIMA models - ARIMA doesn't like "long" seasonal cycles, and 52 weeks qualify as "long", plus, you may need to take seasonal differences, and most retail time series don't cover many years, so each differencing loses a lot of data. Consider some kind of regression-based forecast to capture the effect of promotions, if there are any in your data. Potentially: Attend this year's International Symposium on Forecasting in Cairns, in particular one of the pre-conference workshops, like the one on forecasting with R or the one on forecasting to meet demand, which is given by an experienced retail forecaster. (Disclaimer: the guy giving the second workshop is a colleague of mine. If you come to Cairns, say hello to me and feel free to ask me all about retail forecasting.)
Automated forecasting of 1000 weekly time series (food product retail) Learn R, at least well enough to program for loops to loop over your time series. Plus, learn enough R to read and write your data (e.g., using the read.table() and write.table() commands). There are
54,980
Automated forecasting of 1000 weekly time series (food product retail)
AUTOBOX (a piece of software that I helped develop) has an R version that you can try out. It will detect not only the ARIMA structure but also the week-of-the-year structure and changes in that structure, along with level shifts and local time trends, while incorporating the lead and lag effects of user-specified causals.
Automated forecasting of 1000 weekly time series (food product retail)
AUTOBOX ( a piece of software that I had helped develop) has an R version that you can try out . It will detect not only the ARIMA structure but the week of the year structure and changes in the wee
Automated forecasting of 1000 weekly time series (food product retail) AUTOBOX ( a piece of software that I had helped develop) has an R version that you can try out . It will detect not only the ARIMA structure but the week of the year structure and changes in the week-of-the-year structure along with level shifts and local time trends while incorporating the lead and lag effects of user-specified causals.
Automated forecasting of 1000 weekly time series (food product retail) AUTOBOX ( a piece of software that I had helped develop) has an R version that you can try out . It will detect not only the ARIMA structure but the week of the year structure and changes in the wee
54,981
price elasticity and time series modelling
The most important question here is 'Is the observed variation in the price exogenous?'. Unless you ran a pricing experiment, the answer is usually no for any observational data, and there is no hope of recovering the price elasticity just from observing sales and price. That is, the price in your equation is bound to be correlated with unobserved factors that affect demand, and that breaks the basic assumption for parameter identification of the least-squares method. Even if you're the price setter, your pricing is affected by various demand factors (you charge a high price when you expect high demand...), and therefore the observed prices are endogenous. So you often get a negative 'price elasticity' from the least-squares estimation. What you need is a real or quasi-experiment that makes your price variation purely exogenous. An unexpected promotion at a random time can be one of them, but real promos rarely have such properties. Another route is to look at individual customer-level data if the item of interest is frequently bought. Modeling assumptions on individual utility/choice help a lot with identification (given that you believe those assumptions...). There would still be the concern of endogenous prices, but to a much lesser degree than in the aggregate reduced form that you're looking at. Read any graduate econometrics textbook and look for topics like simultaneous-equations models and instrumental variables; price elasticity/demand estimation is one of the prime examples. One of the leading scholars related to the second approach is J.P. Dube at Booth. He has done a lot of research on modeling individual-level purchase data, and he has also worked on some pricing experiments. Sophisticated time-series concerns can come much later, after the above issue is addressed.
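If you do have a credible instrument (for example a cost shifter that moves price but not demand), a two-stage least-squares sketch with the AER package looks like this; the data are simulated here only to show how OLS is biased while IV recovers the true elasticity:
library(AER)
set.seed(1)
n <- 500
cost_shifter <- rnorm(n)                          # e.g. input cost: shifts price but not demand
demand_shock <- rnorm(n)                          # unobserved demand factor (the source of endogeneity)
price    <- exp(0.5 * cost_shifter + 0.3 * demand_shock + rnorm(n, sd = 0.1))
quantity <- exp(2 - 1.5 * log(price) + demand_shock + rnorm(n, sd = 0.1))
d <- data.frame(quantity, price, cost_shifter)
ols <- lm(log(quantity) ~ log(price), data = d)                      # biased by the endogenous price
iv  <- ivreg(log(quantity) ~ log(price) | cost_shifter, data = d)    # 2SLS using the cost shifter
cbind(OLS = coef(ols), IV = coef(iv))             # the IV slope should be close to the true -1.5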
price elasticity and time series modelling
The most important question here is 'Is the observed variation in the price exogenous?'. Unless you did pricing experiment, the answer is usually no for any observational data and there is no hope of
price elasticity and time series modelling The most important question here is 'Is the observed variation in the price exogenous?'. Unless you did pricing experiment, the answer is usually no for any observational data and there is no hope of recovering price elasticity just from observing sales and price. That is, the price in your equation has to be correlated with unobserved factors that affect the demand and that breaks the basic assumption for parameter identification of the least square method. Even if you're the price setter, your pricing is affected by various demand factors(you charge high price when you expect high demand...) and therefore the observed prices are endogenous. So you often get negative 'price elasticity' from the least square estimation. What you need is a real or quasi experiment, that makes your price variation purely exogenous. Unexpected promotion in random timing can be one of them. But real promos rarely have such properties. Another route is to look at individual customer level data if the item of interest is frequently bought. Modeling assumptions on individual utility/choice helps a lot for the identification. (given that you believe those assumptions...) There still would be the concern of endogenous price, but in much less degree than the aggregate reduced form that you're looking at. Read any graduate econometrics textbook and look for topics like simultaneous equations model/instrumental variables. Price elasticity/demand estimation is one of the prime examples. One of the leading scholars related to the second approach is J.P.Dube at Booth. He's done a lot of research on modeling individual level purchase data. He also has worked on some pricing experiments too. Sophisticated time series concerns can come much later after the above issue is addressed.
price elasticity and time series modelling The most important question here is 'Is the observed variation in the price exogenous?'. Unless you did pricing experiment, the answer is usually no for any observational data and there is no hope of
54,982
How do I compute the correlation between two features and their output class?
Correlation is used as a method for feature selection and is usually calculated between a feature and the output class (filter methods for feature selection). It roughly translates to how much a small change in the feature is reflected in the output class. If the relationship is strong, then we say that the feature is highly correlated with the output class, and it is usually a very good idea to keep it around for the downstream tasks in the pipeline. That being said, a feature having a high correlation in, say, corpus XYZ won't necessarily have a high correlation in some other corpus, say ABC, and so the result cannot simply be carried over from one dataset to another. There are many metrics to measure the association between a feature and a class label, such as mutual information, chi-square, correlation coefficient scores, etc. numpy.correlate in Python does the trick for most of my work, or, if I want to use mutual information, sklearn has mutual_info_score.
How do I compute the correlation between two features and their output class?
Correlation is used as a method for feature selection and is usually calculated between a feature and the output class (filter methods for feature selection). It roughly translates to how much will th
How do I compute the correlation between two features and their output class? Correlation is used as a method for feature selection and is usually calculated between a feature and the output class (filter methods for feature selection). It roughly translates to how much will the change be reflected on the output class for a small change in the current feature. If the change is proportional and very high, then we say that the feature is highly correlated with the output class and is usually a very good idea to keep it around for any end of the pipeline tasks. That being said, a feature having higher correlation in say corpus XYZ won't necessarily mean that it will have a high correlation in some other corpus say ABC and thus cannot be translated from one dataset to another. There are many metrics to measure the correlation between a feature and a class label such as mutual information, chi-square, correlation coefficient scores etc. numpy.correlate in python does the trick for most of my work. or if i want to use mutual information then sklearn has mutual info score .
How do I compute the correlation between two features and their output class? Correlation is used as a method for feature selection and is usually calculated between a feature and the output class (filter methods for feature selection). It roughly translates to how much will th
54,983
How do I compute the correlation between two features and their output class?
This is called a correlation filter. If you are using R, you can have a look at the FSelector package's linear.correlation and rank.correlation functions. When class labels are categorical, you can use point-biserial, polyserial or polychoric correlation. The idea is that if feature 1 is highly correlated with the class label while feature 2 is not, we would select feature 1 over feature 2.
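For a binary class this needs no extra package, since the point-biserial correlation is just the Pearson correlation against a 0/1 coding of the label; a small simulated illustration:
set.seed(1)
X <- data.frame(f1 = rnorm(200), f2 = rnorm(200))
y <- rbinom(200, 1, plogis(2 * X$f1))                 # the class depends on f1, not on f2
sort(abs(sapply(X, cor, y = y)), decreasing = TRUE)   # f1 should rank clearly above f2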
How do I compute the correlation between two features and their output class?
This is called correlation filter. If you are using R, You can have a look at FSelector package's linear.correlation and rank.correlation functions. When class labels are categorical, you can use Poin
How do I compute the correlation between two features and their output class? This is called correlation filter. If you are using R, You can have a look at FSelector package's linear.correlation and rank.correlation functions. When class labels are categorical, you can use Point-Biserial Correlation, polyserial or polychloric correlation. The idea is if feature 1 is highly correlated with class label while feature 2 is not highly correlated we would select feature 1 over feature 2.
How do I compute the correlation between two features and their output class? This is called correlation filter. If you are using R, You can have a look at FSelector package's linear.correlation and rank.correlation functions. When class labels are categorical, you can use Poin
54,984
Survival bias in survival analysis
Survival bias occurs in retrospective studies where inclusion is in some sense outcome dependent (through outcomes or their moderators) but is treated as representative of a population at-risk at baseline. Your description does not give any details suggesting survival bias is an issue here. Censoring leads to a different type of bias, censoring bias, when not properly accounted for. Your analytic plan of using a Cox model does properly account for censoring thus eliminating censoring bias. Despite that, censoring does reduce the power of an analysis. Suppose you monitor 5,000 people but only 10 experience an outcome (death or other), the Cox model does not afford much more power than a survival analysis of only 10 people. Your description of the exposure is not exactly clear to me. It sounds like participants are eligible to participate in the study only if they have not yet begun a certain therapy. After a period of self-determined time, they begin a therapy. You then follow participants for an outcome (at the time of which they may be either on or off such a therapy). This is an analysis that should be done using time-varying covariates with some caveats. When I enter the study, irrespective of calendar time or age, my survival "clock" is at time 0. If I initiate therapy at day 10, and then die at day 20 I contribute two correlated observations to the sample: the first I live 0-10 days with no therapy and am censored at time 10, the second I live 0-10 days and die at time 10. The clock resets when I initiate therapy. Frailties are the Cox model equivalent of random effects that allow you to account for clustered observations in such a format. If age and/or calendar year are significant predictors of survival in such a study, you should consider adding them as covariates in the model. The caveat to time-varying covariates is as follows: initiation of therapy almost always depends on latent disease state. Patients whose initial hospitalization requires high acuity will initiate therapy more quickly, and likely die more quickly even if the therapy is beneficial. This leads to use-bias. If you measure indicators of disease state longitudinally (like blood pressure, physical functioning, or other), latent variable models or marginal structural models may be used to reduce such a bias.
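A structural sketch of that (start, stop] layout with R's survival package, on toy values; note that this sketch keeps study time rather than resetting the clock at therapy initiation (the alternative described above), and it uses cluster(id) for a robust variance where frailty(id) would give the random-effects version:
library(survival)
# one row per person per therapy state, with a time-varying therapy indicator (toy data)
d <- data.frame(id      = c(1, 1, 2, 2, 3, 4),
                tstart  = c(0, 10, 0, 5, 0, 0),
                tstop   = c(10, 20, 5, 30, 25, 18),
                therapy = c(0, 1, 0, 1, 0, 0),
                death   = c(0, 1, 0, 0, 1, 1))
coxph(Surv(tstart, tstop, death) ~ therapy + cluster(id), data = d)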
Survival bias in survival analysis
Survival bias occurs in retrospective studies where inclusion is in some sense outcome dependent (through outcomes or their moderators) but is treated as representative of a population at-risk at base
Survival bias in survival analysis Survival bias occurs in retrospective studies where inclusion is in some sense outcome dependent (through outcomes or their moderators) but is treated as representative of a population at-risk at baseline. Your description does not give any details suggesting survival bias is an issue here. Censoring leads to a different type of bias, censoring bias, when not properly accounted for. Your analytic plan of using a Cox model does properly account for censoring thus eliminating censoring bias. Despite that, censoring does reduce the power of an analysis. Suppose you monitor 5,000 people but only 10 experience an outcome (death or other), the Cox model does not afford much more power than a survival analysis of only 10 people. Your description of the exposure is not exactly clear to me. It sounds like participants are eligible to participate in the study only if they have not yet begun a certain therapy. After a period of self-determined time, they begin a therapy. You then follow participants for an outcome (at the time of which they may be either on or off such a therapy). This is an analysis that should be done using time-varying covariates with some caveats. When I enter the study, irrespective of calendar time or age, my survival "clock" is at time 0. If I initiate therapy at day 10, and then die at day 20 I contribute two correlated observations to the sample: the first I live 0-10 days with no therapy and am censored at time 10, the second I live 0-10 days and die at time 10. The clock resets when I initiate therapy. Frailties are the Cox model equivalent of random effects that allow you to account for clustered observations in such a format. If age and/or calendar year are significant predictors of survival in such a study, you should consider adding them as covariates in the model. The caveat to time-varying covariates is as follows: initiation of therapy almost always depends on latent disease state. Patients whose initial hospitalization requires high acuity will initiate therapy more quickly, and likely die more quickly even if the therapy is beneficial. This leads to use-bias. If you measure indicators of disease state longitudinally (like blood pressure, physical functioning, or other), latent variable models or marginal structural models may be used to reduce such a bias.
Survival bias in survival analysis Survival bias occurs in retrospective studies where inclusion is in some sense outcome dependent (through outcomes or their moderators) but is treated as representative of a population at-risk at base
54,985
Relationship between standard error of the mean and standard deviation
The samples are 'not normally distributed'. Don't they have to be to use pnorm? The question is asking about the distribution of sample means, not the distribution of the original variable. Under mild conditions, sample means will tend to be closer to normally distributed than the original variable was. See what happens when we sample from a distribution of counts (representing number of disturbances), which has population mean 15 and sd 10: (many elementary books attribute this tendency to the central limit theorem, though the central limit theorem doesn't tell us what will happen with small samples; nonetheless this is a real effect -- I'd argue it's better attributed to the Berry-Esseen inequality) What confuses me is the term standard error of the mean. The term means "the standard deviation of the distribution of sample means". See the histogram on the right above -- its standard deviation is consistent with 1 (for this large sample - 30000 values from the distribution of sample means - we got a standard deviation of just under 1.01). We see that the distribution of the sample means -- while not actually normal -- is quite close to normal in this case; using a normal distribution with mean 15 and standard deviation 1 as an approximation of the distribution of means (of samples of 100 observations from the original quite skewed distribution) will work quite well in this case. While $n=100$ was plenty large enough to treat the sample mean as approximately normal in the situation I simulated, it's not true for every distribution -- in some cases, even when the distribution of sample means will still be well approximated by a normal distribution in large samples -- you may need $n$ to be a great deal larger than 100 for it to work well; we don't know the population distribution here, so we don't know for sure that $n=100$ would be sufficient (it was for the example distribution I used, which you can see is at least moderately skewed); that $n=100$ is large enough to approximate as normal in this case is an assumption. 1) Why is standard deviation not 10 as described in the original problem? Because the distribution of sample means has a smaller standard deviation than the original variable that you took means of. This is why you divide the original standard deviation by $\sqrt{n}$ -- because that then gives the standard deviation of the distribution of means from samples of size $n$. 2) The answer gives the standard error of the mean as 1, yet this value is termed standard deviation in the R code (sd=1). Why is that? It's the standard deviation of the distribution of means (and the call to pnorm is because we're using the normal distribution to approximate the distribution of sample means).
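A small simulation along these lines; the negative binomial is just one convenient count distribution with mean 15 and sd 10, and the threshold of 17 in the last line is purely illustrative:
set.seed(1)
size <- 225 / 85                                   # negative binomial with mean 15 and variance 100
means <- replicate(30000, mean(rnbinom(100, size = size, mu = 15)))
c(mean = mean(means), sd = sd(means))              # sd of the sample means is close to 10/sqrt(100) = 1
pnorm(17, mean = 15, sd = 1, lower.tail = FALSE)   # normal approximation to P(sample mean > 17)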
Relationship between standard error of the mean and standard deviation
The samples are 'not normally disributed'. Don't they have to be to use pnorm? The question is asking about the distribution of sample means, not the distribution of the original variable. Under mild
Relationship between standard error of the mean and standard deviation The samples are 'not normally disributed'. Don't they have to be to use pnorm? The question is asking about the distribution of sample means, not the distribution of the original variable. Under mild conditions, sample means will tend to be closer to normally distributed than the original variable was. See what happens when we sample from a distribution of counts (representing number of disturbances), which has population mean 15 and sd 10: (many elementary books attribute this tendency to the central limit theorem, though the central limit theorem doesn't tell us what will happen with small samples; nonetheless this is a real effect -- I'd argue it's better attributed to the Berry-Esseen inequality) What confuses me is the term standard error of the mean. The term means "the standard deviation of the distribution of sample means". See the histogram on the right above -- its standard deviation is consistent with 1 (for this large sample - 30000 values from the distribution of sample means - we got a standard deviation of just under 1.01). We see that the distribution of the sample means -- while not actually normal -- is quite close to normal in this case; using a normal distribution with mean 15 and standard deviation 1 as an approximation of the distribution of means (of samples of 100 observations from the original quite skewed distribution) will work quite well in this case. While $n=100$ was plenty large enough to treat the sample mean as approximately normal in the situation I simulated, it's not true for every distribution -- in some cases, even when the distribution of sample means will still be well approximated by a normal distribution in large samples -- you may need $n$ to be a great deal larger than 100 for it work well; we don't know the population distribution here, so we don't know for sure that $n=100$ would be sufficient (it was for the example distribution I used, which you can see it at least moderately skewed); that n=100 is large enough to approximate as normal in this case is an assumption. 1) Why is standard deviation not 10 as described in the original problem? Because the distribution of sample means has a smaller standard deviation than the original variable that you took means of. This is why you divide the original standard deviation by $\sqrt{n}$ -- because that then gives the standard deviation of the distribution of means from samples of size $n$. 2) The answer gives the standard error of the mean as 1, yet this value is termed standard deviation in the R code (sd=1). Why is that ? It's the standard deviation of the distribution of means (and the call to pnorm is because we're using the normal distribution to approximate the distribution of sample means).
Relationship between standard error of the mean and standard deviation The samples are 'not normally disributed'. Don't they have to be to use pnorm? The question is asking about the distribution of sample means, not the distribution of the original variable. Under mild
54,986
Can someone please explain the truncated back propagation through time algorithm?
I am sure you have found your answer by now, but for others. The truncated part of Truncated Backpropagation through Time simply refers to at which point in time to stop calculating the gradients for the backpropagation phase. Let's say you truncate after $k$ steps; then the difference is that you calculate the sum below instead. $$ \frac{\partial{L}}{\partial A}\approx\sum_{t=T-k}^{T}\frac{\partial{L}}{\partial h_{t}}\frac{\partial^{+}{h_{t}}}{\partial A}=\sum_{t=T-k}^{T}(\frac{\partial{L}}{\partial h_{t}}\odot f'(Ah_{t-1}+Bx_{t}))h_{t-1}^{T}. $$ Here $\frac{\partial^{+}}{\partial{A}}$ denotes the "immediate" partial derivative with respect to $A$, i.e. the one that treats all terms other than an explicit $A$ as constant. Essentially you are truncating the sum in Equation (4) of Pascanu, Mikolov, and Bengio. At least this is my understanding of it, judging from Ilya Sutskever's pseudo-code in his thesis. Truncated BPTT is given below: 1: for t from 1 to T do 2: Run the RNN for one step, computing ht and zt 3: if t divides k1 then 4: Run BPTT (as described in sec. 2.5), from t down to t − k2 5: end if 6: end for
Can someone please explain the truncated back propagation through time algorithm?
I am sure you have found your answer by now, but for others. The truncated part of Truncated Backpropagation through Time simply refers to at which point in time to stop calculating the gradients for
Can someone please explain the truncated back propagation through time algorithm? I am sure you have found your answer by now, but for others. The truncated part of Truncated Backpropagation through Time simply refers to at which point in time to stop calculating the gradients for the backpropagation phase. Lets say you truncate after $k$ steps then the difference is you calculate the below instead. $$ \frac{\partial{L}}{\partial A}\approx\sum_{t=T-k}^{T}\frac{\partial{L}}{\partial h_{t}}\frac{\partial^{+}{h_{t}}}{\partial A}=\sum_{t=T-k}^{T}(\frac{\partial{L}}{\partial h_{t}}\odot f'(Ah_{t-1}+Bx_{t}))h_{t-1}^{T}. $$ Where $\frac{\partial^{+}}{\partial{A}}$ is the "immediate" partial wrt A, i.e. the one that assumes all terms other than an explicit A are constant. Essentially you are truncating the sum in Equation (4) of Pascanu, Mikolov, and Bengio. At least this is my understanding of it judging from Ilya Sutskever's psuedo-code in his thesis. Truncated BPTT is given below: 1: for t from 1 to T do 2: Run the RNN for one step, computing ht and zt 3: if t divides k1 then 4: Run BPTT (as described in sec. 2.5), from t down to t − k2 5: end if 6: end for
Can someone please explain the truncated back propagation through time algorithm? I am sure you have found your answer by now, but for others. The truncated part of Truncated Backpropagation through Time simply refers to at which point in time to stop calculating the gradients for
54,987
On real-world use of gamma distributions
According to Wikipedia, "the negative binomial distribution is sometimes considered the discrete analogue of the Gamma distribution". (See also comment by Scortchi.) It has similar interpretations to the Gamma distribution in terms of "waiting times". Note that for a Gamma distribution with shape parameter $\alpha$ and rate parameter $\beta$, the mean and variance are $$\mu=\frac{\alpha}{\beta}\,,\,\sigma^2=\frac{\alpha}{\beta^2}$$ while for a negative Binomial distribution with success probability $p$ and number of failures $r$, the mean and variance are $$\mu=\frac{pr}{1-p}\,,\,\sigma^2=\frac{pr}{(1-p)^2}$$ So equating the two would give $$\alpha\approx{pr}\,,\,\beta\approx{1-p}$$ (Note however that this may not be a great approximation for all parameter values, so it would be better to estimate the negative binomial parameters from your data directly.)
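If you follow the closing advice and estimate the negative binomial parameters directly from the count data, MASS gives a one-line maximum-likelihood fit (the simulated counts below are only for illustration):
library(MASS)
set.seed(1)
x <- rnbinom(500, size = 3, mu = 12)     # example count data
fitdistr(x, "negative binomial")         # ML estimates of size and mu, with standard errors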
On real-world use of gamma distributions
According to Wikipedia, "the negative binomial distribution is sometimes considered the discrete analogue of the Gamma distribution". (See also comment by Scortchi.) It has similar interpretations to
On real-world use of gamma distributions According to Wikipedia, "the negative binomial distribution is sometimes considered the discrete analogue of the Gamma distribution". (See also comment by Scortchi.) It has similar interpretations to the Gamma distribution in terms of "waiting times". Note that for a Gamma distribution with shape parameter $\alpha$ and rate parameter $\beta$, the mean and variance are $$\mu=\frac{\alpha}{\beta}\,,\,\sigma^2=\frac{\alpha}{\beta^2}$$ while for a negative Binomial distribution with success probability $p$ and number of failures $r$, the mean and variance are $$\mu=\frac{pr}{1-p}\,,\,\sigma^2=\frac{pr}{(1-p)^2}$$ So equating the two would give $$\alpha\approx{pr}\,,\,\beta\approx{1-p}$$ (Note however that this may not be a great approximation for all parameter values, so it would be better to estimate the negative binomial parameters from your data directly.)
On real-world use of gamma distributions According to Wikipedia, "the negative binomial distribution is sometimes considered the discrete analogue of the Gamma distribution". (See also comment by Scortchi.) It has similar interpretations to
54,988
Kappa Statistic for Variable Number of Raters
You can do this using a "generalized formula." The intuition here is to use the following formula for observed agreement, which uses the number of raters rather than conditionals for specific raters. $$p_o=\frac{1}{n'}\sum_{i=1}^{n'}\sum_{k=1}^q\frac{r_{ik}(r_{ik}^\star-1)}{r_i(r_i-1)}$$ where $n'$ is the number of items that were coded by two or more raters, $q$ is the number of possible categories, $r_{ik}$ is the number of raters that assigned item $i$ to category $k$, and $r_i$ is the number of raters that assigned item $i$ to any category. $r_{ik}^\star$ allows you to use a weighting scheme to account for non-nominal categories if desired. Click the links below to get more information from my website or read Gwet (2014). Formula and MATLAB function for generalized Cohen's kappa Formula and MATLAB function for generalized Scott's pi (AKA Fleiss' kappa) Reference Gwet, K. L. (2014). Handbook of inter-rater reliability: The definitive guide to measuring the extent of agreement among raters (4th ed.). Gaithersburg, MD: Advanced Analytics. Uebersax, J. S. (1982). A design-independent method for measuring the reliability of psychiatric diagnosis. Journal of Psychiatric Research, 17(4), 335–342.
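A direct base-R transcription of the unweighted case of that formula (i.e. taking $r_{ik}^\star=r_{ik}$), given an items-by-categories matrix of counts:
# R: matrix with one row per item and one column per category,
# entries = number of raters who assigned that item to that category
observed_agreement <- function(R) {
  ri   <- rowSums(R)
  keep <- ri >= 2                       # only items rated by two or more raters
  R    <- R[keep, , drop = FALSE]
  ri   <- ri[keep]
  mean(rowSums(R * (R - 1)) / (ri * (ri - 1)))
}
# e.g. 3 items, 2 categories, a variable number of raters per item:
observed_agreement(rbind(c(3, 1), c(2, 2), c(0, 2)))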
Kappa Statistic for Variable Number of Raters
You can do this using a "generalized formula." The intuition here is to use the following formula for observed agreement, which uses the number of raters rather than conditionals for specific raters.
Kappa Statistic for Variable Number of Raters You can do this using a "generalized formula." The intuition here is to use the following formula for observed agreement, which uses the number of raters rather than conditionals for specific raters. $$p_o=\frac{1}{n'}\sum_{i=1}^{n'}\sum_{k=1}^q\frac{r_{ik}(r_{ik}^\star-1)}{r_i(r_i-1)}$$ where $n'$ is the number of items that were coded by two or more raters, $q$ is the number of possible categories, $r_{ik}$ is the number of raters that assigned item $i$ to category $k$, and $r_i$ is the number of raters that assigned item $i$ to any category. $r_{ik}^\star$ allows you to use a weighting scheme to account for non-nominal categories if desired. Click the links below to get more information from my website or read Gwet (2014). Formula and MATLAB function for generalized Cohen's kappa Formula and MATLAB function for generalized Scott's pi (AKA Fleiss' kappa) Reference Gwet, K. L. (2014). Handbook of inter-rater reliability: The definitive guide to measuring the extent of agreement among raters (4th ed.). Gaithersburg, MD: Advanced Analytics. Uebersax, J. S. (1982). A design-independent method for measuring the reliability of psychiatric diagnosis. Journal of Psychiatric Research, 17(4), 335–342.
Kappa Statistic for Variable Number of Raters You can do this using a "generalized formula." The intuition here is to use the following formula for observed agreement, which uses the number of raters rather than conditionals for specific raters.
54,989
How can I need to de-transformation from diff(log(data),1)?
Say your original series is $y_t$ and you define the transformed series: $$\tilde{y}_t = \log(y_t)-\log(y_{t-1})$$ You then forecast the value of $\tilde{y}_{t+1}$ by using your ARIMA(1,0,2) model, giving you $\hat{\tilde{y}}_{t+1}$, but you want a forecast of $y_{t+1}$. You can compute it like this: $$\hat{y}_{t+1} = y_t \exp\left({\hat{\tilde{y}}_{t+1}}\right)$$ More generally, you can verify that the following telescoping sum: $$\log(y_t) = \log(y_0) + \sum_{i=1}^t \left(\log(y_i)-\log(y_{i-1})\right)$$ implies that the inverse transformation is: $$y_t = y_0 \exp{\left(\sum_{i=1}^t \tilde{y}_i\right)}$$ As a practical matter, the forecast::Arima function you are using will do all of this for you if you specify both the log-transform and the difference in the function call, instead of doing it by hand before calling it: fit <- Arima(dataset, order=c(1,1,2), lambda=0)
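For completeness, if the model had instead been fitted to the already-transformed series, the manual back-transform of the point forecasts would look like this (the built-in AirPassengers series is used purely as a stand-in for dataset):
library(forecast)
y <- AirPassengers                                       # stand-in for the original series `dataset`
fit_dlog <- Arima(diff(log(y)), order = c(1, 0, 2))      # model fitted to the log-differenced series
fc <- forecast(fit_dlog, h = 12)
y_hat <- as.numeric(tail(y, 1)) * exp(cumsum(fc$mean))   # undo the differencing, then the log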
How can I need to de-transformation from diff(log(data),1)?
Say your original series is $y_t$ and you define the transformed series: $$\tilde{y}_t = \log(y_t)-\log(y_{t-1})$$ You then forecast the value of $\tilde{y}_{t+1}$ by using your ARIMA(1,0,2) model, gi
How can I need to de-transformation from diff(log(data),1)? Say your original series is $y_t$ and you define the transformed series: $$\tilde{y}_t = \log(y_t)-\log(y_{t-1})$$ You then forecast the value of $\tilde{y}_{t+1}$ by using your ARIMA(1,0,2) model, giving you $\hat{\tilde{y}}_{t+1}$, but you want a forecast of $y_{t+1}$. You can compute it like this: $$\hat{y}_{t+1} = y_t \exp\left({\hat{\tilde{y}}_{t+1}}\right)$$ More generally, you can verify that the following telescoping sum: $$\log(y_t) = \log(y_0) + \sum_{i=1}^t \left(\log(y_i)-\log(y_{i-1})\right)$$ implies that the inverse transformation is: $$y_t = y_0 \exp{\left(\sum_{i=1}^t \tilde{y}_i\right)}$$ As a practical matter, the forecast::Arima function you are using will do all of this for you if you specify both the log-transform and the difference in the function call, instead of doing it by hand before calling it: fit <- Arima(dataset, order=c(1,1,2), lambda=0)
How can I need to de-transformation from diff(log(data),1)? Say your original series is $y_t$ and you define the transformed series: $$\tilde{y}_t = \log(y_t)-\log(y_{t-1})$$ You then forecast the value of $\tilde{y}_{t+1}$ by using your ARIMA(1,0,2) model, gi
54,990
Logistic Regression-Linear Features
I think the term they used may not be a standard term. I guess it means the decision boundary can be roughly approximated by a hyperplane (a line in 2D space). Here are two examples of a "linear feature" (A) or not (B) in 2-dimensional space. Here is my answer to a very related question, which may be helpful for you: Do all machine learning algorithms separate data linearly?
Logistic Regression-Linear Features
I think the term they used may not be standard term. I guess it means the decision boundary can be roughly approximated by a heyperplane. (a line in 2D space). Here are two examples for "linear featur
Logistic Regression-Linear Features I think the term they used may not be standard term. I guess it means the decision boundary can be roughly approximated by a heyperplane. (a line in 2D space). Here are two examples for "linear feature" (A) or not (B) in 2 dimensional sapce. Here is my answer to a very related question, which may be helpful for you. Do all machine learning algorithms separate data linearly?
Logistic Regression-Linear Features I think the term they used may not be standard term. I guess it means the decision boundary can be roughly approximated by a heyperplane. (a line in 2D space). Here are two examples for "linear featur
54,991
Logistic Regression-Linear Features
For another view of linear separability (assuming that's what linear features means) imagine you have a simple dataset with sex as a binary, categorical covariate and the result of some experiment as a binary outcome, with the following sex/outcome pairs: (m, 1), (m, 1), (m, 0), (f, 0), (f, 0), (f, 0). For binary response data (i.e. like this), it's common to use a logistic regression to model, among other things, the probability of outcome = 1 given sex. However, what if your input were sex = f? Then: $$P(\text{outcome} = 1 | \text{sex} = f) = 0$$ because we have no training examples of this ever happening. In this way, we say that the data is linearly separable per @hxd1011's image above. In fact, if you tried to fit a logistic regression you'd likely get an error because the MLE estimates tend to infinity (unless your computer software has a parameter which stops the algorithm, usually IRLS, from continuing to look for a minimum). If you get data like this and want to use a logistic regression, you can look into penalized logistic regression. Here is a nice write-up about it.
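To see this concretely in R, here is the toy dataset above pushed through glm(), together with one penalized alternative, the Firth-type fit from the logistf package (assuming that package is available):
d <- data.frame(sex = c("m", "m", "m", "f", "f", "f"),
                outcome = c(1, 1, 0, 0, 0, 0))
glm(outcome ~ sex, family = binomial, data = d)   # runs, but the sex coefficient blows up (quasi-separation)
library(logistf)                                  # Firth-penalized logistic regression
logistf(outcome ~ sex, data = d)                  # finite, penalized estimates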
Logistic Regression-Linear Features
For another view of linear separability (assuming that's what linear features means) imagine you have a simple dataset with sex as a binary, categorical covariate and results as a binary outcome of so
Logistic Regression-Linear Features For another view of linear separability (assuming that's what linear features means) imagine you have a simple dataset with sex as a binary, categorical covariate and results as a binary outcome of some experiment. sex outcome m 1 m 1 m 0 f 0 f 0 f 0 For binary response data (i.e. like this), it's common to use a logistic regression to model, among other things, the probability of outcome = 1 given sex. However, what if your input was sex = f. Then: $$P(\text{outcome} = 1 | \text{sex} = f) = 0$$ because we have no training examples of this ever happening. In this way, we say that the data is linearly separable per @hxd1011's image above. In fact, if you tried to fit a logistic regression you'd likely get an error because the MLE estimates tend to infinity (unless your computer software has a parameter which stops the algorithm, usually IRLS, from continuing to looks for a minimum). If you get data like this and want to use a logistic regression, you can look into penalized logistic regression. Here is nice write up about it.
Logistic Regression-Linear Features For another view of linear separability (assuming that's what linear features means) imagine you have a simple dataset with sex as a binary, categorical covariate and results as a binary outcome of so
54,992
Sufficiency of Sample Mean for Laplace Distribution
For one observation, the Laplace pdf is $$f_X(x) = \dfrac 1 {2b} \exp(-\dfrac {|x-\mu|} b)$$ For multiple iid observations, the pdf is $$f_{\boldsymbol X}(\boldsymbol x) = \dfrac 1 {(2b)^n} \exp(- \dfrac 1 b \sum_{i=1}^n {|x_i-\mu|})$$ The easiest way to determine what statistics are sufficient for $\boldsymbol X$ is to try to use the Factorization Theorem (https://en.wikipedia.org/wiki/Sufficient_statistic#Fisher.E2.80.93Neyman_factorization_theorem). However, if you start to work with this expression, you'll see that the absolute values in the sum make it impossible to do any simplification/factorization. To answer your first question, the sample mean is not a sufficient statistic (even if $b$ is known). However, if $\mu$ is known, then $\sum_{i=1}^n |x_i-\mu|$ is a sufficient statistic for $b$. But $\mu$ will almost never be known unless it's assumed to be zero. As for your second question, I don't believe there are any theorems which directly state that, under some conditions, inefficiency implies insufficiency or vice-versa. However, there are theorems which connect sufficient statistics to maximum likelihood estimators, and MLEs are asymptotically efficient under certain regularity conditions. So in that sense, I suppose you could view the insufficiency and inefficiency of the sample mean as related results.
Sufficiency of Sample Mean for Laplace Distribution
For on observation, the Laplace pdf is $$f_X(x) = \dfrac 1 {2b} \exp(-\dfrac {|x-\mu|} b)$$ For multiple iid observations, the pdf is $$f_\boldsymbol X(\boldsymbol x) = \dfrac 1 {(2b)^n} \exp(- \dfrac
Sufficiency of Sample Mean for Laplace Distribution For on observation, the Laplace pdf is $$f_X(x) = \dfrac 1 {2b} \exp(-\dfrac {|x-\mu|} b)$$ For multiple iid observations, the pdf is $$f_\boldsymbol X(\boldsymbol x) = \dfrac 1 {(2b)^n} \exp(- \dfrac 1 b \sum_{i=1}^n {|x_i-\mu|})$$ The easiest way to determine what statistics are sufficient for $\boldsymbol X$ is to try to use the Factorization Theorem (https://en.wikipedia.org/wiki/Sufficient_statistic#Fisher.E2.80.93Neyman_factorization_theorem). However, if you start to work with this expression, you'll see that the absolute values in the sum make it impossible to do any simplification/factorization. To answer your first question, the sample mean is not a sufficient statistic (event if $b$ is known). However, if $\mu$ is known, then $\sum_{i=1}^n |x_i-\mu|$ is a sufficient statistic for $b$. But $\mu$ will almost never be known unless it's assumed to be zero. As for your second question, I don't believe there are any theorems which directly state for some conditions, inefficiency implies insufficiency or vice-versa. However, there are theorems which connect sufficient statistics to maximum likelihood estimators and MLEs are asymptotically efficient under certain regularity conditions. So in that sense, I suppose you could view the insufficiency and inefficiency of the sample mean as related results.
Sufficiency of Sample Mean for Laplace Distribution For on observation, the Laplace pdf is $$f_X(x) = \dfrac 1 {2b} \exp(-\dfrac {|x-\mu|} b)$$ For multiple iid observations, the pdf is $$f_\boldsymbol X(\boldsymbol x) = \dfrac 1 {(2b)^n} \exp(- \dfrac
54,993
Sufficiency of Sample Mean for Laplace Distribution
The sample median is the maximum likelihood estimator of the mean if the scale is known, but it is not a sufficient statistic. I agree that the vector of order statistics is sufficient. This can be seen using the factorization theorem: examining the absolute values in the Radon-Nikodym derivative (the likelihood) shows that its dependence on the data, across all values of $\mu$, cannot be reduced below the order statistics.
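A small numerical check of the first claim (the sample is simulated, and the exponential-difference construction is just one convenient way to draw Laplace variates): minimizing $\sum_i |x_i - \mu|$ over $\mu$ recovers the sample median.
set.seed(42)
x <- rexp(9) - rexp(9)                        # Laplace(0, 1) draws as a difference of exponentials
neg_loglik <- function(mu) sum(abs(x - mu))   # negative log-likelihood in mu, up to constants, with b = 1
opt <- optimize(neg_loglik, interval = range(x))
c(mle = opt$minimum, median = median(x))      # agree up to the optimizer's tolerance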
Sufficiency of Sample Mean for Laplace Distribution
The sample median is the maximum likelihood estimator of the mean if the scale is known, but it is not the sufficient statistics. I agree that the order statistics is the sufficient statistics. This c
Sufficiency of Sample Mean for Laplace Distribution The sample median is the maximum likelihood estimator of the mean if the scale is known, but it is not the sufficient statistics. I agree that the order statistics is the sufficient statistics. This can be seen using the factorization theorem and examining the dependence of the RN derivative, upon the examination of the absolute values therein, on the data across all mu values.
Sufficiency of Sample Mean for Laplace Distribution The sample median is the maximum likelihood estimator of the mean if the scale is known, but it is not the sufficient statistics. I agree that the order statistics is the sufficient statistics. This c
54,994
Is it possible to use two offsets?
An offset is generally just a coefficient set to a specific value. To get more than one offset, in general you just need to combine the different variables in a way that preserves that fixed value. In a Poisson equation, if you set $Z$ as the offset (or exposure, as it is sometimes called): $$\log(\mathbb{E}[Y]) = \beta_0 + \beta_1X + 1\cdot Z$$ and you exponentiate both sides, you then have: $$\mathbb{E}[Y] = \text{exp}(\beta_0 + \beta_1X) \cdot \text{exp}(Z)$$ You can then interpret this as a rate per some unit $t$ if $Z = \text{log}(t)$: $$\mathbb{E}[Y]/\text{exp}(\text{log}(t)) = \mathbb{E}[Y]/t = \text{exp}(\beta_0 + \beta_1X)$$ To get, say, two offsets we then start with: $$\log(\mathbb{E}[Y]) = \beta_0 + \beta_1X + 1\cdot Z_1 + 1\cdot Z_2$$ which we can exponentiate and regroup as: $$\mathbb{E}[Y] = \text{exp}(\beta_0 + \beta_1X) \cdot [\text{exp}(Z_1) \cdot \text{exp}(Z_2)]$$ Hopefully you see where I am going with this at this point. So if $Z_1 = \log(t_1)$ and $Z_2 = \log(t_2)$ we then have: $$\mathbb{E}[Y]/(t_1 \cdot t_2) = \exp(\beta_0 + \beta_1X)$$ There are plenty of situations in which to do this. Say you have a number of people and different exposure times for individuals, so you want the rate to be per # of people * exposure time. So to get two offsets in any software you simply need to add $\log(t_1) + \log(t_2)$, or equivalently $\log(t_1 \cdot t_2)$, and then specify that new variable as the offset. So in R you would just have offset(log(hours*no.males)) in your example. In other software you may need to calculate $\log(t_1 \cdot t_2)$ as one new variable and specify that (I think in Stata and SPSS you would need to do it that way at least). Note, for your particular example, that setting an exposure term gives a restricted model compared to letting the effect of, say, log(hours) be something other than one. So I would suggest testing this via a likelihood ratio test, something like:
Mod1 <- glmer.nb(no.aggression ~ log(no.males) + log(no.females) + (1|id_target) + offset(log(hours)), data=x)
Mod2 <- glmer.nb(no.aggression ~ log(no.females) + (1|id_target) + offset(log(hours*no.males)), data=x)
anova(Mod1, Mod2)
where Mod2 is a more specific case of Mod1. You could even have offset(log(hours*no.males*no.females)). This would make it a rate per all potential pairwise interactions (multiplied by hours). That does not sound inconsistent with what you have described, but currently you only have the linear no.males and no.females in the equation.
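Here is a small simulated sketch (hypothetical variable names echoing the example) confirming that supplying two log offsets is the same as one offset on the product of the exposures:
set.seed(123)
n <- 200
hours    <- runif(n, 1, 10)
no.males <- sample(1:5, n, replace = TRUE)
x1       <- rnorm(n)
y        <- rpois(n, exp(0.2 + 0.5 * x1) * hours * no.males)
f1 <- glm(y ~ x1 + offset(log(hours)) + offset(log(no.males)), family = poisson)
f2 <- glm(y ~ x1 + offset(log(hours * no.males)), family = poisson)
all.equal(coef(f1), coef(f2))   # TRUE: the two specifications are identical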
Is it possible to use two offsets?
An offset is generally just a coefficient set to a specific value. To get more than one offset, in general you just need to combine the different variables in a way that is consistent to get that fixe
Is it possible to use two offsets? An offset is generally just a coefficient set to a specific value. To get more than one offset, in general you just need to combine the different variables in a way that is consistent to get that fixed value. In a Poisson equation if you set $Z$ as the offset (or exposure its sometimes called): $$\log(\mathbb{E}[Y]) = \beta_0 + \beta_1X + 1\cdot Z$$ And you exponentiate both sides you then have: $$\mathbb{E}[Y] = \text{exp}(\beta_0 + \beta_1X) \cdot \text{exp}(Z)$$ You can then interpret this as a rate per some unit $t$ if $Z = \text{log}(t)$: $$\mathbb{E}[Y]/\text{exp}(\text{log}(t)) = \mathbb{E}[Y]/t = \text{exp}(\beta_0 + \beta_1X)$$ To get say two offsets we then start with: $$\log(\mathbb{E}[Y]) = \beta_0 + \beta_1X + 1\cdot Z_1 + 1\cdot Z_2$$ Which we can exponentiate and regroup to be: $$\mathbb{E}[Y] = \text{exp}(\beta_0 + \beta_1X) \cdot [\text{exp}(Z_1) \cdot \text{exp}(Z_2)]$$ Hopefully you see where I am going with this at this point. So if $Z_1 = \log(t_1)$ and $Z_2 = \log(t_2)$ we then have: $$\mathbb{E}[Y]/(t_1 \cdot t_2) = \exp(\beta_0 + \beta_1X)$$ There are plenty of times to do this. Say you have people and then you have different exposure times for individuals, so you want the rate to be # of people*exposure time. So to get two offsets in any software you simply need to add $\log(t_1) + \log(t_2)$, or equivalently $\log(t_1 \cdot t_2)$, and then specify that new variable as the offset. So in R you would just have offset(log(hours*no.males)) in your example. In other software you may need to calculate $\log(t_1 \cdot t_2)$ as one new variable and specify that (I think in Stata and SPSS you would need to do it that way at least). Note for your particular example, that when setting an exposure term it is a restricted model compared to letting the effect of say log(hours) be something other than one. So I would suggest testing this via a likelihood ratio test, so something like: Mod1 <- glmer.nb(no.aggression ~ log(no.males) + log(no.females) + (1|id_target) + offset(log(hours)), data=x) Mod2 <- glmer.nb(no.aggression ~ log(no.females) + (1|id_target) + offset(log(hours*no.males)), data=x) anova(Mod1,Mod2) Where Mod2 is a more specific case of Mod1. You could even have offset(log(hours*no.males*no.females)). This would make it a rate per all potential pairwise interactions (multiplied by hours). That does not sound too potentially inconsistent with what you have described, but currently you only have the linear no.males and no.females in the equation.
Is it possible to use two offsets? An offset is generally just a coefficient set to a specific value. To get more than one offset, in general you just need to combine the different variables in a way that is consistent to get that fixe
54,995
Can Dickey-Fuller be used if the residuals are non-normal?
Yes, that is not a necessary condition. Recall that all we know about the null distribution of the Dickey-Fuller test is its asymptotic representation (although the literature of course considers many refinements). As is often the case, we do not need distributional assumptions on the error terms when considering asymptotic distributions, thanks to (in this case: functional) central limit theory arguments. Here is a screenshot from Phillips (Biometrika 1987) stating assumptions on the errors - as you see, these are way broader than requiring normality. That said, the asymptotic distribution does not have a closed-form solution, so that you need to simulate from the distribution to get critical values (existing infinite series representations are not practical either for generating critical values). To perform that simulation, you must draw errors from some distribution, and the conventional choice is to simulate normal errors. But, as Phillips shows, if you were to draw the errors from some other distribution satisfying the above requirements, you would asymptotically get the same distribution. You could replace the line with rnorm(T) with some such distribution in my answer here to verify. That said, the finite-sample distribution will of course be affected by the error distribution, so that the error distribution will play a role in shorter time series. (Indeed, I did replace rnorm(T) with rt(T, df=8), and differences are still relevant for $T$ as large as 20,000.)
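To illustrate, here is a rough simulation sketch (sample length, degrees of freedom, and replication count are arbitrary choices) comparing the 5% quantile of the Dickey-Fuller t-statistic under the null with normal and with heavy-tailed t(5) errors:
set.seed(1)
df_tau <- function(T, rerr) {
  y <- cumsum(rerr(T))                 # random walk under the null
  dy <- diff(y); ylag <- y[-T]
  fit <- lm(dy ~ 0 + ylag)             # Dickey-Fuller regression without a constant
  coef(summary(fit))[1, "t value"]
}
reps <- 2000; T <- 500
tau_norm <- replicate(reps, df_tau(T, rnorm))
tau_t5   <- replicate(reps, df_tau(T, function(n) rt(n, df = 5)))
c(quantile(tau_norm, 0.05), quantile(tau_t5, 0.05))   # both roughly near the usual -1.95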
Can Dickey-Fuller be used if the residuals are non-normal?
Yes, that is not a necessary condition. Recall that all we know about the null distribution of the Dickey-Fuller test is its asymptotic representation (although the literature of course considers many
Can Dickey-Fuller be used if the residuals are non-normal? Yes, that is not a necessary condition. Recall that all we know about the null distribution of the Dickey-Fuller test is its asymptotic representation (although the literature of course considers many refinements). As is often the case, we do not need distributional assumptions on the error terms when considering asymptotic distributions thanks to (in this case: functional) central limit theory arguments. Here is a screenshot from Phillips (Biometrika 1987) stating assumptions on the errors - as you see, these are way broader than requiring normality. That said, the asymptotic distribution does not have a closed-form solution, so that you need to simulate from the distribution to get critical values (existing infinite series representations are not practical either to generate critical vales). To perform that simulation, you must draw erros from some distribution, and the conventional choice is to simulate normal errors. But, as Phillips shows, if you were to draw the errors from some other distribution satisfying the above requirements, you would asymptotically get the same distribution. You could replace the line with rnorm(T) with some such distribution in my answer here to verify. That said, the finite-sample distribution will of course be affected by the error distribution, so that the error distribution will play a role in shorter time series. (Indeed, I did replace rnorm(T) with rt(T, df=8), and differences are still relevant for $T$ as large as 20.000.)
Can Dickey-Fuller be used if the residuals are non-normal? Yes, that is not a necessary condition. Recall that all we know about the null distribution of the Dickey-Fuller test is its asymptotic representation (although the literature of course considers many
54,996
Can Dickey-Fuller be used if the residuals are non-normal?
Yes, the innovations need not be normal, not at all. The underlying mathematical fact that gives rise to the asymptotic null distribution of the DF statistic is the Functional Central Limit Theorem, or Invariance Principle. The FCLT, if you'd like, is an infinite dimensional generalization of the CLT. The CLT holds for dependent, non-normal sequences, and a similar statement can be made about the FCLT. (Conversely, the FCLT implies the CLT, as the finite dimensional distribution of the Brownian motion is normal. So any general condition that gives you an FCLT immediately implies a CLT.) Functional Central Limit Theorem Given a sequence of random variables $u_i$, $i = 1, 2, \cdots$, consider the sequence of random functions $\phi_n$, $n = 1, 2, \cdots$, defined by $$ \phi_n(t) = \frac{1}{\sqrt{n}}\sum_{i = 1}^{[nt]} u_i, \; t \in [0,1]. $$ Each $\phi_n$ is a stochastic process on $[0,1]$ with sample paths in the Skorohod space $D[0,1]$. The generic form of the FCLT provides sufficient conditions under which $\{ \phi_n \}$ converges weakly on $D[0,1]$ to (a scalar multiple of) the standard Brownian motion $B$. Sufficient conditions, which are more general than those from Phillips and Perron (1987) quoted above, were known prior, if not in the time series literature. See, for example, McLeish (1975): The strong mixing condition (iv) of Phillips and Perron implies McLeish's mixingale condition under some conditions. Condition (ii) of Phillips and Perron requiring a uniform bound on $2 + \epsilon$ moments of $\{ u_i\}$ is relaxed in McLeish to the uniform integrability of $\{ u_i^2 \}$. Condition (iii) of Phillips and Perron is actually not quite correct/sufficient as intended. For a further milestone in the time series literature in this direction, see Elliott, Rothenberg, and Stock (1996), where they apply a Neyman-Pearson-like approach to benchmark the asymptotic power envelope of unit root tests. The normality assumption is long gone by then. DF Statistic It follows immediately from the FCLT and the Continuous Mapping Theorem that the DF $\tau$-statistic has the asymptotic distribution $$ \tau \stackrel{d}{\rightarrow} \frac{\frac12 (B(1)^2 - 1)}{ \sqrt{ \int_0^1 B(t)^2 dt} }. $$ The 5th percentile of this distribution is the critical value for a DF test with nominal size 5%. Simulating $\tau$ with an i.i.d. normal error term and another error term that follows, say, a time series specification would lead to the same distribution as the sample size gets large. Comment I am going to disagree with @mlofton's comment that "Generally speaking, no one would ever use Phillips result because simulating analytically from the derived distribution is not terribly practical. People generally use the DF tables and those have nothing to do with asymptotic representation. They use the normality of the error term and allow the practitioner to obtain DF statistics for sample sizes as low as 20..." It's a major contribution of Phillips to point out that an "assumption free" asymptotic distribution is possible. This is one of the reasons, along with contemporary developments in economic theory, that convinced empirical practitioners (in particular, macro-econometricians) that unit root tests belonged to their everyday toolbox. A statistic (more specifically, a null distribution) that requires normality of the data generating process is not useful at all---e.g. suppose the $t$-statistic is only valid if data is i.i.d. normal. That was the limitation of the early unit root literature.
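One can also simulate the limiting functional directly by discretizing the Brownian motion; the step count and number of replications below are arbitrary, and the 5% quantile of the draws should sit near the familiar Dickey-Fuller critical value of about -1.95 (no-constant case):
set.seed(2)
sim_limit <- function(nsteps = 1000) {
  B <- cumsum(rnorm(nsteps)) / sqrt(nsteps)     # approximate standard Brownian motion on [0, 1]
  0.5 * (B[nsteps]^2 - 1) / sqrt(mean(B^2))     # the integral is approximated by an average
}
draws <- replicate(5000, sim_limit())
quantile(draws, 0.05)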
Can Dickey-Fuller be used if the residuals are non-normal?
Yes, the innovations need not be normal, not at all. The underlying mathematical fact that gives rise to the asymptotic null distribution of the DF statistic is Functional Central Limit Theorem, or In
Can Dickey-Fuller be used if the residuals are non-normal? Yes, the innovations need not be normal, not at all. The underlying mathematical fact that gives rise to the asymptotic null distribution of the DF statistic is Functional Central Limit Theorem, or Invariance Principle. The FCLT, if you'd like, is an infinite dimensional generalization of the CLT. The CLT holds for dependent, non-normal sequences, and similar statement can be made about the FCLT. (Conversely, FCLT implies CLT, as the finite dimensional distribution of the Brownian motion is normal. So any general condition that gives you a FCLT immediately implies a CLT.) Functional Central Limit Theorem Given a sequence of random variables $u_i$, $i = 1, 2, \cdots$. Consider the sequence of random functions $\phi_n$, $n = 1, 2, \cdots$, defined by $$ \phi_n(t) = \frac{1}{\sqrt{n}}\sum_{i = 1}^{[nt]} u_i, \; t \in [0,1]. $$ Each $\phi_n$ is a stochastic process on $[0,1]$ with sample paths in the Skorohod space $D[0,1]$. The generic form of FCLT provides sufficient conditions under which $\{ \phi_n \}$ converges weakly on $D[0,1]$ to (a scalar multiple of) the standard Brownian motion $B$. Sufficient conditions, which are more general than those from Phillips and Perron (1987) quoted above, were known prior, if not in the time series literature. See, for example, McLiesh (1975): The strong mixing condition (iv) of Phillips and Perron implies McLiesh's mixingale condition under some conditions. Condition (ii) of Phillips and Perron requiring uniform bound on $2 + \epsilon$ moments of $\{ u_i\}$ is relaxed in McLeish to the uniform integrability of $\{ u_i^2 \}$. Condition (iii) of Phillips and Perron is actually not quite correct/sufficient as intended. For a further milestone in the time series literature in this direction, see Elliott, Rothenberg, and Stock (1996), where they apply a Neyman-Pearson-like approach to benchmark the asymptotic power envelope of unit root tests. The normality assumption is long gone by then. DF Statistic It follows immediately from the FCLT and the Continuous Mapping Theorem that the DF $\tau$-statistic $\tau$ has the asymptotic distribution $$ \tau \stackrel{d}{\rightarrow} \frac{\frac12 (B(1)^2 - 1)}{ \sqrt{ \int_0^1 B(t)^2 dt} }. $$ The 5th-percentile of this distribution is the critical value for DF test with nominal size of 5%. Simulating $\tau$ with an i.i.d. normal error term and another error term that follows, say, a time series specification would lead to the same distribution as sample size gets large. Comment I am going to disagree with @mlofton's comment that Generally speaking, no one would ever use Phillips result because simulating analytically from the derived distribution is not terribly practical. People generally use the DF tables and those have nothing to do with asymptotic representation. They use the normality of the error term and allow the practitioner to obtain DF statistics for sample sizes as low as 20... It's a major contribution of Phillips to point out that an "assumption free" asymptotic distribution is possible. This is one of the reasons, along with contemporary developments in economic theory, that convinced empirical practitioners (in particular, macro-econometricians) that unit root tests belonged to their everyday toolbox. A statistic (more specifically, a null distribution) that requires normality of the data generating process is not useful at all---e.g. suppose the $t$-statistic is only valid if data is i.i.d. normal. 
That was the limitation of the early unit root literature.
Can Dickey-Fuller be used if the residuals are non-normal? Yes, the innovations need not be normal, not at all. The underlying mathematical fact that gives rise to the asymptotic null distribution of the DF statistic is Functional Central Limit Theorem, or In
54,997
Spherical symmetry: a generalization of exchangeability?
Spherical symmetry is a special case of exchangeability. The invariance property for spherically symmetric sequences that you describe is indeed the standard definition of spherical symmetry e.g. see On Ali's Characterization of the Spherical Normal Distribution, Steven F. Arnold and James Lynch, Journal of the Royal Statistical Society. Series B (Methodological) Vol. 44, No. 1 (1982), pp. 49-51. So, spherically symmetric sequences must be invariant to all orthogonal transformations. However, exchangeable sequences need only be invariant to a subclass of these transformations, namely those transformations representing permutations of the coordinate axes. It follows that all spherically symmetric sequences are exchangeable (and it is this that Kingman uses to prove the theorem in the paper you cite)... but the converse is false. Example of an exchangeable but non-spherically symmetric sequence. Let $X_1$,$X_2$ be iid standard normal. Let $U$ be independent of the $X_i$, taking value $0$ with probability $0.5$ and taking value $100$ with probability $0.5$. Then the sequence $Y_i:= X_i+U$ is exchangeable but not spherically symmetric. Indeed, most of the mass of the joint distribution of $(Y_1,Y_2)$ surrounds the points $(0,0)$ and $(100,100)$. This is clearly not invariant to rotations.
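A quick simulation of the counterexample (sample size and rotation angle chosen arbitrarily) makes the lack of rotation invariance visible: the original coordinates each have a large spread driven by $U$, while one coordinate of the 45-degree rotation nearly collapses.
set.seed(3)
n <- 10000
U <- 100 * rbinom(n, 1, 0.5)
Y <- cbind(rnorm(n) + U, rnorm(n) + U)   # exchangeable: swapping the two columns changes nothing
theta <- pi / 4
R <- matrix(c(cos(theta), sin(theta), -sin(theta), cos(theta)), 2, 2)
Z <- Y %*% R                             # an orthogonal transformation (rotation by 45 degrees)
c(sd(Y[, 1]), sd(Y[, 2]), sd(Z[, 1]), sd(Z[, 2]))   # roughly 50, 50, 71, 1 -- not rotation invariant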
Spherical symmetry: a generalization of exchangeability?
Spherical symmetry is a special case of exchangeability. The invariance property for spherically symmetric sequences that you describe is indeed the standard definition of spherical symmetry e.g. see
Spherical symmetry: a generalization of exchangeability? Spherical symmetry is a special case of exchangeability. The invariance property for spherically symmetric sequences that you describe is indeed the standard definition of spherical symmetry e.g. see On Ali's Characterization of the Spherical Normal Distribution, Steven F. Arnold and James Lynch, Journal of the Royal Statistical Society. Series B (Methodological) Vol. 44, No. 1 (1982), pp. 49-51. So, spherically symmetric sequences must be invariant to all orthogonal transformations. However, exchangeable sequences need only be invariant to a subclass of these transformations, namely those transformations representing permutations of the coordinate axes. It follows that all spherically symmetric sequences are exchangeable (and it is this that Kingman uses to prove the theorem in the paper you cite)... but the converse is false. Example of an exchangeable but non-spherically symmetric sequence. Let $X_1$,$X_2$ be iid standard normal. Let $U$ be independent of the $X_i$, taking value $0$ with probability $0.5$ and taking value $100$ with probability $0.5$. Then the sequence $Y_i:= X_i+U$ is exchangeable but not spherically symmetric. Indeed, most of the mass of the joint distribution of $(Y_1,Y_2)$ surrounds the points $(0,0)$ and $(100,100)$. This is clearly not invariant to rotations.
Spherical symmetry: a generalization of exchangeability? Spherical symmetry is a special case of exchangeability. The invariance property for spherically symmetric sequences that you describe is indeed the standard definition of spherical symmetry e.g. see
54,998
What is the relationship between classification and regression in Neural Network?
The difference between classification and regression is that classification outputs predicted probabilities for a class (or classes) while regression outputs a numeric value. We can make a neural network output a value simply by changing the activation function in the final layer: instead of an activation such as sigmoid, relu, or tanh, we use the identity function $f(x) = x$, so during backpropagation its derivative is simply $f'(x) = 1$. For illustration, here are the forward and backward passes for a single-hidden-layer regression network.
Forward pass: inputs $\to x$; weights input to hidden $\to w_1$; weights hidden to output $\to w_2$; then $z_2 = w_1 x$, $a_2 = \text{sigmoid}(z_2)$, $z_3 = w_2 a_2$, $a_3 = f(z_3)$.
Backward pass: targets $\to Y$; with $f(x) = x$ we have $f'(x) = 1$, and $\text{sigmoid}'(x) = \text{sigmoid}(x)(1-\text{sigmoid}(x))$; then $d_3 = Y - a_3$, $d_2 = w_2 d_3$, and the weight gradients are $\Delta w_2 = a_2 d_3$ and $\Delta w_1 = d_2 \, a_2' \, x$.
Here $d_3$ and $d_2$ are the layer-wise errors. Please make sure the dimensions are properly handled when implementing these equations in code.
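As a sketch only (the tiny network, data values, and learning rate below are made up for illustration), these equations can be run numerically in R with an identity output unit and squared-error loss:
sigmoid <- function(z) 1 / (1 + exp(-z))
set.seed(1)
x  <- matrix(c(0.5, -1.2), nrow = 2)   # two inputs, one training example
Y  <- 0.7                              # numeric regression target
w1 <- matrix(rnorm(3 * 2), nrow = 3)   # input-to-hidden weights (3 hidden units)
w2 <- matrix(rnorm(3), nrow = 1)       # hidden-to-output weights (1 linear output)
lr <- 0.1
for (i in 1:1000) {
  z2 <- w1 %*% x;  a2 <- sigmoid(z2)   # forward pass
  z3 <- w2 %*% a2; a3 <- z3            # identity activation, f(x) = x
  d3 <- Y - a3                         # output error, since f'(z3) = 1
  d2 <- (t(w2) %*% d3) * a2 * (1 - a2) # hidden error
  w2 <- w2 + lr * d3 %*% t(a2)         # gradient steps on the squared error
  w1 <- w1 + lr * d2 %*% t(x)
}
a3   # close to the target 0.7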
What is the relationship between classification and regression in Neural Network?
The difference between a classification and regression is that a classification outputs a prediction probability for class/classes and regression provides a value. We can make a neural network to outp
What is the relationship between classification and regression in Neural Network? The difference between a classification and regression is that a classification outputs a prediction probability for class/classes and regression provides a value. We can make a neural network to output a value by simply changing the activation function in the final layer to output the values. By changing the activation function such as sigmoid,relu,tanh,etc. we can use a function ($f(x) = x$). So while back propagation simply derive $f(x)$ For illustration I will provide you the forward and back ward pass for a single layer neural network regression below: forward pass: $inputs -> x $ $weights input to hidden -> w1$ $weights hidden to output ->w2$ $z2 = w1*x$ $a2 = sigmoid(z2)$ $z3 = w2*a2$ $a3 = f(z3)$ backward pass: $targets -> Y$ $f(x) = x -> f'(x) = 1$ $sigmoid'(x) -> sigmoid(x)(1-sigmoid(x))$ $d3 = Y - a3$ $d2 = w2*d3$ $w2' = a2*d3$ $w1' = d2*a2'*x$ Here the d3 and d2 are layer wise errors. Please make sure the dimensions are properly addressed while implementing in code for the above equations.
What is the relationship between classification and regression in Neural Network? The difference between a classification and regression is that a classification outputs a prediction probability for class/classes and regression provides a value. We can make a neural network to outp
54,999
Estimating error from a 1% sample
Clearly a good (unbiased) estimate of the number of people in the population with the condition is $X/(1\%)=100X$. $X$ has a Binomial distribution--but we don't know its parameters, because we lack information on the population size (except to know it is at least $800/(1\%)=800\times100=80000$). If we assume the proportion of people in the population with the condition is small, then to an excellent approximation $X$ has a Poisson distribution. A good estimate of the sampling standard deviation of $X$ is its square root. If that unknown proportion is large, then the sampling standard deviation of $X$ will be smaller than its square root: so let's conservatively use that square root to make sure we produce a confidence interval that isn't overly narrow. Again because $X$ is large, its sampling distribution will also be approximately Normal. Thus, to find a two-sided confidence interval of confidence $100(1-\alpha)\%$, find the $100(1-\alpha/2)$ percentile of a standard normal distribution, $Z_{1-\alpha/2}$, and form the interval $$CI = [100(X -Z_{1-\alpha/2}\sqrt{X}), 100(X + Z_{1-\alpha/2}\sqrt{X})].$$ This has at least a $100(1-\alpha)\%$ chance of covering the true value. With $X=800$ and $\alpha=0.05$ (for a $95\%$ confidence interval), $Z_{1-\alpha/2} = 1.96$ and the interval is $$CI = [74456, 85544].$$ For a little more insight into this result, we might ask the computer to plot the limits of the CI (using a Normal approximation to a Binomial distribution) as a function of the population size. The smallest possible size is $80000$, so let's indicate any potential population size as a multiple of this minimum. The conservative 95% limits are plotted as horizontal gray lines while the Binomial limits are plotted as red curves. You can see that the Binomial intervals approach the limits quickly: unless most people in the population have the condition, the conservative interval will not be too wide. A similar plot could be produced for sampling without replacement: the intervals would be narrower at the left, but by the time the multiple reached $10$ or larger, there would be little difference between the two plots.
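The conservative interval above can be reproduced with a couple of lines of R:
X <- 800
z <- qnorm(0.975)                        # approximately 1.96
round(100 * (X + c(-1, 1) * z * sqrt(X)))
# 74456 85544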
Estimating error from a 1% sample
Clearly a good (unbiased) estimate of the number of people in the population with the condition is $X/(1\%)=100X$. $X$ has a Binomial distribution--but we don't know its parameters, because we lack
Estimating error from a 1% sample Clearly a good (unbiased) estimate of the number of people in the population with the condition is $X/(1\%)=100X$. $X$ has a Binomial distribution--but we don't know its parameters, because we lack information on the population size (except to know it is at least $800/(1\%)=800\times100=80000$). If we assume the proportion of people in the population with the condition is small, then to an excellent approximation $X$ has a Poisson distribution. A good estimate of the sampling standard deviation of $X$ is its square root. If that unknown proportion is large, then the sampling standard deviation of $X$ will be smaller than its square root: so let's conservatively use that square root to make sure we produce a confidence interval that isn't overly narrow. Again because $X$ is large, its sampling distribution will also be approximately Normal. Thus, to find a two-sided confidence interval of confidence $100-100\alpha\%$, find the upper $100-100\alpha\%$ percentile of a standard normal distribution as $Z_{1-\alpha/2}$ and form the interval $$CI = [100(X -Z_{1-\alpha/2}\sqrt{X}), 100(X + Z_{1-\alpha/2}\sqrt{X})].$$ This has at least a $100-100\alpha\%$ chance of covering the true value. With $X=800$ and $\alpha=0.05$ (for a $95\%$ confidence interval), $Z_{1-\alpha/2} = 1.96$ and the interval is $$CI = [74456, 85544].$$ For a little more insight into this result, we might ask the computer to plot the limits of the CI (using a Normal approximation to a Binomial distribution) as a function of the population size. The smallest possible size is $80000$, so let's indicate any potential population size as a multiple of this minimum. The conservative 95% limits are plotted as horizontal gray lines while the Binomial limits are plotted as red curves. You can see that the Binomial intervals approach the limits quickly: unless most people in the population have the condition, the conservative interval will not be too wide. A similar plot could be produced for sampling without replacement: the intervals would be narrower at the left, but by the time the multiple reached $10$ or larger, there would be little difference between the two plots.
Estimating error from a 1% sample Clearly a good (unbiased) estimate of the number of people in the population with the condition is $X/(1\%)=100X$. $X$ has a Binomial distribution--but we don't know its parameters, because we lack
55,000
Estimating error from a 1% sample
Suppose we have a population of $N$ Bernoulli trials, but $N$ is unknown. Suppose $α ∈ [0, 1]$ (.01 in the example) is known and we draw a simple random sample of size $αN$ (unknown). We observe $X$ successes (known) in the $αN$ trials. We want to estimate $K$, the number of successes among the $N$ trials in the population. Since the sampling is without replacement, $X$ follows a hypergeometric distribution with population size $N$, sample size $αN$, and number of successes $K$. You could estimate $K$ in a Bayesian fashion, putting priors on $N$ and $K$, or you could find the maximum likelihood estimate of $K$ (which will be approximately $X/α$, as you'd expect). Of course, there's no way to compute the error between the estimate and the true value without knowing the true value. You could get a sense of the uncertainty in the estimate using a credible interval (in the Bayesian case) or a bootstrapped confidence interval (in the MLE case).
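For the MLE route, here is a sketch; the population size N is unknown in the question, so the value used below is purely hypothetical, and the sample size and grid simply follow from the assumed 1% sampling fraction.
N <- 200000
samp <- round(0.01 * N)     # the 1% sample size
X <- 800
K_grid <- seq(50000, 120000, by = 100)
loglik <- dhyper(X, m = K_grid, n = N - K_grid, k = samp, log = TRUE)
K_grid[which.max(loglik)]   # close to X / 0.01 = 80000, as expected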
Estimating error from a 1% sample
Suppose we have a population of $N$ Bernoulli trials, but $N$ is unknown. Suppose $α ∈ [0, 1]$ (.01 in the example) is known and we draw a simple random sample of size $αN$ (unknown). We observe $X$ s
Estimating error from a 1% sample Suppose we have a population of $N$ Bernoulli trials, but $N$ is unknown. Suppose $α ∈ [0, 1]$ (.01 in the example) is known and we draw a simple random sample of size $αN$ (unknown). We observe $X$ successes (known) in the $αN$ trials. We want to estimate $K$, the number of successes among the $N$ trials in the population. Since the sampling is without replacement, $X$ follows a hypergeometric distribution with population size $N$, sample size $αN$, and number of successes $K$. You could estimate $K$ in a Bayesian fashion, putting priors on $N$ and $K$, or you could find the maximum likelihood estimate of $K$ (which will probably be $X/α$, as you'd expect). Of course, there's no way to compute the error between the estimate and the true value without knowing the true value. You could get a sense of the uncertainity in the estimate using a credible interval (in the Bayesian case) or a bootstrapped confidence interval (in the MLE case).
Estimating error from a 1% sample Suppose we have a population of $N$ Bernoulli trials, but $N$ is unknown. Suppose $α ∈ [0, 1]$ (.01 in the example) is known and we draw a simple random sample of size $αN$ (unknown). We observe $X$ s